Columns (string length min–max): id (10–10), title (7–231), abstract (3–2.43k), authors (5–21.5k), published_date (20–20), link (33–34), markdown (133–1.92M)
2306.13466
The Influence of Forebody Topology on Aerodynamic Drag and Aeroacoustics Characteristics of Squareback Vehicles using CAA
This study numerically investigates the aerodynamic forces and flow-induced noise generated by SAE-T4, Ahmed, and Hybrid forebody shapes with a squareback vehicle configuration using SBES-FW-H. The results show significant differences in lift coefficients and the presence of a horseshoe vortex at the mirror, with smaller eddies that interact with A-pillar vortices, resulting in pronounced pressure fluctuations and noise generation on the side window for the three configurations. Surprisingly, negligible differences in aerodynamic drag and radiated sound are predicted despite these effects.
H. Viswanathan, K. K. Chode
2023-06-23T12:11:31Z
http://arxiv.org/abs/2306.13466v1
The Influence of Forebody Topology on Aerodynamic Drag and Aeroacoustics Characteristics of Squareback Vehicles using CAA ###### Abstract This study numerically investigates the aerodynamic forces and flow-induced noise generated by SAE-T4, Ahmed, and Hybrid forebody shapes with a squareback vehicle configuration using SBES-FW-H. The results show significant differences in lift coefficients and the presence of a horseshoe vortex at the mirror, with smaller eddies that interact with A-pillar vortices, resulting in pronounced pressure fluctuations and noise generation on the side window for the three configurations. Surprisingly, negligible differences in aerodynamic drag and radiated sound are predicted despite these effects. _Keywords:_ Computational Aeroacoustics (CAA), Drag, Squareback Vehicles, Stress Blended Eddy Simulation (SBES), Ffowcs Williams-Hawkings (FW-H). ## 1 Introduction The automotive industry continues to transform the aerodynamic and acoustics environment to alleviate discomfort, enhance communication systems, reduce vehicular emission noise such as the pass-by noise signature, influence sound barriers on urban roads, and improve overall safety [1,2]. Vehicle noise sources can primarily be classified into three categories: aerodynamic noise, mechanical noise, and tire-road noise, with additional secondary classifications, such as slosh noise, occasionally gaining prominence [3-5]. In the current era of vehicle electrification, aerodynamic noise holds primary significance during cruising as wind-induced noise increases with the vehicle's speed and supersedes tire noise at around 100 km/h. Therefore, predicting and mitigating aerodynamic noise is critical to improving the driving experience and road safety. Despite its potential significance, the impact of the forebody on drag and its implications for sound generation and radiation from vehicles have been largely overlooked. To address this gap in the literature, this study investigates the impact of various forebody shapes on overall vehicle drag and its influence on the sound generated and radiated using a computational aeroacoustics (CAA) method. The findings of this study expand the current knowledge base on the relationship between the overall drag of different forebody shapes and the aeroacoustic performance of generic squareback vehicles. ## 2 System Description In this study, three squareback vehicles with different forebody configurations were investigated, including the SAE-Type 4, Ahmed Body, and a Hybrid Body that combines features of the former two, as depicted in Fig. 1. To ensure comparability among the vehicle configurations studied, their height (h), length (L), reference area (A), generic mirror (represented by a square cylinder of side length d), and side window positions were kept identical. Each model was subjected to a freestream velocity of U\({}_{\infty}=27.78\) m/s, corresponding to a Reynolds number of Re\({}_{\text{L}}=7\times 10^{6}\) based on the length of the body. The simulation was set up to closely replicate the experimental conditions for the SAE-Type 4 configuration described in Refs. [6, 7]. In each case, the models were placed in a computational domain of 12 L x 3.6 L x 3.6 L based on ERCOFTAC guidelines adapted from previous studies [8, 9]. Additionally, the positions of the mirror and side window were matched to mitigate any extraneous variables that could confound the aerodynamic assessment and resulting sound generation and radiation characteristics of each configuration. 
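As a quick consistency check on the stated flow conditions, the short sketch below back-solves the body length implied by Re\({}_{\text{L}}\) and U\({}_{\infty}\); the kinematic viscosity of air used here is an assumed standard value, not a figure quoted in the text.

```python
# Consistency check of the quoted flow conditions (nu for air at ~20 C is assumed).
U_inf = 27.78    # freestream velocity [m/s], from the text
Re_L = 7e6       # Reynolds number based on body length, from the text
nu_air = 1.5e-5  # kinematic viscosity of air [m^2/s] -- assumed, not stated

L_body = Re_L * nu_air / U_inf  # implied body length [m]
print(f"Implied body length L ~ {L_body:.2f} m")  # ~3.8 m, a plausible vehicle length
```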
The near-field flow was computed using the Stress Blended Eddy Simulation (SBES), which offers advantages over DDES and IDDES approaches, including stronger RANS layer shielding and a rapid, more definitive transition from RANS to LES models, even with unstructured meshes. In the current study, a grid with near-wall resolution of (\(\Delta\text{x}^{+}=70\) - \(980\); \(\Delta\text{y}^{+}<1\), \(\Delta\text{z}^{+}=70\) - \(980\)) was developed in line with the previous study [10]. To predict the far-field noise, the Ffowcs Williams-Hawkings (FW-H) acoustic analogy was employed. All numerical simulations were carried out using ANSYS-Fluent (Version 2020) at Sheffield Hallam University's High-Performance Computing Cluster. ## 3 Results and discussion Fig. 2a compares the hydrodynamic pressure fluctuations (HPF) predicted on the side window for the various forebody cases with the experimental data of Nusser et al. (2021) for the SAE-Type 4, at a location closest to the mirror position. Whilst the experimental data was only available for the SAE-Type 4 configuration from previous works [6, 7], the SBES simulations of this configuration exhibited good agreement with the experimental data. Encouraged by these results, the simulation methodology and framework were extended to investigate the other two configurations. Therefore, it can be confidently asserted that the numerical simulations of all three configurations, although experimental data was only available for one, are reliable representations of the aerodynamic and aeroacoustic behavior of these configurations. Figure 1: Schematic representation of all the models used in this study, with dimensions. Figure 2: Comparison of (a) the predictions of the HPF on the side window at a location closest to the mirror from SBES for different forebody cases with experimental data of SAE-Type 4 from Nusser et al. (2021), (b) the predictions of the numerically radiated sound pressure level (SPL) from FW-H for different forebodies at a microphone located 4 m from the side window. Moreover, Fig. 2a presents the results obtained for different forebody cases, indicating the presence of two distinct tonal peaks at approximately 40 Hz (peak-1) and 80 Hz (peak-2). These peaks correspond to Strouhal numbers of \(St\approx 0.116\) and \(0.232\), respectively, where St is defined as \(St=\frac{f.d}{U_{\infty}}\), with f representing the frequency of vortex shedding and \(d\) representing the characteristic length (side of the square) of the square cylinder. While peak-1 is more pronounced in the SAE-T4 case and peak-2 is more prominent in the Ahmed body case, both distinct peaks are less pronounced in the Hybrid case compared to the other two cases. Notably, there are no distinct differences in the sound radiated from all three forebody cases, as shown in Figure 2b, and only negligible differences are observed in the predicted aerodynamic drag coefficient (\(\mathbf{C_{d}}\)) and Overall (radiated) Sound Pressure Level (OASPL) at the microphone, as shown in Table 1. However, significant differences in the predicted lift coefficient (\(\mathbf{C_{l}}\)) are observed across the three cases. 
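Since the mirror side length \(d\) is not quoted explicitly, the sketch below back-solves it from peak-1 and checks that peak-2 is consistent with the stated Strouhal numbers; the value of \(d\) obtained this way is an inference for illustration, not a dimension reported in the paper.

```python
# Back-check of the tonal peaks against the quoted Strouhal numbers St = f*d/U_inf.
U_inf = 27.78          # freestream velocity [m/s]
f1, St1 = 40.0, 0.116  # peak-1 frequency [Hz] and Strouhal number, from the text
f2 = 80.0              # peak-2 frequency [Hz]

d = St1 * U_inf / f1   # implied mirror side length [m] (inferred, ~0.08 m)
St2 = f2 * d / U_inf   # Strouhal number of peak-2
print(f"implied d ~ {d*1000:.0f} mm, St(peak-2) ~ {St2:.3f}")  # ~0.232, matching the text
```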
\begin{table} \begin{tabular}{c c c c} \hline \hline **Geometry** & \(\mathbf{C_{d}}\) & \(\mathbf{C_{l}}\) & **OASPL** \\ \hline SAE T4 & 0.2604 & -0.1266 & 60.46 dB \\ \hline Ahmed Body & 0.2598 & -0.1774 & 60.12 dB \\ \hline Hybrid Body & 0.2605 & -0.1326 & 59.93 dB \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of force coefficients and noise radiated from all three geometries. Figure 3: Comparison of vortical structures of an instantaneous flow field visualised by isosurfaces of Q = 1100 s\({}^{-2}\) coloured with instantaneous x-velocity (images on the left). On the right, the RMS of pressure fluctuations in dB on the side window for the three forebody cases. To gain insight into the interaction of the A-pillar with the sideview mirror, the instantaneous flow structures for different forebody cases are compared, as shown in Figure 3 (images on the left). Upstream of the mirror, the presence of a horseshoe vortex is evident, with highly unsteady smaller eddies generated downstream that interact with the A-pillar vortices, particularly in the case of SAE-T4 as compared to the Hybrid case, which inherits the features of both the Ahmed body and the SAE-T4. As a consequence, the overall distribution of pressure fluctuations on the side window, presented using the root-mean-square (RMS) of the pressure, is more pronounced for the SAE-T4 case as compared to the Hybrid case that has a smoother forebody curvature, as shown in Figure 3 (images on the right). Notably, this effect is not present in the Ahmed body, which does not have an A-pillar. In the Ahmed body case, the pressure fluctuations on the side window appear to be purely a contribution of the mirror, with an increased level of pressure fluctuations closer to the mirror and significantly reduced fluctuations closer to the upper portion of the side window compared to the SAE-T4. This is the first study of its kind, and future work will examine additional forebody cases such as the Windsor body. It is envisaged that such a study will provide a comprehensive understanding of the mechanism behind noise generation on the side window, as well as the noise radiated and aerodynamic forces from different forebody topologies. ## Acknowledgements The authors wish to thank Dr Kevin Chow of Horiba-MIRA, UK, and Dr Hauke Reese of ANSYS, Germany for useful discussions on the SAE Type 4 modelling.
2302.01595
Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties
Development of autonomous cyber system defense strategies and action recommendations in the real-world is challenging, and includes characterizing system state uncertainties and attack-defense dynamics. We propose a data-driven deep reinforcement learning (DRL) framework to learn proactive, context-aware, defense countermeasures that dynamically adapt to evolving adversarial behaviors while minimizing loss of cyber system operations. A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries with varying levels of skill and persistence. A custom simulation environment was developed and experiments were devised to systematically evaluate the performance of four model-free DRL algorithms against realistic, multi-stage attack sequences. Our results suggest the efficacy of DRL algorithms for proactive cyber defense under multi-stage attack profiles and system uncertainties.
Ashutosh Dutta, Samrat Chatterjee, Arnab Bhattacharya, Mahantesh Halappanavar
2023-02-03T08:33:33Z
http://arxiv.org/abs/2302.01595v1
# Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties ###### Abstract Development of autonomous cyber system defense strategies and action recommendations in the real-world is challenging and includes characterizing system state uncertainties and attack-defense dynamics. We propose a data-driven deep reinforcement learning (DRL) framework to learn proactive, context-aware, defense countermeasures that dynamically adapt to evolving adversarial behaviors while minimizing loss of cyber system operations. A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries with varying levels of skill and persistence. A custom simulation environment was developed and experiments were devised to systematically evaluate the performance of four model-free DRL algorithms against realistic, multi-stage attack sequences. Our results suggest the efficacy of DRL algorithms for proactive cyber defense under multi-stage attack profiles and system uncertainties. ## Introduction Decision support for cyber system defense in real-world dynamic environments is a challenging research problem that includes the dynamical characterization of uncertainties in the system state and the incorporation of dynamics between attackers and defenders. Cyber defenders typically operate under resource constraints (e.g., labor, time, cost) and must update their strategies and tactics dynamically as the system evolves with/without attack influence from adaptive adversaries. Often, the defender may be unaware of system compromise and must adopt proactive strategies to maintain mission-critical operations. The past decade has seen a growing body of literature focused on the application of game-theoretic approaches for cybersecurity [13, 14]. These approaches typically involve resource allocation optimization with Markov decision process (MDP) models in non-cooperative settings. Cyber system attack and defense modeling methods also include: 1) probabilistic approaches for system reliability and attack outcome dependency [15, 16]; 2) Bayesian networks [17, 18]; and 3) fault/decision trees [20]. Although attack graph-based methods [19] for identifying system vulnerabilities and known exploits are valuable, within realistic settings, only insufficient and imperfect information about system properties and attack goals is typically available to the defender. Recent advances in reinforcement learning (RL) approaches have led to the development of partially observable stochastic games (POSG) in partial information settings [10, 11]. Also, cyber system state-space modeling has generated interest in the potential use of POSGs for cybersecurity problems [1, 12, 13]. However, POSGs tend to be general formulations and are often intractable. In addition, the state-of-the-art in cyber decision-support also includes the use of partially observable Markov decision process (POMDP) models, as well as distributed POMDPs, for solving a variety of problems such as: 1) cyber risk assessment [15], 2) uncertainty in penetration testing [15], 3) network task time allocation [14], 4) data exfiltration detection [16], and 5) effective deception resource allocation [17]. Most of these problems either focus only on specific attacks (or attack types), consider a limited defender action space, or assume that a system model is available. 
Recent work in deep RL (DRL) methods for cybersecurity primarily focuses either on optimizing network operations or on cyber defense against different threat types [16, 17]. While progress has mostly focused on model-free DRL methods, there is a critical need to further investigate the role of different DRL algorithms for cyber defense trained under diverse adversarial settings. This paper focuses on the applicability of DRL in optimizing cybersecurity defense planning against strategic multi-stage adversaries. Specifically, the objective of a DRL defense agent is to compute context-aware defense plans by learning network and multi-stage attack behaviors while minimizing impacts to benign system operations. Generally, the dynamics of cyber systems depend on many correlated factors (e.g., traffic volume, network utilization) which may exhibit uncertain behaviors across time due to unanticipated physical link failures, sensor errors, and others. On the other hand, a stealthy cyber adversary may adapt their strategies (i.e., tactics and techniques) based on current network conditions and ongoing attack impact. For example, multi-stage attacks can propagate through the deployment of multiple software processes (e.g., Application Programming Interface (API) calls) to execute attack actions without inducing suspicion, instead of relying on a single process. However, a defender may not always be able to block a process due to its role in maintaining critical operations governed by interdependent processes. Thus, defense planning needs to consider the aggregated impact of different attack actions at different stages of its propagation. Figure 1 presents a multi-stage attack propagation template based on the MITRE ATT&CK framework (MITRE 2022) that was leveraged in our study for evaluating multiple DRL-based defense mechanisms. Within an attack-defense interaction, an adversary can start from any technique of the _Reconnaissance/Initial Access_ tactic, and wins if they reach any technique of the _Impact/Exfiltration_ tactic. Based on defense actions, the attacker may abort (i.e., move to the _Attack Terminated_ state or defender's win) or persist to move on to the next stage. The main contribution of this paper is to systematically evaluate the performance of multiple DRL algorithms for cyber defense that were trained under diverse adversarial uncertainties. The next section briefly describes the DRL algorithms used in this study. Next, we describe the proposed autonomous cyber-defense framework and custom simulation environment. This is followed by experimental results and discussion, and some concluding remarks. ## Background Cyber defense DRL agents compute a policy that recommends the optimal action at the current cyber-network state. At time-sequence \(t\), the agent executes an action \(a_{t}\) at current state \(s_{t}\) and receives reward \(R(s_{t},a_{t})\) that is used to update the policy. One of the objectives of the DRL agent is to balance the trade-off between exploration (i.e., executing random actions to understand consequences) and exploitation (i.e., executing optimal actions based on previous exploration knowledge). This research uses \(\epsilon\)-greedy approaches, where the agent executes random actions with probability \(\epsilon\). Notably, our \(\epsilon\) decays with the passage of time. 
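To make the exploration schedule concrete, here is a minimal sketch of \(\epsilon\)-greedy action selection with a linearly decaying \(\epsilon\); the linear schedule shape and the function names are illustrative assumptions, with the numerical values matching the exploration settings reported later in Table 1.

```python
import random

def epsilon_by_step(step, eps_init=1.0, eps_final=0.04, decay_steps=300_000):
    """Linearly decay the exploration probability (schedule shape is an assumption)."""
    frac = min(step / decay_steps, 1.0)
    return eps_init + frac * (eps_final - eps_init)

def select_action(q_values, step):
    """Epsilon-greedy: random action with probability epsilon, greedy otherwise."""
    if random.random() < epsilon_by_step(step):
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

# Early in training the agent mostly explores; late in training it mostly exploits.
print(select_action([0.1, 0.7, 0.2], step=0), select_action([0.1, 0.7, 0.2], step=400_000))
```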
In this research, we evaluate four different DRL approaches. Deep Q-Network (DQN): DQN is a model-free DRL approach that updates Q-values according to the Bellman equation: \[\begin{split} Q_{t}(s_{t},a_{t})&=Q_{t}(s_{t},a_{ t})+\alpha(R_{t}(s_{t},a_{t})+\\ &\gamma\max_{a}Q_{t}(s_{t+1},a)-Q_{t}(s_{t},a_{t}))\end{split} \tag{1}\] where \(Q_{t}(s_{t},a_{t})\) is the approximate Q-value (i.e., expected accumulated reward), \(R_{t}(s_{t},a_{t})\) is the reward (\(r_{t}\)) at \(t\), and \(\alpha\) is the learning rate. At each \(t\), the agent executes an action and stores the \((s_{t},a_{t},r_{t},s_{t+1})\) in a replay buffer. After every \(n\) steps, the agent calculates the loss in Eqn. 2 for random batches from the buffer. Here, we use a neural network as a universal function approximator of the Q-function \(Q_{t}(s_{t},a_{t})\) (Riedmiller 2005; Sutton and Barto 2018). Consequently, the neural network parameters are updated by minimizing the loss in Eqn. 2 via stochastic gradient descent. Extensions include the Double DQN method that predicts \(Q_{t}(s_{t+1},a)\) and \(Q_{t}(s_{t},a)\) with two different neural networks for more stable Q-function updates. \[Loss=R_{t}(s_{t},a_{t})+\gamma\max_{a}Q_{t}(s_{t+1},a)-Q_{t}(s_{t},a_{t}). \tag{2}\] Actor-Critic: Actor-critic approaches combine the policy and value iteration methods using the following components: * _Critic_ is responsible for policy evaluation and uses a deep neural network (DNN) to estimate the Q-value. Based on the loss in Eqn. 2, the critic updates the parameters of the DNN and sends the computed gradients to the actor. * _Actor_ recommends the optimal action for the current state using a DNN, whose parameters are updated based on the gradients received from the critic. Specifically, the actor searches for the optimal parameters of the DNN, \(\theta^{*}\) (i.e., weights of the DNN), that maximize the expected accumulated reward (Sutton and Barto 2018): \[J(\pi_{\theta})=E_{\pi_{\theta}}[\sum_{t=1}^{T}G(s_{t},a_{t})]\] (3) where \(T\) is the number of decision epochs, and \(G(s_{t},a_{t})\) is the total accumulated reward. The DNN parameters are updated using the following equation: \[\theta=\theta+\alpha\nabla J(\theta)\] (4) where \(\alpha\) is the learning rate. In Eqn. 4, \(\nabla J(\theta)\) is the policy gradient, derived as follows (Sutton and Barto 2018): \[\begin{split}\nabla J(\pi_{\theta})&=\nabla_{\theta}E_{ \pi_{\theta}}[\sum_{t=1}^{T}G(s_{t},a_{t})]\\ &=E_{\pi_{\theta}}[\sum_{t=1}^{T}\nabla_{\theta}\log\pi_{ \theta}(a_{t}|s_{t})G(s_{t},a_{t})]\end{split}\] (5) where \(\pi_{\theta}(a_{t}|s_{t})\) is the probability of taking action \(a_{t}\) in state \(s_{t}\) subject to the current policy parameters \(\theta\). We implement three different variants of actor-critic algorithms described as follows. * _Advantage Actor-Critic Approach (A2C):_ This method uses an advantage function, \(A(s_{t},a_{t})\), defined as \[A(s_{t},a_{t})=G(s_{t},a_{t})-V(s_{t})\] (6) to replace the \(G(s_{t},a_{t})\) of Eqn. 5. The advantage function reduces the high variance of the gradient estimate and makes the policy network more stable. Note that \(V(s_{t})\) in Eqn. 6 is the baseline value achieved at \(s_{t}\). * _Asynchronous Advantage Actor Critic Approach (A3C):_ A3C is different from A2C in that the actor updates the policy parameters asynchronously as soon as it receives a gradient update from any critic (it does not wait for all critics to finish). 
* _Proximal Policy Optimization (PPO):_ This method clips the policy ratio whenever it falls outside the region \([1-e,1+e]\) (\(e\) is called the clip parameter), in order to avoid drastic deviation from an older evaluated policy. This may be beneficial against sensor noise and errors. PPO optimizes the following clipped surrogate objective function, \(L^{CLIP}(\theta)\): \[\begin{split} L_{t}^{CLIP}(\theta)=\mathbb{E}[\min\{r_{t}( \theta)A_{t},\\ \text{clip}\left(\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{old}(a_{t} |s_{t})},1-e,1+e\right)A_{t}\}]\end{split}\] (7) where \(r_{t}(\theta)=\pi_{\theta}(a_{t}|s_{t})/\pi_{old}(a_{t}|s_{t})\), and \(\pi_{\theta}\) and \(\pi_{old}\) represent the new and old stochastic policies, respectively. ## Autonomous Cyber Defense Framework Figure 2 presents our proposed autonomous cyber defense framework. The core element of our framework is a custom OpenAI Gym [1] simulation environment that we developed, where at each time-sequence, a DRL defense agent executes a _Defense Action_ and observes _Attack Position_ and _Reward_ as feedback. Each episode (i.e., multiple time-sequences) considers an attack path (per Figure 1), where the red node in Figure 2 represents the current _Attack Position_. Next, we describe the adversary and defense models in our framework. Adversary Model: The adversary advances along the attack path by executing attack techniques, and may abort if their attempt to execute the next attack technique fails due to successful defense actions. Hence, there are two attack consequences: (i) adversary wins if they can successfully execute any technique under the _Impact/Exfiltration_ position; (ii) adversary loses if they move to the _Attack Terminated_ position. It is important to note that the adversary's strategy is implicitly captured in the description of the system's state evolution. Defense Model: The main objective of the defender (DRL agent) is to proactively prevent the adversary from reaching the _Impact/Exfiltration_ tactic phase, while minimizing loss due to interrupting benign operations. To determine the optimal defense, the agent needs to infer the current attack position and predict the next attack action. However, lack of domain-specific information on system complexities and adversarial behavior results in the defender facing the following uncertainties that prevent the creation of an a priori system dynamics model: * _Uncertain Next Attack Technique:_ The defense agent cannot predict the next attack technique due to two reasons. First, the defense agent has no prior knowledge of the attack graph in Fig. 1 as it requires domain-specific real-world attack sequences that are hard to obtain. Second, an adversary does not always follow the same strategy (attack sequence). For example, after _User Execution_, an adversary can either try to _Modify Registry_ or _Modify System Process_. * _Uncertain Next Attack Procedure:_ The defense agent does not know the next attack procedure (i.e., attack actions required to implement an attack technique) or its likelihood due to lack of domain data. * _Imperfect and Incomplete Observations:_ Deployed alert mechanisms may not reveal the current position of the adversary due to two reasons: (1) limited observability of processes, and (2) uncertain mapping from observations to attack techniques. For our DRL experimentation, we assume that an adversary is at an initial access position at the start of each episode, and the defender leverages alert systems that monitor API calls to understand the current attack position. 
As mentioned before, this alert information is not only incomplete due to not observing all API calls but also imperfect due to the probability of errors in mapping aggregated API calls to MITRE tactics and techniques. Moreover, the defender only partially knows the attack position but does not know which process is malicious, and therefore, seeks to learn the adversary's dynamic attack strategy governed by attack action sequences. ## Defense Optimization Problem We formulate the cyber defender's optimization problem using a Sequential Decision Process (SDP) [1] construct, where the next attack technique depends on the current attack position and defense action. Note here that the adversary's optimal strategy is not learned in this work; instead, we train the defense agent against different adversarial strategies corresponding to distinct attack behaviors. Our SDP model is a tuple with four parameters: \((S,A,R,\gamma)\), where \(S\) is the cyber system state space, \(A\) is the defense action space, \(R\) is the reward function, and \(\gamma\) is the discount factor. The agent only knows about \(S\), \(A\), and \(\gamma\), which define its interaction with the environment, though the agent is unaware of the mathematical form of \(R\). The objective in the SDP model is to compute an optimal action, \(a^{*}\in A\), for any current state, \(s\in S\), that maximizes the cumulative reward over a finite attack horizon. At the start of each time \(t\), the agent executes a defense action, \(a_{t}\in A\), based on the current environment state, \(s_{t}\in S\), and receives feedback from the environment. Such feedback consists of the current attack position, \(s_{t+1}\in S\), as observation and the defense payoff, \(r_{t}\), as a reward. The payoff of \(a_{t}\) depends on its effectiveness, i.e., whether \(a_{t}\) prevented the next attack technique or not. To clarify, \(a_{t}\) is an effective defense action at \(s_{t}\) if it forces the attacker to stay at the previous position (\(s_{t+1}=s_{t}\)) or to move to the _Attack Terminated_ node. Thus, this model also implicitly integrates the expected attack behavior into the decision model through learning the effectiveness of defense actions under particular environment conditions. Next, we describe the optimization model elements in detail. State Space (\(S\)): Our state space, \(S\), consists of 17 states, where each state, \(s\in S\), is a sparse vector that represents a unique attack position. Hence, the current state at any time \(t\) specifies the position of the adversary at \(t\), based on which the defense agent aims to choose the optimal defense action for that time-sequence. We consider three types of states: (1) 15 attack-technique states corresponding to each attack technique in Fig. 1; (2) the _Attack Initiated_ state, and (3) the _Attack Terminated_ state. The initial state is the _Attack Initiated_ state, and the adversary moves to the _Attack Terminated_ state if they abort the attack due to failure or detection. Moreover, the adversary moves to a new state by successfully executing the associated attack technique. We consider two types of goal states: * _Attack Goal State:_ Adversary wins if they reach the (1) _Automated Exfiltration_, (2) _Data Encryption/Obstruction_, or (3) _Endpoint Denial of Service (DoS)_ state of the _Impact/Exfiltration_ tactic; * _Defense Goal State:_ Defender wins if the adversary moves to the _Attack Terminated_ state. Defense Action Space (\(A\)): The defender's action space, \(A\), has three different modes of operation: 1. 
_Inactive_: The defender remains silent and does nothing; 2. _Reactive_: The defender removes all processes that called or executed actions related to the current attack position or last attack action; 3. _Proactive_: The defender blocks a specific set of API calls or operations to prevent the next attack action. The defender remains _Inactive_ for strategic reasoning or to avoid termination of critical benign processes, whereas for both _reactive_ and _proactive_ defense approaches, benign operations may be interrupted. We consider only one reactive action assuming that it can remove all processes associated with the last attack action (whether successful or not). For proactive defense, we consider 21 distinct defense actions that block unique sets of API calls and operations or adopt particular methods/measurements to prevent specific attack techniques. For instance, a proactive defense action such as Restrict Registry Permission blocks the ability to change certain keys to prevent the adversary's autostart execution (_Persistence_ tactic) or registry modification (_Defense Evasion_ tactic). Another proactive action such as Restrict File and Directory Permission (only for a certain set of sensitive files) can also stop autostart execution. However, the success likelihood of different defense actions varies in both defense effectiveness (i.e., preventing attack action) and expected false positive rate (i.e., terminating benign operations). Hence, the defense agent must choose the best action considering defense effectiveness and expected false positive rate. Moreover, this research also assumes that the defender can execute one action per time-sequence. Hence, for the proactive defense approach, the defender's decision model needs to understand what the adversary may do next to execute an effective mitigation action. The defender wins if the adversary moves to the _Attack Terminated_ position due to failure to execute new attack techniques or being detected and removed. Reward Function (\(R\)): We consider the following reward function for the defense agent: \[R=-p_{g}(s)\times I_{g}-\mathcal{I}_{v}\times I_{g}-C_{f} \tag{8}\] where \(\mathcal{I}_{v}=-1\) if the defender wins, and 0 otherwise. Note that \(p_{g}(s)\) is the probability of the adversary reaching the _Impact/Exfiltration_ tactic from state \(s\), and \(I_{g}\) is the impact/loss of a successful attack execution. Equation 8 comprises three terms: (1) \(p_{g}(s)\times I_{g}\) quantifies the risk at state \(s\) due to the probability (\(p_{g}(s)\)) of the attacker reaching the goal; (2) \(\mathcal{I}_{v}\times I_{g}\) quantifies the penalty or incentive to the defense agent when the adversary wins or loses, respectively; and (3) \(C_{f}\) is the cost of executing any defense action in \(A\). The cost \(C_{f}\) depends on the aggregated loss due to interrupting benign operations, and the defense implementation or operational cost. We assume that the defense implementation cost is the same for all mitigation actions and zero for the _Inactive_ action. ## Experiments To implement our framework and solve the cyber defense optimization problem, an experimental plan was established to evaluate the performance of four DRL approaches (i.e., DQN, A2C, A3C, and PPO). The experimental setup, training and testing scenarios, adversary types, and simulation environment are described next. ### Experimental Setup We designed our experiments using Python 3.7 [23], and used the _RLlib_ library [1] for implementing DRL algorithms. 
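A minimal sketch of how such an experiment could be wired up with an RLlib 1.x-style API is given below; the environment class `CyberDefenseEnv` and its module are hypothetical stand-ins for the custom Gym environment described later, and the hyperparameter values follow Table 1 together with the \(\gamma\) and \(\alpha\) choices reported for DQN in the training section.

```python
import ray
from ray.rllib.agents.dqn import DQNTrainer  # assumes an RLlib 1.x-style API

from cyber_defense_env import CyberDefenseEnv  # hypothetical custom Gym environment

config = {
    "env": CyberDefenseEnv,
    "gamma": 0.8,                   # discount factor used for DQN testing
    "lr": 0.01,                     # learning rate used for DQN testing
    "num_workers": 4,               # parallel rollout workers (Table 1)
    "rollout_fragment_length": 12,  # Table 1
    "train_batch_size": 48,         # batch size (Table 1)
    "exploration_config": {
        "type": "EpsilonGreedy",
        "initial_epsilon": 1.0,
        "final_epsilon": 0.04,
        "epsilon_timesteps": 300_000,  # exploration delay period (Table 1)
    },
}

ray.init()
trainer = DQNTrainer(config=config)
for _ in range(100):              # training epochs (Table 1)
    result = trainer.train()      # one training iteration per call
    print(result["episode_reward_mean"])
```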
We have simulated our experiments using a Dell Alienware machine with a 16-core 3GHz Intel Core i7-5960X processor, 64GB RAM, and three 4GB NVIDIA GM200 graphics cards. Although the MITRE ATT&CK framework contains 11 tactics and many more techniques, we consider 7 tactics and 15 techniques for our experiments. Our defense action space \(A\) consists of 23 mitigation actions, including 21 proactive actions. Table 1 presents the learning parameters that we used for all our experiments. \begin{table} \begin{tabular}{|c|c|} \hline Parameter Name & Value \\ \hline \hline Entropy Coefficient & 0.05 \\ Initial exploration probability & 1.0 \\ Final exploration probability & 0.04 \\ Exploration delay period & 300,000 \\ Number of workers & 4 \\ Rollout fragment length & 12 \\ Batch size & 48 \\ PPO clip value & 0.4 \\ Training epochs & 100 \\ Steps per training epoch & 25000 \\ \hline \end{tabular} \end{table} Table 1: Parameters used for training and testing simulation experiments. (Note: _Number of workers_ refers to the number of parallel processes, and _Exploration delay period_ refers to the number of training steps that decays the exploration probability from 1.0 to 0.04.) ### Training and Testing Scenarios We generated all possible distinct attack paths from the initial attack position (i.e., initial attack state) to any state of the last attack tactic (i.e., the _Impact/Exfiltration_ tactic). We used 80% of the attack paths for training and 20% for testing, where each attack propagation path is a unique sequence of attack techniques executed by an adversary to achieve their objective. During training, each unique episode contains one attack propagation path, which ends if the defender/adversary wins or loses. The adversary wins or the defender loses in an episode if the adversary successfully executes all attack techniques across the attack path to satisfy the _Impact/Exfiltration_ tactic. However, an adversary loses or the defender wins if the adversary moves to the _Attack Terminated_ state due to failing at least a number of times, \(n\), across a path. ### Adversary Types For our experiments, we determine adversary types based on two attack parameters: (1) _skill_, and (2) _persistence_. With greater attack skill, the success rate in executing the next attack technique is higher due to increased capability in exploiting vulnerabilities. We define \(\rho\) as an attack skill parameter, representing the attack success rate in exploiting a vulnerability. We assume that all vulnerabilities have the same exploitability (i.e., impact and complexity). On the other hand, greater attack persistence indicates that the adversary does not abort their objective or cannot be readily detected despite their failed attempts. We define \(\tau\) as an attack persistence parameter, representing the number of failed attempts before moving to the _Attack Terminated_ state (i.e., defense win). By tuning \(\rho\) and \(\tau\), we consider three different attack profiles/strategies: (1) Attack profile 1 (\(Av_{1}\)): \(\rho=0.75\) and \(\tau=4\), (2) Attack profile 2 (\(Av_{2}\)): \(\rho=0.85\) and \(\tau=5\), and (3) Attack profile 3 (\(Av_{3}\)): \(\rho=0.95\) and \(\tau=7\). For example, \(Av_{2}(\rho=0.85,\tau=5)\) indicates that the adversary with profile 2 has an 85% success likelihood in exploiting a specific vulnerability and tolerates a maximum of 5 failed attempts in an episode. Thus, attack profile 3 represents the most sophisticated adversary, and attack profile 1 represents a naive adversary. 
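To make the roles of the skill (\(\rho\)) and persistence (\(\tau\)) parameters concrete, the sketch below simulates a single attack attempt under a given profile; the class and function names are illustrative, and the abort rule (terminate after \(\tau\) accumulated failures) follows the description above.

```python
import random
from dataclasses import dataclass

@dataclass
class AttackProfile:
    rho: float  # skill: probability of successfully exploiting a vulnerability
    tau: int    # persistence: failed attempts tolerated before aborting

PROFILES = {"Av1": AttackProfile(rho=0.75, tau=4),
            "Av2": AttackProfile(rho=0.85, tau=5),
            "Av3": AttackProfile(rho=0.95, tau=7)}

def attempt_technique(profile, failures):
    """One attack attempt: returns (success, updated failure count, aborted)."""
    if random.random() < profile.rho:
        return True, failures, False  # technique executed, adversary advances
    failures += 1
    return False, failures, failures >= profile.tau  # abort once tau failures accumulate

# Example: the persistent, highly skilled Av3 profile rarely aborts early.
success, failures, aborted = attempt_technique(PROFILES["Av3"], failures=0)
print(success, failures, aborted)
```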
It is important to note that the adversary may change attack procedures if their previous attempt at exploiting a specific attack technique fails. ### Simulation Environment We developed a custom OpenAI Gym simulation environment (not in the public domain at this time) for the autonomous cyber defense framework illustrated in Figure 2. The DRL defense agent has no knowledge of the environment and attack behavior. The defense agent receives observations about the next attack position and a reward as feedback. The agent then determines the current attack position based on the recent observation. We assume that system alerts can be translated to a specific attack position with 85%, 75%, and 65% accuracy against \(Av_{1}\), \(Av_{2}\), and \(Av_{3}\), respectively, which indicates that the adversary is stealthier at increased levels of sophistication. Within the simulation environment, at the start of each episode, an attack propagation path is selected, and the environment is set to the initial attack position. At the start of each timestep, a _Defense Success_ module determines whether the current defense action stopped the current attack technique or not, based on the correlations among attack procedures and defense actions. Another module, _Benign Process Interrupter_, sends the list of benign processes/operations interrupted due to the recent defense action to the _Reward Generation_ module. We assume a power-law distribution function to generate the number of interrupted benign operations, which gradually becomes lower towards the last attack tactic. If the defense action is successful, the adversary stays at their current position. Otherwise, if the defense action fails, _Attack Success_ determines the success of the current attack technique based on the attack skill (\(\rho\)) of the attack profile. If the attack is successful, the adversary moves to the next position; otherwise, they remain at the current position or move to the _Attack Terminated_ state (end of episode). The reward is generated based on the attack position, interrupted benign operations, and other factors using Eqn. 8. At the end of the timestep, the reward and next observation are sent to the DRL defense agent, which uses them to update and refine its policy. ## Evaluation We assess the performance of different DRL algorithms using the _Defense-Win Ratio_ (DWR) metric, which evaluates the fraction of episodes where the defender won in a single batch, where each batch comprises 200 episodes. Using the DWR metric, we analyze (i) the convergence of the DRL algorithms during training, and (ii) the performance of the trained defense models against unseen attack sequences during testing. We evaluate all four DRL algorithms against the three attack profiles (as described in the Adversary Types section). ### Hyperparameter Optimization and Training For brevity, here we only illustrate the training performance of DQN and A2C using the DWR metric by varying two hyper-parameters: discount factor (\(\gamma\)) and learning rate (\(\alpha\)). In the following sections, each figure has two rows and three columns; the columns refer to the three attack profiles, while the rows correspond to the hyperparameters (the first row is for \(\gamma\) and the second row is for \(\alpha\)). In this research, for all DRL algorithms, we have used a fully connected neural network with 2 hidden layers, each with 256 neurons. We have used _tanh_ as the activation function in all cases. A2C: Figure 3(a) illustrates the sensitivity of A2C to different values of \(\gamma\), keeping \(\alpha\) fixed at 0.005. 
Against attack profile 1, all A2C instances converge at a DWR value of 0.95 within 40 iterations (\(40\times 200\) episodes). Against attack profile 2, the A2C instance with \(\gamma=0.8\) achieves the highest DWR, while all other discount factors induce poor performance. For attack profile 3, \(\gamma=0.8\) reaches the highest DWR of 0.8 while requiring 200 iterations. Thus, attack sophistication not only increases the convergence time but also reduces the defense agent's success rate. Averaging across all attack profiles, A2C performs best with \(\gamma=0.8\), which shows that the defense agent must balance the trade-off between the current reward and possible future payoff. Fig. 3(b) exhibits the sensitivity of A2C (with optimal \(\gamma\) set to 0.8) for different values of \(\alpha\). We observed that the algorithm's performance does not change significantly against attack profile 1. Interestingly, against both attack profiles 2 and 3, A2C with \(\alpha=0.0005\) shows the best performance; this is possibly because the algorithms get stuck in local minima for larger values of \(\alpha\). Therefore, for A2C we set \(\alpha=0.005\) and \(\gamma=0.8\) for the test experiments. DQN: We followed the same training approach for DQN as that of A2C. From Fig. 4(a), we observe that DQN converges to the optimal policy within 10 iterations, which is much faster compared to A2C; in fact, it was better than all of the DRL algorithms we tested. Moreover, the DWR was higher than that of the other DRL methods for each attack profile, which showed the superior performance of DQN for environments with discrete states and actions. Moreover, Fig. 4(b) illustrates that DQN performance does not change significantly with changes in \(\alpha\). For testing, we set \(\gamma=0.8\) and \(\alpha=0.01\) for DQN. ### Testing Results We discuss the testing performance of the DRL algorithms against the three attack profiles with unseen attack sequences not used during training. Fig. 5 illustrates the cumulative result in defending against the adversary at different phases. Here, each phase corresponds to a distinct adversary tactic. _Tactic ID: 0_ specifies the initial attack position before _Reconnaissance_, _Tactic ID: 6_ specifies _Collection_, and all other tactics in Fig. 1 are numbered sequentially from _Tactic ID: 1_ to _Tactic ID: 5_. Note that _Impact/Exfiltration_ (tactic ID 7) is not shown in Fig. 5 as the defender loses if the attacker reaches that state. Table 2 reports how many attack sequences were stopped at the corresponding attack tactic. For example, the column corresponding to _Tactic ID: 3_ specifies how many attacks were stopped at _Defense Evasion_; similarly for the other columns. The _Mean Reward_ column specifies how much reward is achieved compared to the best reward (i.e., when stopping all attacks at Tactic ID: 0). The objective of the defense DRL agent is not only to stop the adversary from moving to the _Impact/Exfiltration_ state but also to stop the attack progression as early as possible. Against all attack profiles, we observed that DQN exhibited the best performance in stopping the adversary as soon as possible. As can be seen in Table 2 and Fig. 5, DQN stops the three attack profiles within _Defense Evasion_ (tactic ID 3) in 79.6%, 82%, and 57.3% of the cases, respectively, while achieving corresponding defense success rates of 93.3%, 95%, and 84.1%. 
Figure 3: Sensitivity of A2C for different values of \(\gamma\) and \(\alpha\) against attack profiles \(Av_{1}\), \(Av_{2}\), and \(Av_{3}\) (left to right). Figure 4: Sensitivity of DQN for different values of \(\gamma\) and \(\alpha\) against attack profiles \(Av_{1}\), \(Av_{2}\), and \(Av_{3}\) (left to right). Figure 5: DWR performance during testing against the three attack profiles \(Av_{1}\), \(Av_{2}\), and \(Av_{3}\) (from left to right). Against \(Av_{3}\), DQN has a lower success rate than against the other attack profiles, as the sophisticated adversary hardly fails in exploiting a vulnerability. Besides, \(Av_{3}\) is persistent and does not give up easily in spite of failed attempts, which leads to lower DWR values for the defense agent. Note that other DRL algorithms did not consistently perform well against all three attack profiles. In fact, the next-best performing algorithm against \(Av_{3}\) had an approximately 50% success rate. This is possibly due to the fact that actor-critic methods typically require more training samples than DQN to show better test accuracy. ## Conclusion The application of DRL methods for cyber system defense is promising, especially under dynamic adversarial uncertainties and limited system state information. Evaluating multiple DRL algorithms trained under diverse adversarial settings is an important step toward practical autonomous cyber defense solutions. Our experiments suggest that model-free DRL algorithms can be effectively trained under multi-stage attack profiles with different skill and persistence levels, yielding favorable defense outcomes in contested settings. However, some practical challenges that need to be addressed further in using model-free DRL include [10]: (i) explainability of the black-box DRL policies, (ii) vulnerability to adversarial noise and data poisoning, and (iii) convergence for large state-action spaces. Future work will include developing DRL-based transfer learning approaches within dynamic environments for distributed multi-agent defense systems. ## Acknowledgments This research was supported by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research's "Data-Driven Decision Control for Complex Systems (DnC2S)" project. Part of this research was supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative at Pacific Northwest National Laboratory (PNNL) under the Laboratory Directed Research and Development (LDRD) program. PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy under contract DE-AC05-76RL01830.
2306.14309
Vacuum polarization and induced Maxwell and Kalb-Ramond effective action in very special relativity
This work investigates the implications of very special relativity (VSR) on the calculation of vacuum polarization for fermions in the presence of Maxwell and Kalb-Ramond gauge fields in four-dimensional spacetime. We derive the $SIM(2)$-covariant gauge theory associated with an Abelian antisymmetric 2-tensor and its corresponding field strength. We demonstrate that the free VSR-Kalb-Ramond electrodynamics is equivalent to a massive scalar field with a single polarization. Furthermore, we determine an explicit expression for the effective action involving Maxwell and Kalb-Ramond fields due to fermionic vacuum polarization at one-loop order. The quantum corrections generate divergences free of nonlocal terms only in the VSR-Maxwell sector. At the same time, we observe UV/IR mixing divergences due to the entanglement of VSR-nonlocal effects with quantum higher-derivative terms for the Kalb-Ramond field. However, in the lower energy limit, the effective action can be renormalized like in the Lorentz invariant case.
Roberto V. Maluf, Gonzalo J. Olmo
2023-06-25T18:41:43Z
http://arxiv.org/abs/2306.14309v1
# Vacuum polarization and induced Maxwell and Kalb-Ramond effective action in very special relativity ###### Abstract This work investigates the implications of very special relativity (VSR) on the calculation of vacuum polarization for fermions in the presence of Maxwell and Kalb-Ramond gauge fields in four-dimensional spacetime. We derive the \(SIM(2)\)-covariant gauge theory associated with an Abelian antisymmetric 2-tensor and its corresponding field strength. We demonstrate that the free VSR-Kalb-Ramond electrodynamics is equivalent to a massive scalar field with a single polarization. Furthermore, we determine an explicit expression for the effective action involving Maxwell and Kalb-Ramond fields due to fermionic vacuum polarization at one-loop order. The quantum corrections generate divergences free of nonlocal terms only in the VSR-Maxwell sector. At the same time, we observe UV/IR mixing divergences due to the entanglement of VSR-nonlocal effects with quantum higher-derivative terms for the Kalb-Ramond field. However, in the lower energy limit, the effective action can be renormalized like in the Lorentz invariant case. Very special relativity, Kalb-Ramond field, Effective action, Quantum corrections ## I Introduction In 2006, Cohen and Glashow proposed a modification to the theory of special relativity that preserves the familiar energy-momentum dispersion relation while breaking the invariance under the complete Lorentz group, \(SO(1,3)\) [1]. They recognized that specific subgroups of the Lorentz group can still produce conservation laws and reproduce the well-known effects of special relativity (SR). Among these subgroups, the \(HOM(2)\) and \(SIM(2)\) groups meet these requirements. The former subgroup, known as the Homothety group, consists of the boost generator \(K_{z}\) and the generators \(T_{1}=K_{x}+J_{y}\), \(T_{2}=K_{y}-J_{x}\). These generators together form a group that is isomorphic to the group of translations in the plane. The second subgroup, known as the similitude group \(SIM(2)\), is an enhanced version of the \(HOM(2)\) group obtained by including the generator \(J_{z}\). These subgroups do not admit invariant tensors that can act as constant background tensor fields, as seen in other theories of Lorentz violation [2]. Hence, the breakdown of \(SO(1,3)\) into either \(HOM(2)\) or \(SIM(2)\) cannot be explained through local symmetry-breaking operators. This intriguing theory of relativity came to be known as "very special relativity" (VSR) [3]. A key attribute of VSR is that the \(SIM(2)\) generators preserve both the speed of light and the null 4-vector \(n_{\mu}=(1,0,0,1)\), thereby establishing a preferred direction in space. However, VSR algebras do not support discrete symmetry operators such as \(P\), \(T\), \(CP\), and \(CT\). Including any of these operators would result in the full Lorentz algebra [4]. In this way, the lack of discrete symmetries may result in violations of unitarity and causality in quantum field theories. To circumvent this issue, Cohen and Glashow proposed the inclusion of non-local operators containing ratios of contractions of \(n_{\mu}\) in order to construct a unitary \(SIM(2)\)-invariant field theory. The non-local nature of VSR gives rise to a remarkable phenomenon of mass generation. Indeed, when a non-local operator like \(n_{\mu}/(n\cdot\partial)\) is added to the momentum operator for a massless fermion, it results in a corresponding Klein-Gordon equation with a mass term proportional to the non-local coupling constant. 
This impressive property has led to proposals suggesting a VSR explanation for neutrino mass [5] and dark matter [6]. Since its formulation, a large number of investigations in VSR theories have been developed in recent years, addressing distinct physical scenarios: fermion systems [7; 8; 9], Maxwell [10; 11], Maxwell-Chern-Simons [12] and axion [13] electrodynamics, Lorentz violation [14], curved spacetime [15], noncommutativity [16; 17], linearized gravity [18], Finsler geometry [19; 20], supersymmetry [21], radiative corrections [22; 23; 24; 25]. These several studies have served both to elucidate the effects engendered by VSR non-local terms and to set up upper bounds on the VSR mass coefficients. In particular, the VSR contributions to the induced effective action in the context of Maxwell-Chern-Simons electrodynamics have been studied in previous works [24; 25]. In these studies, the authors employed the Mandelstam-Leibbrandt prescription [26; 27; 28], adapted to the VSR case by Alfaro [29; 30], to handle the UV/IR mixing divergences that arise in the loop integrals. Another notable gauge field theory in four-dimensional Minkowski spacetime is built from an antisymmetric 2-tensor known as the Kalb-Ramond field [31]. The Kalb-Ramond field is relevant in several contexts and has been extensively explored in the literature, including string theories [32], quantum field theory [33; 34; 35], supersymmetry [36; 37], Lorentz symmetry violation [38; 39; 40], dualities [41; 42], black hole and wormhole solutions [43; 44], cosmology [45], and brane world scenarios [46; 47; 48]. The study of the classical action for antisymmetric tensor fields in the context of VSR was initially addressed in Ref. [49]. However, a comprehensive analysis of the role of VSR in the quantum corrections of this field is still lacking in the literature. In this work, we propose a systematic procedure to construct an antisymmetric 2-tensor gauge field that incorporates VSR non-local operators and use it to derive the associated \(SIM(2)\)-invariant classical action. Subsequently, we obtain the free equation of motion in the VSR-Kalb-Ramond electrodynamics, and explicitly determine the resulting degrees of freedom. Furthermore, we explore, for the first time, the induced corrections to the effective action of the Maxwell and Kalb-Ramond gauge fields in the context of VSR. We obtain the effective Lagrangian density at the one-loop order by integrating the fermionic fields and calculating the vacuum polarization Feynman diagrams. We demonstrate that the divergent terms can be renormalized in the low-energy limit by appropriately rescaling the fields, masses, and coupling parameters in the model. Moreover, our results recover those obtained in the literature for the Lorentz invariant limit [50]. The present work is organized as follows. In Sec. II, we review the main aspects of VSR applied to the vector gauge field. In Sec. III, we propose a procedure to derive the \(SIM(2)\)-invariant Lagrangian density for the Kalb-Ramond field and analyze its free physical modes. In Sec. IV, we calculate the induced effective actions of the Maxwell and Kalb-Ramond fields, evaluate the Feynman diagrams for the two-point gauge functions, and examine the general tensorial form of the finite and divergent induced terms as well as the renormalization issues. Our final comments are presented in Section V. 
## II SIM(2)-Invariant Maxwell gauge theory To establish a consistent framework for our upcoming calculations, we will review the \(SIM(2)\)-invariant gauge vector theory described in Refs. [10; 11; 23]. The VSR-modified Maxwell electrodynamics is a \(U(1)\) gauge theory that involves a 1-form gauge potential \(A_{\mu}(x)\) and a matter field \(\psi(x)\) that acts as the source of \(A_{\mu}(x)\). This theory obeys the following gauge transformations: \[A_{\mu}(x) \to A_{\mu}(x)+\tilde{\partial}_{\mu}\Lambda(x), \tag{1}\] \[\psi(x) \to\exp\left\{ie\Lambda(x)\right\}\psi(x), \tag{2}\] where \(\Lambda(x)\) is an arbitrary 0-form field. The wiggle derivative operator is defined by \[\tilde{\partial}_{\mu}=\partial_{\mu}+\frac{1}{2}\frac{m_{A}^{2}}{n\cdot \partial}n_{\mu}, \tag{3}\] where \(m_{A}\) is a constant parameter with mass dimension, and \(n^{\mu}=(1,0,0,1)\) is a fixed null vector present in VSR theories and select a preferred direction. The covariant derivative for VSR-Maxwell electrodynamics is given by \[\mathcal{D}_{\mu}=\partial_{\mu}-ie\left(A_{\mu}-\frac{1}{2}m_{A}^{2}n_{\mu} \left(\frac{1}{(n\cdot\partial)^{2}}n\cdot A\right)\right) \tag{4}\] and it is constructed by demanding the fundamental property of transforming as \(\psi\) does under infinitesimal gauge transformations [11]: \[\delta(\mathcal{D}_{\mu}\psi)=ie\Lambda\mathcal{D}_{\mu}\psi. \tag{5}\] We can compute the field strength related to \(\mathcal{D}_{\mu}\) as follows: \[[\mathcal{D}_{\mu},\mathcal{D}_{\nu}]\psi=-ie\mathcal{F}_{\mu\nu}\psi. \tag{6}\] This gives us the expression for the field strength tensor: \[\mathcal{F}_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+\frac{1}{2}m _{A}^{2}N_{\mu}\partial_{\nu}(N\cdot A)-\frac{1}{2}m_{A}^{2}N_{\nu}\partial_{ \mu}(N\cdot A). \tag{7}\] where we introduce the notation \(N_{\mu}\equiv n_{\mu}/n\cdot\partial\) for the nonlocal vector operator. It is noteworthy that \(\mathcal{F}_{\mu\nu}\) is invariant under the gauge transformation \[\tilde{A}_{\mu}\rightarrow\tilde{A}_{\mu}+\partial_{\mu}\Lambda, \tag{8}\] where we define the wiggle gauge vector \(\tilde{A}_{\mu}\) by \[\tilde{A}_{\mu}=A_{\mu}-\frac{1}{2}m_{A}^{2}N_{\mu}\left(N\cdot A\right), \tag{9}\] such that \(\mathcal{F}_{\mu\nu}=\partial_{\mu}\tilde{A}_{\nu}-\partial_{\nu}\tilde{A}_{\mu}\). We observe that by applying a field redefinition \(A_{\mu}\to A_{\mu}+\frac{1}{2}m_{A}^{2}N_{\mu}\left(N\cdot A\right)\), one can eliminate the modification by the VSR-nonlocal terms. After this redefinition, we obtain the same field strength tensor \(\mathcal{F}_{\mu\nu}\to F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) and the same covariant derivative gauge transformation as in standard electrodynamics, given by \[\mathcal{D}_{\mu}\psi\to D_{\mu}\psi=\partial_{\mu}\psi-ieA_{\mu}\psi. \tag{10}\] On the other hand, we can define a new \(SIM(2)\)-gauge invariant field strength \(\tilde{F}_{\mu\nu}\), which is constructed using the wiggle derivative, namely, \[\tilde{F}_{\mu\nu}=\tilde{\partial}_{\mu}A_{\nu}-\tilde{\partial}_{\nu}A_{\mu}, \tag{11}\] and it can be expressed explicitly as \[\tilde{F}_{\mu\nu}=\mathcal{F}_{\mu\nu}+\frac{1}{2}m_{A}^{2}\left[n_{\mu} \frac{1}{(n\cdot\partial)^{2}}(n^{\alpha}\mathcal{F}_{\alpha\nu})-n_{\nu} \frac{1}{(n\cdot\partial)^{2}}(n^{\alpha}\mathcal{F}_{\alpha\mu})\right], \tag{12}\] which shows that \(\tilde{F}_{\mu\nu}\) is also invariant under the gauge transformation (8). 
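It is instructive to note how the wiggle derivative encodes the mass generation mechanism: using \(n^{2}=0\) and the fact that \(n\cdot\partial\) commutes with \(\partial_{\mu}\), a short check (following directly from the definition in Eq. (3), though not spelled out in this form above) gives \[\tilde{\partial}_{\mu}\tilde{\partial}^{\mu}=\partial^{2}+m_{A}^{2}\,\frac{n\cdot\partial}{n\cdot\partial}+\frac{m_{A}^{4}}{4}\,\frac{n^{2}}{(n\cdot\partial)^{2}}=\partial^{2}+m_{A}^{2},\] so the wiggle d'Alembertian is precisely the massive Klein-Gordon operator. This is the origin of the mass term in the equation of motion derived below.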
The present analysis allows us to construct a \(SIM(2)\)-invariant Lagrangian density for the field \(A_{\mu}\) that is also invariant under standard gauge transformations. As pointed out in Ref. [23], this is an important result because this Lagrangian generates a mass term for the field \(A_{\mu}\) without breaking the original gauge symmetry of the theory. According to the definition (11), \(\tilde{F}_{\mu\nu}\) is not Lorentz invariant but instead \(SIM(2)\) invariant. Moreover, as shown in result (12), it is also invariant under transformation (8). So, we can construct a VSR gauge-invariant Lagrangian density as follows: \[\mathcal{L}_{gauge}=-\frac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu}. \tag{13}\] Therefore, from Eq. (12), this Lagrangian takes the form \[\mathcal{L}_{gauge}=-\frac{1}{4}\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}+\frac{1} {2}m_{A}^{2}n_{\mu}\left(\frac{1}{n\cdot\partial}\mathcal{F}^{\mu\nu}\right)n^{ \alpha}\left(\frac{1}{n\cdot\partial}\mathcal{F}_{\alpha\nu}\right). \tag{14}\] Finally, by applying a field redefinition from \(A_{\mu}\to A_{\mu}+\frac{1}{2}m_{A}^{2}n_{\mu}((n\cdot\partial)^{-2}(n\cdot A))\), so that \(\mathcal{F}_{\mu\nu}\to F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\), we obtain the desired result \[\mathcal{L}_{gauge}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}m_{A}^{2}n_{ \mu}\left(\frac{1}{n\cdot\partial}F^{\mu\nu}\right)n^{\alpha}\left(\frac{1}{n \cdot\partial}F_{\alpha\nu}\right). \tag{15}\] It is interesting to notice that if we start from \(-1/4\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}\) to define our Lagrangian, the above field redefinition will withdraw the VSR effects. Furthermore, as shown in Ref. [23], we can apply the Lorentz gauge \(\partial_{\mu}A^{\mu}=0\) plus the subsidiary gauge condition \(N\cdot A=0\) into the equation of motion obtained from (15) and we find \[(\partial^{2}+m_{A}^{2})A^{\nu}=0. \tag{16}\] Hence, in the VSR scenario, we obtain a massive gauge field with two physical degrees of freedom, which is in contrast to the Proca case where the mass term \(m_{A}^{2}A^{\mu}A_{\mu}\) is not gauge invariant and has three degrees of freedom. ## III Kalb-Ramond electrodynamics in VSR In this section, we investigate the issue of constructing a \(SIM(2)\)-invariant action for the Kalb-Ramond field. As we will see, this is possible even when the Kalb-Ramond field does not carry matter charge, i.e., when it is not minimally coupled to matter fields and does not possess any associated covariant derivatives. ### Setup Let us start by defining the Lagrangian density that describes the dynamics for an antisymmetric 2-tensor \(B_{\mu\nu}\) in \(4D\) Minkowski spacetime, \[\mathcal{L}=-\frac{1}{12}H_{\mu\nu\alpha}H^{\mu\nu\alpha}-\frac{1}{2}B_{\mu \nu}J^{\mu\nu}, \tag{17}\] where \[H_{\mu\nu\alpha}=\partial_{\mu}B_{\nu\alpha}+\partial_{\alpha}B_{\mu\nu}+ \partial_{\nu}B_{\alpha\mu}, \tag{18}\] is the field strength tensor associated with \(B_{\mu\nu}\), and \(J^{\mu\nu}\) is an antisymmetric conserved current due to the coupling to the matter [38]. The field strength \(H_{\mu\nu\alpha}\) corresponds to the components of an exact 3-form field \(H\), which is constructed using the exterior derivative from the 2-form \(B\) associated with \(B_{\mu\nu}\). This field strength satisfies the identity \[\partial_{\kappa}H_{\lambda\mu\nu}-\partial_{\lambda}H_{\mu\nu\kappa}+\partial _{\mu}H_{\nu\kappa\lambda}-\partial_{\nu}H_{\kappa\lambda\mu}=0, \tag{19}\] which follows from the fact that an exact 3-form is closed [51]. 
The Lagrangian (17) is the simplest which can be constructed by demanding parity-even and invariance under the \(U(1)\) gauge transformation: \[B_{\mu\nu}(x)\to B_{\mu\nu}(x)+\partial_{\mu}\Sigma_{\nu}(x)-\partial_{\nu} \Sigma_{\mu}(x), \tag{20}\] where \(\Sigma_{\mu}\) is an arbitrary vector field. The field \(\Sigma_{\mu}\) also exhibits an extra gauge invariance given by \[\Sigma_{\mu}(x)\rightarrow\Sigma_{\mu}(x)+\partial_{\mu}\phi(x), \tag{21}\] with \(\phi\) being an arbitrary scalar field. This latter transformation leaves Eq. (20) unchanged. In general, the current \(J_{\mu\nu}\) is constructed from other dynamical fields which involve extended objects of the type found in the string theory [31]. For the sake of simplicity, we will not consider a string matter field for the source of \(B_{\mu\nu}(x)\) in this work. In what follows, our attention will be focused only on the kinetic part of the Lagrangian density (17), such that the matter coupling, represented by \(J_{\mu\nu}\), will be turned off. ### \(Sim(2)\)-covariant Kalb-Ramond gauge theory For the \(SIM(2)\)-invariant generalization of the Kalb-Ramond Lagrangian (17), we expect that the gauge symmetry (20) modified by the nonlocal vector operator \(N_{\mu}\equiv n_{\mu}/n\cdot\partial\) will play a crucial role. As we saw in the Maxwell case, both \(\mathcal{F}_{\mu\nu}\) and \(\tilde{F}_{\mu\nu}\) are invariant under the standard gauge transformation, and the connection between the two kinds of field strengths is made through the vector potential \(\tilde{A}_{\mu}\). Hence, motivated by our earlier analysis, we will construct a \(\tilde{B}_{\mu\nu}\) field that satisfies the following requirements: (i) \(\tilde{B}_{\mu\nu}\) is a linear function that is first-order in \(B_{\mu\nu}\) and second-order in \(N_{\mu}\); (ii) \(\tilde{B}_{\mu\nu}\) has mass dimension one in \(4D\) spacetime; (iii) \(\tilde{B}_{\mu\nu}\) transforms by \(\tilde{B}_{\mu\nu}\rightarrow\tilde{B}_{\mu\nu}+\partial_{\mu}\tilde{\Sigma}_ {\nu}-\partial_{\nu}\tilde{\Sigma}_{\mu}\) when \(B_{\mu\nu}\) changes by \(B_{\mu\nu}\to B_{\mu\nu}+\tilde{\partial}_{\mu}\Sigma_{\nu}-\tilde{ \partial}_{\nu}\Sigma_{\mu}\). After imposing these requirements, we arrive at the \(\bar{B}\)-ansatz given by \[\tilde{B}_{\mu\nu}=B_{\mu\nu}-\frac{m^{2}}{2}\left(N_{\mu}N^{\alpha}B_{\alpha\nu }-N_{\nu}N^{\alpha}B_{\alpha\mu}\right), \tag{22}\] and it changes under the gauge transformation \(B_{\mu\nu}\to B_{\mu\nu}+\tilde{\partial}_{\mu}\Sigma_{\nu}-\tilde{ \partial}_{\nu}\Sigma_{\mu}\) as follows: \[\tilde{B}_{\mu\nu}\rightarrow\tilde{B}_{\mu\nu}+\partial_{\mu}\left(\Sigma_{ \nu}-\frac{m^{2}}{2}N_{\nu}(N\cdot\Sigma)\right)-\partial_{\nu}\left(\Sigma_{ \mu}-\frac{m^{2}}{2}N_{\mu}(N\cdot\Sigma)\right). \tag{23}\] It is worth noting that the gauge parameter \(\tilde{\Sigma}_{\mu}=\Sigma_{\mu}-\frac{m^{2}}{2}N_{\mu}(N\cdot\Sigma)\) has the same form as \(\tilde{A}_{\mu}\) in Eq. (9), which was obtained in the Maxwell case. This is expected since \(\Sigma_{\mu}\) has the additional gauge symmetry (21), similar to the \(A_{\mu}\) field. Additionally, it is interesting to note that for a 0-form field, our prescription implies that \(\tilde{\phi}=\phi-\frac{m^{2}}{2}N^{\mu}N_{\mu}\phi=\phi\), because \(N^{2}=0\). 
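Requirement (iii) can be checked explicitly in momentum space: applying the wiggle gauge variation to the ansatz (22) must reproduce an ordinary gradient of the shifted parameter \(\tilde{\Sigma}_{\mu}\) appearing in Eq. (23). The sympy sketch below is our own verification of this property on a plane-wave mode; all symbols are ours.

```python
import sympy as sp

p  = sp.symbols('p0:4')                      # covariant p_mu (d_mu -> i p_mu)
s  = sp.symbols('s0:4')                      # covariant gauge parameter Sigma_mu
m2 = sp.symbols('m2', positive=True)         # the VSR mass-squared parameter
nU, nL, I = (1, 0, 0, 1), (1, 0, 0, -1), sp.I

ndotp = sum(nU[a]*p[a] for a in range(4))
ndots = sum(nU[a]*s[a] for a in range(4))

def Btilde(B):                               # Eq. (22), with 1/(n.d) -> 1/(i n.p)
    def NB(nu):                              # N^alpha B_{alpha nu}
        return sum(nU[a]*B[a][nu] for a in range(4))/(I*ndotp)
    return [[B[mu][nu] - sp.Rational(1, 2)*m2*(nL[mu]/(I*ndotp)*NB(nu)
             - nL[nu]/(I*ndotp)*NB(mu)) for nu in range(4)] for mu in range(4)]

# wiggle gauge variation: delta B_{mu nu} = d~_mu Sigma_nu - d~_nu Sigma_mu
pt = [p[mu] - sp.Rational(1, 2)*m2*nL[mu]/ndotp for mu in range(4)]
dB = [[I*(pt[mu]*s[nu] - pt[nu]*s[mu]) for nu in range(4)] for mu in range(4)]

# expected variation of B~: an ordinary gradient of Sigma~ (Eq. 23)
st = [s[mu] - sp.Rational(1, 2)*m2*nL[mu]*ndots/(I*ndotp)**2 for mu in range(4)]
expected = [[I*(p[mu]*st[nu] - p[nu]*st[mu]) for nu in range(4)] for mu in range(4)]

dBt = Btilde(dB)                             # B~ is linear in B
ok = all(sp.simplify(dBt[mu][nu] - expected[mu][nu]) == 0
         for mu in range(4) for nu in range(4))
print("Transformation property (23) verified:", ok)
```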
Once we have found \(\tilde{B}_{\mu\nu}\), we can define the tensor \[\mathcal{H}_{\mu\nu\alpha}= \partial_{\mu}\tilde{B}_{\nu\alpha}+\partial_{\alpha}\tilde{B}_{ \mu\nu}+\partial_{\nu}\tilde{B}_{\alpha\mu}, \tag{24}\] whose explicit form is given by: \[\mathcal{H}_{\mu\nu\alpha} = H_{\mu\nu\alpha}+\frac{1}{2}m^{2}\left[N_{\mu}N^{\sigma}\left( \partial_{\nu}B_{\sigma\alpha}-\partial_{\alpha}B_{\sigma\nu}\right)+N_{\nu} N^{\sigma}\left(\partial_{\alpha}B_{\sigma\mu}-\partial_{\mu}B_{\sigma\alpha} \right)+N_{\alpha}N^{\sigma}\left(\partial_{\mu}B_{\sigma\nu}-\partial_{\nu}B _{\sigma\mu}\right)\right]. \tag{25}\] Also, the \(SIM(2)\)-covariant field strength tensor \(\tilde{H}_{\mu\nu\alpha}\) can be defined as \[\tilde{H}_{\mu\nu\alpha}\equiv\tilde{\partial}_{\mu}B_{\nu\alpha}+\tilde{ \partial}_{\alpha}B_{\mu\nu}+\tilde{\partial}_{\nu}B_{\alpha\mu}. \tag{26}\] Taking the difference between the two kinds of field strengths \(\tilde{H}_{\mu\nu\alpha}-\mathcal{H}_{\mu\nu\alpha}\), we obtain \[\tilde{H}_{\mu\nu\alpha}-\mathcal{H}_{\mu\nu\alpha}=\frac{1}{2}m^{2}\left[N_{ \mu}N^{\sigma}H_{\sigma\nu\alpha}+N_{\nu}N^{\sigma}H_{\sigma\alpha\mu}+N_{ \alpha}N^{\sigma}H_{\sigma\mu\nu}\right]. \tag{27}\] Furthermore, we can rewrite \(\tilde{H}_{\mu\nu\alpha}\) solely in terms of \(\mathcal{H}_{\mu\nu\alpha}\), thereby guaranteeing the invariance of \(\tilde{H}_{\mu\nu\alpha}\) under both \(SIM(2)\) and the wiggle gauge transformations, as required in condition (iii). To this end, it is easy to check the following identity: \[N_{\mu}N^{\sigma}H_{\sigma\nu\alpha}+N_{\nu}N^{\sigma}H_{\sigma\alpha\mu}+N_{ \alpha}N^{\sigma}H_{\sigma\mu\nu}=N_{\mu}N^{\sigma}\mathcal{H}_{\sigma\nu \alpha}+N_{\nu}N^{\sigma}\mathcal{H}_{\sigma\alpha\mu}+N_{\alpha}N^{\sigma} \mathcal{H}_{\sigma\mu\nu}, \tag{28}\] where using the properties of the operator \(N_{\mu}\) we have that [10] \[N\cdot N=0,\ \ N\cdot\partial=1,\ \ [N^{\mu},N^{\nu}]=[N^{\mu},\partial^{ \nu}]=0, \tag{29}\] and the integration by parts rule holds: \[\int d^{4}xf(x)\left(N^{\mu}g(x)\right)=-\int d^{4}x\left(N^{\mu}f(x)\right)g(x). \tag{30}\] Besides, it would also be consistent to set \(N^{\mu}\phi(x)\equiv 0\) if \(\phi\) is a constant. With all the above results, the \(SIM(2)\)-covariant tensor \(\tilde{H}_{\mu\nu\alpha}\) can be cast as \[\tilde{H}_{\mu\nu\alpha}=\mathcal{H}_{\mu\nu\alpha}+\frac{1}{2}m^{2}\left[N_{ \mu}N^{\sigma}\mathcal{H}_{\sigma\nu\alpha}+N_{\nu}N^{\sigma}\mathcal{H}_{ \sigma\alpha\mu}+N_{\alpha}N^{\sigma}\mathcal{H}_{\sigma\mu\nu}\right]\, \tag{31}\] which is a natural generalization of the relation (12). Finally, the \(SIM(2)\)-invariant action of the Kalb-Ramond field \(B_{\mu\nu}\) is represented by \[S=\int d^{4}x\left[-\frac{1}{12}\tilde{H}_{\mu\nu\alpha}\tilde{H}^{\mu\nu \alpha}\right], \tag{32}\] and with help the of Eq. (31) we can write it as \[S=\int d^{4}x\left[-\frac{1}{12}H_{\mu\nu\alpha}H^{\mu\nu\alpha}+\frac{1}{4}m ^{2}n^{\alpha}\left(\frac{1}{n\cdot\partial}H_{\alpha\mu\nu}\right)n_{\sigma} \left(\frac{1}{n\cdot\partial}H^{\sigma\mu\nu}\right)\right], \tag{33}\] where we performed an additional change of field variables \(B_{\mu\nu}\to B_{\mu\nu}+\frac{m^{2}}{2}\left(N_{\mu}N^{\alpha}B_{\alpha\nu}- N_{\nu}N^{\alpha}B_{\alpha\mu}\right)\). Therefore, similar to the Maxwell case, the \(SIM(2)\)-modified Kalb-Ramond action (32) is invariant under the standard gauge transformation (20). 
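The identity (28), which guarantees that \(\tilde{H}_{\mu\nu\alpha}\) can be written solely in terms of \(\mathcal{H}_{\mu\nu\alpha}\), can also be verified symbolically on a single Fourier mode. The following sketch is ours and not part of the paper; it builds \(H\) from a generic antisymmetric \(B_{\mu\nu}\), \(\mathcal{H}\) from the corresponding \(\tilde{B}_{\mu\nu}\) of Eq. (22), and compares the two cyclic combinations.

```python
import sympy as sp

p = sp.symbols('p0:4')                        # covariant p_mu
m2 = sp.symbols('m2', positive=True)
nU, nL, I = (1, 0, 0, 1), (1, 0, 0, -1), sp.I
ndotp = sum(nU[a]*p[a] for a in range(4))

bsym = sp.symbols('b0:6')                     # generic antisymmetric B_{mu nu}
B = [[0]*4 for _ in range(4)]
k = 0
for i in range(4):
    for j in range(i+1, 4):
        B[i][j], B[j][i] = bsym[k], -bsym[k]; k += 1

def NB(T, nu):                                # N^sigma T_{sigma nu} on the mode
    return sum(nU[s]*T[s][nu] for s in range(4))/(I*ndotp)

Bt = [[B[mu][nu] - sp.Rational(1, 2)*m2*((nL[mu]/(I*ndotp))*NB(B, nu)
       - (nL[nu]/(I*ndotp))*NB(B, mu)) for nu in range(4)] for mu in range(4)]   # Eq. (22)

def strength(T):                              # Eqs. (18)/(24) in momentum space
    return lambda mu, nu, al: I*(p[mu]*T[nu][al] + p[al]*T[mu][nu] + p[nu]*T[al][mu])

H, calH = strength(B), strength(Bt)

def cyclic(F, mu, nu, al):                    # N_mu N^sigma F_{sigma nu alpha} + cyclic terms
    NsF = lambda x, y: sum(nU[s]*F(s, x, y) for s in range(4))/(I*ndotp)
    return (nL[mu]*NsF(nu, al) + nL[nu]*NsF(al, mu) + nL[al]*NsF(mu, nu))/(I*ndotp)

ok = all(sp.simplify(cyclic(H, mu, nu, al) - cyclic(calH, mu, nu, al)) == 0
         for mu in range(4) for nu in range(4) for al in range(4))
print("Identity (28) verified:", ok)
```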
The equation of motion follows from the action (33) by varying with respect to \(B_{\nu\alpha}\), Explicitly, we find \[\partial_{\mu}H^{\mu\nu\alpha}+m^{2}\left(N_{\sigma}H^{\sigma\nu\alpha}+N^{ \nu}N_{\sigma}\partial_{\mu}H^{\sigma\alpha\mu}+N^{\alpha}N_{\sigma}\partial_ {\mu}H^{\sigma\mu\nu}\right)=0. \tag{34}\] By contracting Eq. (34) with \(N_{\nu}\), we obtain the following constraint: \[N_{\nu}\partial_{\mu}H^{\mu\nu\alpha}=0. \tag{35}\] Inserting this constraint back into the equation of motion, we find \[\partial_{\mu}H^{\mu\nu\alpha}+m^{2}N_{\sigma}H^{\sigma\nu\alpha}=0. \tag{36}\] To find the physical modes, we must fix the gauge freedom. We can chose, analogous to the Maxwell case, the Lorentz gauge \[\partial_{\mu}B^{\mu\nu}=0. \tag{37}\] Then the equation of motion (36) and the constraint (35) become, respectively: \[\left(\Box+m^{2}\right)B^{\nu\alpha}+m^{2}\left(N_{\sigma}\partial^{\alpha}B^ {\sigma\nu}+N_{\sigma}\partial^{\nu}B^{\alpha\sigma}\right)=0, \tag{38}\] \[\Box N_{\nu}B^{\nu\alpha}=0. \tag{39}\] The form of Eq. (38) still contains redundant degrees of freedom. Indeed, the gauge condition (37) is insufficient to fix the gauge freedom completely since we can construct a solution \(B^{\prime\nu\alpha}=B^{\nu\alpha}+\partial^{\nu}\Sigma^{\alpha}-\partial^{ \alpha}\Sigma^{\nu}\), which preserves the Lorentz gauge (37) and satisfies the equation of motion (38). So, we can impose an additional condition on the field \(B^{\prime\nu\alpha}\), namely, \[N_{\nu}B^{\prime\nu\alpha}=0, \tag{40}\] by choosing the gauge parameter \(\Sigma^{\alpha}\) as \[\Sigma^{\alpha}-\partial^{\alpha}N_{\nu}\Sigma^{\nu}=-N_{\nu}B^{\nu\alpha}. \tag{41}\] However, the last relation is invariant under the residual gauge symmetry (21), and we can use this fact to impose the condition \[N_{\nu}\Sigma^{\nu}=0, \tag{42}\] by fixing the scalar gauge parameter as \(\phi=N_{\nu}\Sigma^{\nu}\). Thus, the gauge parameter is given by \[\Sigma^{\alpha}=-N_{\nu}B^{\nu\alpha}. \tag{43}\] Now, we can show that the Lorentz gauge condition is valid to \(B^{\prime\nu\alpha}\). From (37) we find \[\partial_{\nu}B^{\prime\nu\alpha}=\Box\Sigma^{\alpha}-\partial^{\alpha} \partial_{\nu}\Sigma^{\nu}. \tag{44}\] On the other hand, the constraint condition (39) implies that \[\Box N_{\nu}B^{\prime\nu\alpha}=\Box\Sigma^{\alpha}=0, \tag{45}\] where we used the relations \(N_{\nu}B^{\prime\nu\alpha}=0\) and \(N_{\nu}\Sigma^{\nu}=0\). Immediately, from (44) it follows that \[\partial_{\nu}B^{\prime\nu\alpha}=-\partial^{\alpha}\partial_{\nu}\Sigma^{\nu}, \tag{46}\] and by Eq. (43), we obtain the claimed result \[\partial_{\nu}B^{\prime\nu\alpha}=\partial^{\alpha}\partial_{\nu}N_{\mu}B^{ \mu\nu}=0, \tag{47}\] where in the last step we use again the Lorentz gauge \(\partial_{\mu}B^{\mu\nu}=0\). Finally, by applying the subsidiary condition \(N_{\nu}B^{\nu\alpha}=0\) to the equation of motion (38), it takes the simple form \[\left(\Box+m^{2}\right)B^{\nu\alpha}=0, \tag{48}\] which represents a wave equation for a particle of mass \(m\). At the end, the Kalb-Ramond field in VSR satisfies the standard Klein-Gordon equation under two gauge conditions \[\partial_{\nu}B^{\nu\alpha}=0,\quad N_{\nu}B^{\nu\alpha}=0. \tag{49}\] The general solution to Eq. 
(48) takes the form: \[B^{\mu\nu}(x)=\int\frac{d^{3}\mathbf{p}}{\sqrt{(2\pi)^{3}2\omega_{\mathbf{p}}}}\left[B^{\mu\nu}(\mathbf{p})e^{ip\cdot x}+B^{\mu\nu*}(\mathbf{p})e^{-ip\cdot x}\right], \tag{50}\] where \(\omega_{\mathbf{p}}=\sqrt{\mathbf{p}^{2}+m^{2}}\), and the associated 4-momenta \(p^{\mu}\) are on shell such that \(p^{\mu}=(\omega_{\mathbf{p}},\mathbf{p})\). The Fourier coefficients \(B^{\mu\nu}(\mathbf{p})\) can be expanded over a basis of polarization antisymmetric 2-tensors, labeled by \(\lambda=1,\cdots,6\): \[B^{\mu\nu}(\mathbf{p})=a_{\mathbf{p},\lambda}\epsilon^{\mu\nu}(\mathbf{p},\lambda). \tag{51}\] To find the physical polarization states, it is convenient to analyze the solution (50) in the rest frame where \(k^{\mu}=(m,\mathbf{0})\). The solution for a general \(p^{\mu}\) can then be obtained by applying a VSR boost, i.e., \(p^{\mu}=L(p)^{\mu}_{\ \nu}k^{\nu}\), with \[L(p)=T_{1}(\beta_{1})T_{2}(\beta_{2})L_{3}(\xi)=e^{i\beta_{1}\mathcal{T}^{1}}e^{i\beta_{2}\mathcal{T}^{2}}e^{i\xi\mathcal{K}^{3}}, \tag{52}\] where \(\mathcal{T}^{1}\), \(\mathcal{T}^{2}\), and \(\mathcal{K}^{3}\) are the generators of the \(SIM(2)\) group in the vector representation [4]. In the rest frame, the gauge conditions (49) become \[k_{\mu}\epsilon^{\mu\nu}(\mathbf{k},\lambda)=0,\quad n_{\mu}\epsilon^{\mu\nu}(\mathbf{k},\lambda)=0. \tag{53}\] The first condition implies that \(\epsilon^{0i}(\mathbf{k},\lambda)=0\) with \(i=1,2,3\), which eliminates three polarizations. The second condition gives \(\epsilon^{3j}(\mathbf{k},\lambda)=0\) with \(j=1,2\), which eliminates two more polarizations. Therefore, there is only one non-zero polarization, \(\epsilon^{12}(\mathbf{k},\lambda)\), which means that the free Kalb-Ramond field in VSR has only one degree of freedom, equivalent to a single massive scalar field.

## IV Maxwell-Kalb-Ramond Vacuum Polarization in VSR

In this section, we calculate the effective action for the case of a fermion field interacting with Maxwell and Kalb-Ramond fields within the context of VSR, which has been developed in the preceding sections. As we proceed, we will obtain exact solutions to the one-loop vacuum polarization amplitudes involving the external gauge fields. Let us start by recalling that Kalb-Ramond quantum electrodynamics is a \(U(1)\) gauge theory that involves a 2-form gauge potential \(B_{\mu\nu}(x)\) and a string matter field \(\psi(x(\sigma))\) serving as the source for \(B_{\mu\nu}(x)\) [31]. Considering the complexities inherent in string theory-based systems, let us focus on exploring a simplified scenario within the framework of VSR. Specifically, we investigate an interaction model involving a point-like fermion field and the Maxwell and Kalb-Ramond fields in four-dimensional Minkowski spacetime. As we will see below, this type of interaction is only viable if \(\psi\) does not carry any Kalb-Ramond charge and couples nonminimally with it [50].
We consider \(SIM(2)\)-covariant gauge theories under the \(U(1)\) gauge transformations \[B_{\mu\nu} \to B_{\mu\nu}+\overset{\ast}{\partial}_{\mu}\Sigma_{\nu}- \overset{\ast}{\partial}_{\nu}\Sigma_{\mu}, \tag{54}\] \[A_{\mu} \to A_{\mu}+\tilde{\partial}_{\mu}\Lambda,\] (55) \[\psi \rightarrow\exp\left\{ie\Lambda\right\}\psi, \tag{56}\] where we have defined the wiggle operators as \[\tilde{\partial}_{\mu}=\partial_{\mu}+\frac{1}{2}\frac{m_{A}^{2}}{n\cdot \partial}n_{\mu},\hskip 14.226378pt\text{and}\hskip 14.226378pt\overset{\ast}{ \partial}_{\mu}=\partial_{\mu}+\frac{1}{2}\frac{m_{B}^{2}}{n\cdot\partial}n_ {\mu}, \tag{57}\] where \(m_{A}\) and \(m_{B}\) represent the VSR-mass associated with the Maxwell and Kalb-Ramond fields, respectively. The simplest Lagrangian density that can be constructed, invariant under the aforementioned gauge transformations, is given by [50]: \[\mathcal{L} =-\frac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu}-\frac{1}{12} \overset{\ast}{H}_{\mu\nu\alpha}\overset{\ast}{H}^{\mu\nu\alpha}\] \[+\bar{\psi}\left(i\not{D}+i\frac{m_{\psi}^{2}}{2}\frac{\not{\! \kappa}}{n\cdot D}+\frac{1}{12}\frac{g}{m}\sigma_{\mu\nu\lambda}\overset{\ast }{H}^{\mu\nu\lambda}-m\right)\psi, \tag{58}\] where \(m\) represents the usual fermion mass, \(m_{\psi}\) the VSR-mass associated to the \(\psi\) field, and \(g\) is a coupling constant with mass dimension \([g]=M^{-2}\) (in natural units). Also, the operator \(D_{\mu}\) denotes the standard covariant derivative, given by \[D_{\mu}=\partial_{\mu}-ieA_{\mu}, \tag{59}\] and \(\sigma_{\mu\nu\lambda}\) represents the fully antisymmetrized product of two gamma matrices normalized to unit strength, defined as \[\sigma_{\mu\nu\lambda}=i\epsilon_{\mu\nu\lambda\alpha}\gamma_{5}\gamma^{\alpha}, \tag{60}\] where \(\epsilon_{\mu\nu\lambda\alpha}\) is the Levi-Civita symbol. It satisfies the commutation relation \[[\sigma_{\mu\nu\lambda},\gamma_{\sigma}]=2i\epsilon_{\mu\nu\lambda\sigma} \gamma_{5}. \tag{61}\] It is important to note that the Lagrangian density (58) involves a nonminimal coupling of the Kalb-Ramond field to point-like fermions. We could interpret this kind of coupling as similar to that involving neutral particles, such as the neutron, interacting with the electromagnetic field through their anomalous magnetic moments. In order to determine the effective Lagrangian resulting from fermionic vacuum polarization at the one-loop level, we consider the generating functional defined as \[e^{iS_{\text{eff}}[A,B]}=\mathcal{N}\int\mathcal{D}\bar{\psi}\mathcal{D}\psi e ^{i\int d^{4}x\left[-\frac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu}-\frac{1}{1 2}\widetilde{\widetilde{H}}_{\mu\nu\alpha}\widetilde{H}^{\mu\nu\alpha}+\bar{ \psi}\left(i\not{D}+i\frac{m_{\psi}^{2}}{2}\frac{\not{M}}{n\cdot D}+\frac{1}{ 12}\frac{\alpha}{m}\sigma_{\mu\nu\lambda}\widetilde{H}^{\mu\nu\lambda}-m \right)\psi\right]}, \tag{62}\] where \(\mathcal{N}\) is a normalization constant which will be used to absorb field-independent factors. 
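The algebraic properties of the nonminimal vertex can be checked with explicit Dirac matrices: Eq. (60) defines \(\sigma_{\mu\nu\lambda}\), and Eq. (61) then follows from \(\{\gamma^{\alpha},\gamma_{\sigma}\}=2\delta^{\alpha}_{\ \sigma}\) together with the anticommutation of \(\gamma_{5}\) with the \(\gamma^{\mu}\). The numeric sketch below is our illustration, using the Dirac representation and the convention \(\epsilon_{0123}=+1\); it verifies the commutation relation (61) for all index choices.

```python
import itertools
import numpy as np

# Dirac representation, gamma5 = i g^0 g^1 g^2 g^3, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]   # gamma^mu
g5 = 1j*gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
gamma_low = [sum(eta[mu, nu]*gamma[nu] for nu in range(4)) for mu in range(4)]   # gamma_mu

def eps(*idx):                       # Levi-Civita symbol with eps_{0123} = +1
    return float(np.linalg.det(np.eye(4)[list(idx)])) if len(set(idx)) == 4 else 0.0

def sigma(mu, nu, lam):              # Eq. (60): sigma_{mu nu lam} = i eps_{mu nu lam alpha} gamma5 gamma^alpha
    return 1j*sum(eps(mu, nu, lam, al)*g5 @ gamma[al] for al in range(4))

# Eq. (61): [sigma_{mu nu lam}, gamma_sigma] = 2 i eps_{mu nu lam sigma} gamma5
ok = all(np.allclose(sigma(m, n, l) @ gamma_low[s] - gamma_low[s] @ sigma(m, n, l),
                     2j*eps(m, n, l, s)*g5)
         for m, n, l, s in itertools.product(range(4), repeat=4))
print("Commutation relation (61) holds:", ok)
```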
By performing the fermionic integration, we obtain (up to a field-independent factor that can be absorbed in the normalization): \[S_{\text{eff}}[A,B] =\int d^{4}x\left[-\frac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu \nu}-\frac{1}{12}\widetilde{\widetilde{H}}_{\mu\nu\alpha}\widetilde{\widetilde{ H}}^{\mu\nu\alpha}\right]\] \[-i\text{Tr}\ln\left[i\not{D}+i\frac{m_{\psi}^{2}}{2}\frac{\not{ \mathcal{N}}}{n\cdot D}+\frac{1}{12}\frac{g}{m}\sigma_{\mu\nu\lambda} \widetilde{\widetilde{H}}^{\mu\nu\lambda}-m\right], \tag{63}\] where Tr stands for the trace over Dirac matrices as well as the trace over the integration in coordinate space. At this point, it is worth mentioning that the Lorentz covariant calculation of vacuum polarization for the model under study was carried out in Ref. [50] for the simplified case of constant fields using Schwinger's approach. In this work, we extend this calculation to the \(SIM(2)\)-covariant case using Feynman techniques without the restriction to constant external gauge fields. Furthermore, we can verify whether our results can reproduce those obtained in the literature by taking the limit \(m_{A,B,\psi}\to 0\) and assuming constant \(F\) and \(H\) fields. To evaluate the trace in Eq. (63), we notice that the term \(1/(n\cdot D)\) is a non-local operator that depends on the vector gauge field \(A_{\mu}\). Since our focus is on perturbative computations, we treat the non-local term as a perturbative expansion in the fermionic Lagrangian density. This is accomplished by using the following matrix identity: \[\frac{1}{A+B}=\frac{1}{A}-\frac{1}{A}B\frac{1}{A+B}=\frac{1}{A}-\frac{1}{A}B\frac {1}{A}+\frac{1}{A}B\frac{1}{A}B\frac{1}{A+B}=\cdots \tag{64}\] Therefore, we can obtain the corresponding new types of vertices involving more than one external gauge vector field from the perturbative Lagrangian density: \[\mathcal{L}_{\text{fermion}} =\bar{\psi}\left[\left(i\not{\partial}+i\frac{m_{\psi}^{2}}{2} \frac{\not{\pi}}{n\cdot\partial}-m\right)\right.\] \[+e\left(\not{\mathcal{A}}-\frac{m_{\psi}^{2}}{2}\frac{\not{\pi}} {n\cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial}\right)+\frac{1}{12}\frac{g }{m}\sigma_{\mu\nu\lambda}\overset{\approx}{H}^{\mu\nu\lambda}\] \[-ie^{2}\left(\frac{m_{\psi}^{2}}{2}\frac{\not{\pi}}{n\cdot \partial}(n\cdot A)\frac{1}{n\cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial} \right)\] \[\left.e^{3}\left(\frac{m_{\psi}^{2}}{2}\frac{\not{\pi}}{n\cdot \partial}(n\cdot A)\frac{1}{n\cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial} (n\cdot A)\frac{1}{n\cdot\partial}\right)+\cdots\right]\psi. \tag{65}\] Thus, we can write the nontrivial part of the effective action as \[S_{\text{eff}}^{(n)}[A,B]=i\text{Tr}\sum_{n=1}^{\infty}\frac{1}{n}\left[\frac{ i}{i\not{\partial}+i\frac{m_{\psi}^{2}}{2}\frac{\not{\pi}}{n\cdot\partial}-m}i \hat{\mathcal{O}}\right]^{n}, \tag{66}\] where the operator \(\hat{\mathcal{O}}\) can be determined from Eq. (65). The formal contributions of this formula will give rise to the \(n\)-point vertex functions of the fields \(A_{\mu}\) and \(B_{\mu\nu}\). At this point, a graphical representation may be helpful. Following the conventions depicted in Fig. 1, the contributions to the tadpole and the self-energy are illustrated in Figs. 2 and 3 up to one-loop order. For \(n=1\) the expression (66) gives rise to three contributions (67) up to second order in the coupling constant \(e\), as graphically indicated in Fig. 2. \[S_{\text{eff}}^{(1)}=S_{\text{eff}}^{(1,a)}+S_{\text{eff}}^{(1,b)}+S_{\text{ eff}}^{(1,c)}. 
\tag{67}\] Figure 1: Feynman rules. Continuous, wave, and double wave lines represent the fermion propagator, the gauge vector \(A^{\mu}\), and the field strength \(\overset{\approx}{H}^{\mu\nu\lambda}\), respectively. We found that the tadpole diagrams 1(a) and 1(b) vanish, as expected by Furry's theorem [24]. The remaining contribution, i.e., the self-energy 1(c), gives \[S_{\rm eff}^{(1,c)}=i{\rm Tr}\left[i\frac{i}{\not{\partial}+i\frac{m_{\psi}^{2}} {2}\frac{\not{n}}{n\cdot\partial}-m}e^{2}\left(\frac{m_{\psi}^{2}}{2}\frac{ \not{n}}{n\cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial}(n\cdot A)\frac{1}{ n\cdot\partial}\right)\right], \tag{68}\] and we can write it in momentum space. The result is \[S_{\rm eff}^{(1,c)}=\frac{i}{2}\int\frac{d^{4}q}{(2\pi)^{4}}\Pi_{\mu\nu}^{(1,c )}A^{\mu}(q)A^{\nu}(-q), \tag{69}\] where \[\Pi_{\mu\nu}^{(1,c)}=-ie^{2}m_{\psi}^{2}{\rm Tr}\int\frac{d^{4}p}{(2\pi)^{4}} \frac{i(\not{p}+m)}{p^{2}-\mu^{2}}\frac{\not{m}_{\mu}n_{\nu}}{(n\cdot p)^{2}n \cdot u}, \tag{70}\] with \(u=p+q\), and \(\mu^{2}=m_{\psi}^{2}+m^{2}\) representing the modified fermion mass. Besides, we define the wiggle momentum by \[\tilde{p}_{\mu}=p_{\mu}-\frac{m_{\psi}^{2}}{2}\frac{n_{\mu}}{n\cdot p}. \tag{71}\] Figure 3: The contributions corresponding to \(n=2\) include: (a) Vacuum polarization \(AA\), (b) Vacuum polarization \(HA\), (c) Vacuum polarization \(AH\) and (d) Vacuum polarization \(HH\). Figure 2: The contributions corresponding to \(n=1\) include: (a) Tadpole-\(A\), (b) Tadpole-\(H\), and (c) Vacuum polarization \(AA\). After calculating the traces of the Dirac matrices in four dimensions, the expression (70) yields \[\Pi^{(1,c)}_{\mu\nu}=\int\frac{d^{4}p}{(2\pi)^{4}}\frac{4e^{2}m_{\psi}^{2}n_{\mu} n_{\nu}}{(p^{2}-\mu^{2})(n.p)(n.u)}. \tag{72}\] This integral is ultraviolet divergent and also exhibits an infrared divergence when \((n\cdot p)=0\). To deal with these types of divergences in VSR, we use the Mandelstan-Leibbrant prescription [26; 27; 28] \[\frac{1}{n\cdot p}=\lim_{\epsilon\to 0}\frac{(\bar{n}\cdot p)}{(n\cdot p)( \bar{n}\cdot p)+i\epsilon}, \tag{73}\] where \(\bar{n}_{\mu}\) is a extra null vector which obeys \((n\cdot\bar{n})=1\). Moreover, we employ a useful decomposition formula \[\frac{1}{n\cdot(p+q_{i})n\cdot(p+q_{j})}=\frac{1}{n\cdot(q_{i}-q_{j})}\left( \frac{1}{n\cdot(p+q_{j})}-\frac{1}{n\cdot(p+q_{i})}\right), \tag{74}\] to isolate factors as \(1/(n\cdot p)\) in each momentum integration. The resulting loop integrals can be evaluated using the formulas introduced by Alfaro in Ref. [29]. Here, we will quote the most basic of them: \[\int dp\frac{1}{\left[p^{2}+2p\cdot q-m^{2}\right]^{a}}\frac{1}{ \left(n\cdot p\right)^{b}} =(-1)^{a+b}i\pi^{\omega}(-2)^{b}\frac{\Gamma(a+b-\omega)}{\Gamma (a)\Gamma(b)}\left(\bar{n}\cdot q\right)^{b}\] \[\times\int_{0}^{1}dtt^{b-1}\frac{1}{\left(m^{2}+q^{2}-2(n\cdot q )(\bar{n}\cdot q)t\right)^{a+b-\omega}}, \tag{75}\] where \(dp\) is the integration measure in \(d\)-dimensional space with \(\omega=d/2\). Following the calculation procedure mentioned above to evaluate the integral in Eq. (72), we obtain: \[\Pi^{(1,c)}_{\mu\nu}=-i\frac{e^{2}m_{\psi}^{2}}{n\cdot q}\int_{0}^{1}dt\left[ \frac{\Gamma(2-\omega)(\bar{n}\cdot q)n_{\mu}n_{\nu}}{2^{2\omega-3}\pi^{ \omega}\left(\mu^{2}-2t(n\cdot q)(\bar{n}\cdot q)\right)^{2-\omega}}\right]. 
\tag{76}\] As we can note, the Mandelstam-Leibbrandt prescription (73) adopted in the formula for the loop integral (75) introduces a new null vector \(\bar{n}_{\mu}\), which _a priori_ could break the \(SIM(2)\) symmetry. To preserve the \(SIM(2)\) covariance in this calculation, Alfaro proposes in Ref. [30] to fix the vector \(\bar{n}_{\mu}\) as a linear combination of the original null vector \(n_{\mu}\) and the external momentum of the diagram. By imposing certain conditions, such as reality, appropriate scaling \((n,\bar{n})\rightarrow(\lambda n,\lambda^{-1}\bar{n})\), and being dimensionless [24; 25], the vector \(\bar{n}_{\mu}\) can be expressed in the following form: \[\bar{n}_{\mu}=\frac{q_{\mu}}{n\cdot q}-\frac{q^{2}n_{\mu}}{2(n\cdot q)^{2}}, \tag{77}\] which leads to \(\bar{n}\cdot q=\frac{q^{2}}{2(n\cdot q)}\). Hence, we can substitute this result into Eq. (76), and after integrating over the variable \(t\), we find \[\Pi_{\mu\nu}^{(1,c)}=-\frac{ie^{2}m_{\psi}^{2}}{4\pi^{2}}\left\{\frac{q^{2}n_{ \mu}n_{\nu}}{(2-\omega)(n\cdot q)^{2}}+\frac{n_{\mu}n_{\nu}}{(n\cdot q)^{2}} \left[\mu^{2}\ln\left(1-\frac{q^{2}}{\mu^{2}}\right)+q^{2}\left(1-\gamma_{\rm E }-\ln\left(\frac{\mu^{2}-q^{2}}{4\pi}\right)\right)\right]\right\}, \tag{78}\] where we have performed an expansion around \(\omega=2\) and retained only the \(1/(2-\omega)\) pole and the finite terms when \(\omega\to 2^{+}\). We note that this result is exclusively an effect of the VSR and goes to zero when we take the limit \(m_{\psi}^{2}\to 0\). For \(n=2\) the expression (66) yields four 2-point amplitudes up to one-loop order \[S_{\rm eff}^{(2)}=S_{\rm eff}^{(2,a)}+S_{\rm eff}^{(2,b)}+S_{\rm eff}^{(2,c)}+ S_{\rm eff}^{(2,d)}, \tag{79}\] as depicted in Fig. 3. We have explicitly verified that graphs 2(b) and 2(c) vanish after momentum integration. The only nontrivial contributions come from graphs 2(a) and 2(d). The Feynman diagram 2(a) corresponds to the usual vacuum polarization of the photon observed in QED, with the additional VSR nonlocal corrections incorporated into the fermion propagator and vertex. Its analytical expression is given by \[S_{\rm eff}^{(2,a)} = \frac{i}{2}{\rm Tr}\left[\frac{i}{i\not{\partial}+i\frac{m_{\psi} ^{2}}{2}\frac{\not{n}}{n\cdot\partial}-m}ie\left(\not{\cal A}-\frac{m_{\psi}^{2 }}{2}\frac{\not{n}}{n\cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial}\right)\right. \tag{80}\] \[\times \left.\frac{i}{i\not{\partial}+i\frac{m_{\psi}^{2}}{2}\frac{\not{ n}}{n\cdot\partial}-m}ie\left(\not{\cal A}-\frac{m_{\psi}^{2}}{2}\frac{\not{n}}{n \cdot\partial}(n\cdot A)\frac{1}{n\cdot\partial}\right)\right], \tag{81}\] which written in momentum space result in \[S_{\rm eff}^{(2,a)}=\frac{i}{2}\int\frac{d^{4}q}{(2\pi)^{4}}\Pi_{\mu\nu}^{(2,a )}A^{\mu}(q)A^{\nu}(-q), \tag{82}\] where \[\Pi_{(2,a)}^{\mu\nu}=-e^{2}{\rm Tr}\int\frac{d^{4}p}{(2\pi)^{4}}\frac{i(\not{ p}+m)}{p^{2}-\mu^{2}}\left(\gamma^{\mu}+\frac{m_{\psi}^{2}}{2}\frac{\not{n}}{n \cdot p}n^{\mu}\frac{1}{n\cdot u}\right)\frac{i(\vec{\mathbf{a}}+m)} {u^{2}-\mu^{2}}\left(\gamma^{\nu}+\frac{m_{\psi}^{2}}{2}\frac{\not{n}}{n\cdot u }n^{\nu}\frac{1}{n\cdot p}\right). 
\tag{83}\] Following the procedure outlined earlier, we find \[\Pi_{\mu\nu}^{(2,a)} = \frac{-ie^{2}}{12\pi^{2}(2-\omega)}\left[q_{\mu}q_{\nu}-q^{2} \left(\eta_{\mu\nu}+3m_{\psi}^{2}\frac{n_{\mu}n_{\nu}}{(n\cdot q)^{2}}\right)\right] \tag{84}\] \[+ \frac{ie^{2}}{4\pi^{2}}\left\{\frac{1}{3}\left(q_{\mu}q_{\nu}-q^{ 2}\eta_{\mu\nu}\right)\left[\gamma_{\rm E}-\ln 4\pi+6{\cal I}_{1}\right]\right.\] \[- \left.\frac{m_{\psi}^{2}q^{2}}{n\cdot q}\left[n_{\mu}q_{\nu}+n_{ \nu}q_{\mu}-(n\cdot q)\eta_{\mu\nu}\right]{\cal I}_{2}\right.\] \[- \left.\frac{m_{\psi}^{2}q^{2}n_{\mu}n_{\nu}}{(n.q)^{2}}\left[ \gamma_{\rm E}-\ln 4\pi+2q^{2}{\cal I}_{3}-{\cal I}_{4}\right]\right\},\] where we have defined the integrals: \[\mathcal{I}_{1}\equiv\int_{0}^{1}dxx(1-x)\ln\left(\mu^{2}-q^{2}x(1-x)\right), \tag{85}\] \[\mathcal{I}_{2}\equiv\int_{0}^{1}dx\int_{0}^{1}dt\frac{x}{\mu^{2}-q^{2}x(1-x+tx)}, \tag{86}\] \[\mathcal{I}_{3}\equiv\int_{0}^{1}dx\int_{0}^{1}dt\frac{x^{2}(t-2)(1-x+tx)}{\mu^ {2}-q^{2}x(1-x+tx)}, \tag{87}\] \[\mathcal{I}_{4}\equiv\int_{0}^{1}dx\int_{0}^{1}dt(1-6x+4tx)\ln\left(\mu^{2}-q^{ 2}x(1-x+tx)\right). \tag{88}\] Expression (84), by itself, does not have a transverse structure as required by \(U(1)\) gauge invariance of the photon field. To bring the photon self-energy into its desired form, we must include the full diagram contributions \(\Pi^{\rm(Total)}_{\mu\nu}=\Pi^{(1,c)}_{\mu\nu}+\Pi^{(2,a)}_{\mu\nu}\). Let us separate this sum into two parts. The one with the simplest structure is the divergent contribution, namely, \[\Pi^{\rm(Total)}_{\mu\nu}\Big{|}_{\rm div} =\left.\Pi^{(1,c)}_{\mu\nu}\right|_{\rm div}+\left.\Pi^{(2,a)}_{ \mu\nu}\right|_{\rm div} \tag{89}\] \[=\frac{-ie^{2}}{12\pi^{2}(2-\omega)}\left(q_{\mu}q_{\nu}-q^{2} \eta_{\mu\nu}\right), \tag{90}\] which has the same form present in the usual QED. The second part is UV finite and exhibits a more intricate structure. To simplify the integrals \(\mathcal{I}_{i}\), we expand the corresponding contributions as a power series in the external momenta before integrating over the parameters \(x\) and \(t\). The resulting expression can be written in the following form: \[\Pi^{\rm(Total)}_{\mu\nu}\Big{|}_{\rm finite} =\left.\Pi^{(1,c)}_{\mu\nu}\right|_{\rm finite}+\left.\Pi^{(2,a)} _{\mu\nu}\right|_{\rm finite}\] \[=\frac{ie^{2}}{12\pi^{2}}\left\{\left(\gamma_{\rm E}+\ln(\frac{ \mu^{2}}{4\pi})-\frac{1}{5}\frac{q^{2}}{\mu^{2}}-\frac{3}{140}\frac{q^{4}}{\mu ^{4}}-\frac{1}{315}\frac{q^{6}}{\mu^{6}}-\cdots\right)\left(q_{\mu}q_{\nu}-q^ {2}\eta_{\mu\nu}\right)\right.\] \[-\left.m_{\psi}^{2}\left(\frac{3}{2}\frac{q^{2}}{\mu^{2}}+\frac{ 5}{8}\frac{q^{4}}{\mu^{4}}+\frac{19}{60}\frac{q^{6}}{\mu^{6}}+\cdots\right) \left(\frac{n_{\mu}q_{\nu}+n_{\nu}q_{\mu}}{n\cdot q}-\frac{q^{2}n_{\mu}n_{\nu }}{(n\cdot q)^{2}}-\eta_{\mu\nu}\right)\right\}, \tag{91}\] where \(\cdots\) indicates terms of higher order in the power of \(q^{2}/\mu^{2}\). In the end, the full one-loop vacuum polarization of the photon in the VSR-QED is manifestly transverse, as required by the Ward identity \(q_{\mu}\Pi^{\mu\nu}(q)=0\). But only the UV finite part receives corrections due to the VSR-nonlocal terms. Note that a similar result was obtained recently in Ref. [25] for the low-energy limit, \(q^{2}\ll\mu^{2}\), in the context of Maxwell-Chern-Simons electrodynamics within the VSR framework. 
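The transversality claimed for the total polarization tensor can be confirmed at the level of the two tensor structures that appear in Eqs. (90) and (91). The small sympy check below is our illustration, with the structures written with upper indices; both are annihilated by \(q_{\mu}\), consistent with the Ward identity \(q_{\mu}\Pi^{\mu\nu}(q)=0\).

```python
import sympy as sp

q = sp.Matrix(sp.symbols('q0:4'))         # contravariant q^mu
n = sp.Matrix([1, 0, 0, 1])               # n^mu
g = sp.diag(1, -1, -1, -1)                # eta_{mu nu}; numerically equal to eta^{mu nu}
q2, nq = (q.T*g*q)[0], (n.T*g*q)[0]

T1 = q*q.T - q2*g                                      # q^mu q^nu - q^2 eta^{mu nu}
T2 = (n*q.T + q*n.T)/nq - q2*(n*n.T)/nq**2 - g         # VSR structure of Eq. (91)

for name, T in (("q q - q^2 eta", T1), ("VSR structure", T2)):
    print(name, "transverse:", sp.simplify(q.T*g*T) == sp.zeros(1, 4))
```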
The results obtained so far allow us to write the VSR effective Lagrangian for the gauge field \(A_{\mu}\) as follows: \[\mathcal{L}_{\text{eff}}[A] =\frac{1}{Z_{A}}\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right)+ \frac{1}{2}m_{A}^{2}n_{\mu}\left(\frac{1}{n\cdot\partial}F^{\mu\nu}\right)n^{ \alpha}\left(\frac{1}{n\cdot\partial}F_{\alpha\nu}\right)\] \[-\frac{e^{2}}{12\pi^{2}}\left\{-\frac{1}{4}F_{\mu\nu}\left(\frac {1}{5}\frac{\Box}{\mu^{2}}-\frac{3}{140}\frac{\Box^{2}}{\mu^{4}}+\frac{1}{315} \frac{\Box^{3}}{\mu^{6}}-\cdots\right)F^{\mu\nu}\right.\] \[\left.-\frac{m_{\psi}^{2}}{2}n_{\mu}\left(\frac{1}{n\cdot\partial }F^{\mu\nu}\right)\left(\frac{3}{2}\frac{\Box}{\mu^{2}}-\frac{5}{8}\frac{\Box ^{2}}{\mu^{4}}+\frac{19}{60}\frac{\Box^{3}}{\mu^{6}}+\cdots\right)n^{\alpha} \left(\frac{1}{n\cdot\partial}F_{\alpha\nu}\right)\right\}, \tag{92}\] where \[\frac{1}{Z_{A}}=1+\frac{e^{2}}{12\pi^{2}}C_{\text{div}}, \tag{93}\] with \[C_{\text{div}}=\frac{1}{2-\omega}-\gamma_{\text{E}}-\ln\frac{\mu^{2}}{4\pi}. \tag{94}\] In order to yield a divergence-free effective Lagrangian we perform a suitable change of scale, by defining the renormalized quantities: \[A_{R}^{\mu}=Z_{A}^{-\frac{1}{2}}A^{\mu}, \tag{95}\] \[e_{R}=Z_{A}^{\frac{1}{2}}e, \tag{96}\] \[m_{AR}=Z_{A}^{\frac{1}{2}}m_{A}, \tag{97}\] Using the above scale transformations we get \[\mathcal{L}_{\text{eff}}[A] =-\frac{1}{4}F_{R\mu\nu}F_{R}^{\mu\nu}+\frac{1}{2}m_{AR}^{2}n_{ \mu}\left(\frac{1}{n\cdot\partial}F_{R}^{\mu\nu}\right)n^{\alpha}\left(\frac {1}{n\cdot\partial}F_{R\alpha\nu}\right)\] \[-\frac{e_{R}^{2}}{12\pi^{2}}\left\{-\frac{1}{4}F_{R\mu\nu}\left( \frac{1}{5}\frac{\Box}{\mu^{2}}-\frac{3}{140}\frac{\Box^{2}}{\mu^{4}}+\frac{1 }{315}\frac{\Box^{3}}{\mu^{6}}-\cdots\right)F_{R}^{\mu\nu}\right.\] \[\left.-\frac{m_{\psi}^{2}}{2}n_{\mu}\left(\frac{1}{n\cdot\partial }F_{R}^{\mu\nu}\right)\left(\frac{3}{2}\frac{\Box}{\mu^{2}}-\frac{5}{8}\frac{ \Box^{2}}{\mu^{4}}+\frac{19}{60}\frac{\Box^{3}}{\mu^{6}}+\cdots\right)n^{ \alpha}\left(\frac{1}{n\cdot\partial}F_{R\alpha\nu}\right)\right\}. \tag{98}\] This effective Lagrangian retains the exact form of its classical counterpart, incorporating finite terms encompassing higher-derivative and VSR-nonlocal corrections. In particular, no additional counterterms were required to cancel the divergences, and the tensorial structure of these finite terms preserves both VSR and gauge symmetry. Now we calculate the Kalb-Ramond self-energy from diagram 2(d) in Fig. 3. Once again, expression (66) furnishes the analytical form of the corresponding effective action: \[S_{\text{eff}}^{(2,d)} =\frac{i}{2}\text{Tr}\left[\frac{i}{i\not{\partial}+i\frac{m_{\psi}^ {2}}{2}\frac{\not{\mu}}{n\cdot\partial}-m}i\frac{g}{12m}\sigma_{\mu\nu\lambda} \stackrel{{\sim}}{{H}}^{\mu\nu\lambda}\right. \tag{99}\] \[\times\left.\frac{i}{i\not{\partial}+i\frac{m_{\psi}^{2}}{2}\frac{ \not{\mu}}{n\cdot\partial}-m}i\frac{g}{12m}\sigma_{\alpha\beta\delta} \stackrel{{\sim}}{{H}}^{\alpha\beta\delta}\right]. \tag{100}\] Thus, we can write it in the momentum space as \[S_{\text{eff}}^{(2,d)}=\frac{i}{2}\int\frac{d^{4}q}{(2\pi)^{4}}\Pi_{\mu\nu \lambda;\alpha\beta\delta}^{(2,d)}\stackrel{{\sim}}{{R}}^{\mu\nu \lambda}(q)\stackrel{{\sim}}{{H}}^{\alpha\beta\delta}(-q), \tag{101}\] where \[\Pi_{\mu\nu\lambda;\alpha\beta\delta}^{(2,d)}=\left(\frac{ig}{12m}\right)^{2} \text{Tr}\int\frac{d^{4}p}{(2\pi)^{4}}\frac{i(\not{p}+\not{}n)^{2}}{p^{2}-\mu ^{2}}\sigma_{\mu\nu\alpha}\frac{i(\vec{a}+m)}{u^{2}-\mu^{2}}\sigma_{\alpha \beta\delta}. 
\tag{102}\] After making the Dirac traces and the momentum integration, we obtain \[\Pi_{(2,d)}^{\mu\nu\lambda;\alpha\beta\delta} =\frac{ig^{2}\epsilon^{\mu\nu\lambda\sigma}\epsilon^{\alpha\beta \delta\rho}}{9m^{2}2^{2+2\omega}\pi^{\omega}}\Gamma(2-\omega)\left\{2q_{\sigma} q_{\rho}\int_{0}^{1}dx\frac{x(1-x)}{\left(\mu^{2}-q^{2}x(1-x)\right)^{2-\omega}}\right.\] \[-\frac{m_{\psi}^{2}q^{2}n_{\sigma}n_{\rho}}{(n\cdot q)^{2}}\int_ {0}^{1}dx\int_{0}^{1}dt\frac{m_{\psi}^{2}x(2-\omega)+\mu^{2}-q^{2}x(1-x+tx)}{ \left(\mu^{2}-q^{2}x(1-x+tx)\right)^{3-\omega}} \tag{103}\] \[+\frac{m_{\psi}^{2}\left(n_{\sigma}q_{\rho}+n_{\rho}q_{\sigma} \right)}{n\cdot q}\int_{0}^{1}dx\int_{0}^{1}dt\frac{\mu^{2}+q^{2}x(1-x+tx)(1- \omega)}{\left(\mu^{2}-q^{2}x(1-x+tx)\right)^{3-\omega}}\] \[-\left.\eta_{\sigma\rho}\int_{0}^{1}dx\int_{0}^{1}dt\left[\frac{m _{\psi}^{2}q^{2}x(2-\omega)}{\left(\mu^{2}-q^{2}x(1-x+tx)\right)^{3-\omega}}- \frac{2\left(\mu^{2}-m_{\psi}^{2}-q^{2}x(1-x)\right)}{\left(\mu^{2}-q^{2}x(1- x)\right)^{2-\omega}}\right]\right\}.\] As in the previous case, we can split the above result into divergent and finite parts as follows: \[\Pi_{(2,d)}^{\mu\nu\lambda;\alpha\beta\delta}\Big{|}_{\text{div}}=\frac{ig^{2 }\epsilon^{\mu\nu\lambda\sigma}\epsilon^{\alpha\beta\delta\rho}}{576\pi^{2}m^{ 2}(2-\omega)}\left[\frac{1}{3}\left(q_{\sigma}q_{\rho}+(6\mu^{2}-q^{2})\eta_{ \sigma\rho}\right)+m_{\psi}^{2}\left(\frac{n_{\sigma}q_{\rho}+n_{\rho}q_{ \sigma}}{n\cdot q}-\frac{q^{2}n_{\sigma}n_{\rho}}{(n\cdot q)^{2}}-2\eta_{ \sigma\rho}\right)\right], \tag{104}\] and \[\Pi_{(2,d)}^{\mu\nu\lambda;\alpha\beta\delta}\Big{|}_{\text{finite}}=\] \[=\frac{ig^{2}\epsilon^{\mu\nu\lambda\sigma}\epsilon^{\alpha\beta \delta\rho}}{576\pi^{2}m^{2}}\left\{\frac{1}{3}q^{2}\eta_{\sigma\rho}-\frac{1} {3}(\gamma_{\text{E}}+\ln\frac{\mu^{2}}{4\pi})\left(q_{\sigma}q_{\rho}+(6\mu^ {2}-q^{2})\eta_{\sigma\rho}\right)\right.\] \[-m_{\psi}^{2}\left[(\gamma_{\text{E}}+\ln\frac{\mu^{2}}{4\pi}) \left(\frac{n_{\sigma}q_{\rho}+n_{\rho}q_{\sigma}}{n\cdot q}-\frac{q^{2}n_{ \sigma}n_{\rho}}{(n\cdot q)^{2}}-2\eta_{\sigma\rho}\right)\right.\] \[\left.+\frac{q^{2}}{\mu^{2}}\left(\frac{q^{2}n_{\rho}n_{\sigma}}{3( n\cdot q)^{2}}-\frac{2\left(n_{\sigma}q_{\rho}+n_{\rho}q_{\sigma}\right)}{3n \cdot q}+\frac{5\eta_{\sigma\rho}}{6}\right)+\frac{q^{4}}{\mu^{4}}\left(\frac{3 q^{2}n_{\rho}n_{\sigma}}{40(n\cdot q)^{2}}-\frac{9\left(n_{\sigma}q_{\rho}+n_{\rho}q_{ \sigma}\right)}{40n\cdot q}+\frac{29\eta_{\sigma\rho}}{120}\right)+\cdots\right]\] \[-m_{\psi}^{4}\left(\frac{1}{2}\frac{q^{2}}{\mu^{2}}+\frac{5}{24} \frac{q^{4}}{\mu^{4}}+\cdots\right)\frac{n_{\rho}n_{\sigma}}{(n\cdot q)^{2}}+ \frac{q^{2}}{\mu^{2}}\left(\frac{q_{\rho}q_{\sigma}}{15}-\frac{q^{2}\eta_{ \sigma\rho}}{30}\right)+\frac{q^{4}}{\mu^{4}}\left(\frac{q_{\rho}q_{\sigma}}{1 40}-\frac{q^{2}\eta_{\sigma\rho}}{420}\right)+\cdots\right\}. \tag{105}\] It should be noted that both the divergent and finite parts receive radiative corrections due to VSR-nonlocal terms, in contrast to the pure Maxwell case. To determine the tensor structure of the quantum effective action and the necessary counterterms to renormalize it, we can utilize the identity \[H^{\star}_{\mu}H^{\star}_{\nu}=\frac{1}{2}H_{\mu\alpha\beta}H_{\nu}^{\ \alpha\beta}- \frac{1}{6}\eta_{\mu\nu}H_{\alpha\beta\lambda}H^{\alpha\beta\lambda}, \tag{106}\] where \(H^{\star}_{\mu}\) is the dual tensor corresponding to \(H_{\mu\nu\lambda}\) and is given by \[H^{\star}_{\mu}\equiv\frac{1}{3!}\epsilon_{\mu\alpha\beta\lambda}H^{\alpha \beta\lambda}. 
\tag{107}\] So we can write the effective Lagrangian density associated with the Kalb-Ramond field as \[\mathcal{L}_{\text{eff}}[B] =\frac{1}{Z_{B}}\left(-\frac{1}{12}\overset{\sim}{H}_{\mu\nu \alpha}\overset{\sim}{H}^{\mu\nu\alpha}\right)\] \[+\frac{g^{2}}{16\pi^{2}m^{2}}C_{\text{div}}\left\{\frac{1}{12} \overset{\sim}{H}_{\mu}^{\ \alpha\beta}(\partial^{\mu}\partial^{\nu})\overset{ \sim}{H}_{\nu\alpha\beta}\right.\] \[\left.+\frac{3}{12}m_{\psi}^{2}\left[\overset{\sim}{H}_{\mu}^{ \ \alpha\beta}\left(\frac{n^{\mu}n^{\nu}\Box}{(n\cdot\partial)^{2}}\right) \overset{\sim}{H}_{\nu\alpha\beta}-\overset{\sim}{H}_{\mu}^{\ \alpha\beta}\left(\frac{n^{\mu}\partial^{\nu}+n^{\nu} \partial^{\mu}}{n\cdot\partial}\right)\overset{\sim}{H}_{\nu\alpha\beta} \right]\right\}\] \[+\text{finite terms}, \tag{108}\] where \[\frac{1}{Z_{B}}=1-\frac{g^{2}\mu^{2}}{8\pi^{2}m^{2}}C_{\text{div}}, \tag{109}\] with \(C_{\text{div}}\) defined in Eq. (94). We note that from the second and third terms in Eq. (108), it is necessary to introduce new counterterms into the classical action in order to renormalize it within the minimal subtraction scheme. This is an expected result since it is well-known that nonminimal couplings can lead to higher-derivative divergences in the quantum effective action. Hence, at this point, we may interpret our model as an effective theory that is valid in the low-momentum limit, where \(q^{2}\ll m^{2}\). In this regime, we can make the approximations \(\mu^{2}\approx m^{2}\) with \(m_{\psi}^{2}\ll 1\). Thus we have \[\frac{1}{Z_{B}}\approx 1-\frac{g^{2}}{8\pi^{2}}C_{\text{div}} \tag{110}\] and by defining a renormalized field \(B_{R}^{\mu\nu}=Z_{B}^{-\frac{1}{2}}B^{\mu\nu}\) and a renormalized coupling constant \(g_{R}=Z_{B}^{\frac{1}{2}}g\), we get \[\mathcal{L}_{\text{eff}}[B]\approx-\frac{1}{12}\overset{\sim}{H}_{R\mu\nu \alpha}\overset{\sim}{H}_{R}^{\ \mu\nu\alpha}+\text{finite terms}, \tag{111}\] which has same form as the classical free Kalb-Ramond Lagrangian in the VSR context. Moreover, in the limit of \(m_{B}\to 0\), we recover the Lorentz covariant result obtained in Ref. [50] at leading order in \(H^{2}\), with the same values for the renormalized quantities. ## V Conclusions In this work, we propose an extension of Maxwell and Kalb-Ramond electrodynamics in the presence of fermionic matter fields in a \(SIM(2)\)-gauge invariant manner, incorporating both minimal and nonminimal couplings. In the free case, the VSR-Kalb-Ramond field is equivalent to a single massive real scalar field with one polarization. Moreover, we have calculated the VSR-effective action for the Maxwell and Kalb-Ramond field strengths using the Alfaro-Mandelstam-Leibbrandt prescription [29], which accounts for UV/IR mixing divergences in the one-loop Feynman integrals. The induced quantum corrections include higher-derivative terms that preserve the VSR-nonlocal tensor structure. Specifically, the finite terms in the Maxwell sector exhibit the VSR-nonlocal tensor structure, while the divergent part retains Lorentz covariance. On the other hand, in the Kalb-Ramond sector, both finite and divergent terms incorporate VSR-nonlocal corrections. In the latter case, the counterterms necessary to cancel the divergences are absent in the bare Lagrangian, and additional counterterms must be introduced into the classical Lagrangian to absorb these divergences and render the theory well-defined. 
However, a renormalized effective action can be achieved in the weak-energy limit, similar to what has been obtained in the literature for the Lorentz-covariant case of constant \(F\) and \(H\) field strengths [50]. ###### Acknowledgements. The authors thank the Fundacao Cearense de Apoio ao Desenvolvimento Cientifico e Tecnologico (FUNCAP), the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), and the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Grant no. 200879/2022-7 (RVM). This work is also supported by the Spanish Agencia Estatal de Investigacion (grant PID2020-116567GB-C21 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe) and by the project PROMETEO/2020/079 (Generalitat Valenciana). R. V. Maluf thanks the Departament of Theoretical Physics & IFIC of the University of Valencia - CSIC for the kind hospitality.
2302.11786
Elliptic flow of inclusive charged hadrons in Au+Au collisions at $E_{lab} = 35\mathrm{~A~GeV}$ using the PHSD model
Elliptic flow ($v_2$) measurements of inclusive charged hadrons at mid-rapidity ($|\eta| <$ 1.0) in Au+Au collisions at $E_{lab} = 35\mathrm{~A~GeV}$ using the Parton Hadron String Dynamics (PHSD) model are presented as a function of centrality, transverse momentum ($p_\mathrm{T}$) and pseudo-rapidity ($\eta$). The $v_2$ results are obtained using the $\eta$-sub event plane method with respect to event plane angle ($\psi_{2}$) and participant plane angle ($\psi_{2}^{PP}$). $p_\mathrm{T}$-integrated charged hadron $v_2$ shows a strong centrality dependence in Au+Au collisions at $E_{lab} = 35\mathrm{~A~GeV}$. The eccentricity scaled elliptic flow ($v_2/\varepsilon_2$) also shows centrality dependence. The higher values of $v_2/\varepsilon_2$ in central collisions suggest development of stronger collectivity. The measurements are compared with the results from Au+Au collisions at $\sqrt{s_{\mathrm{NN}}} = 7.7\mathrm{~GeV}$ published by the STAR experiment at RHIC. We also compare results of HSD and PHSD modes of the model to investigate contribution of hadronic and partonic phases of the medium on the measured $v_2$. The current results serve as a prediction of the collective behavior of the matter produced in baryon rich and moderate temperature conditions for the upcoming Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR). These predictions are also useful for the interpretation of data measured at RHIC Beam Energy Scan (BES) program.
Waseem Bhat, M. Farooq Mir, Vipul Bairathi, Towseef Bhat, Sonia Kabana, Shabir Bhat
2023-02-23T05:41:04Z
http://arxiv.org/abs/2302.11786v1
Elliptic flow of inclusive charged hadrons in Au+Au collisions at \(E_{lab}=35\) A GeV using the PHSD model ###### Abstract Elliptic flow (\(v_{2}\)) measurements of inclusive charged hadrons at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}=35\) A GeV using the Parton Hadron String Dynamics (PHSD) model are presented as a function of centrality, transverse momentum (\(p_{\rm T}\)) and pseudo-rapidity (\(\eta\)). The \(v_{2}\) results are obtained using the \(\eta\)-sub event plane method with respect to event plane angle (\(\psi_{2}\)) and participant plane angle (\(\psi_{2}^{PP}\)). \(p_{\rm T}\)-integrated charged hadron \(v_{2}\) shows a strong centrality dependence in Au+Au collisions at \(E_{lab}=35\) A GeV. The eccentricity scaled elliptic flow (\(v_{2}/\varepsilon_{2}\)) also shows centrality dependence. The higher values of \(v_{2}/\varepsilon_{2}\) in central collisions suggest development of stronger collectivity. The measurements are compared with the results from Au+Au collisions at \(\sqrt{s_{\rm NN}}=7.7\) GeV published by the STAR experiment at RHIC. We also compare results of HSD and PHSD modes of the model to investigate contribution of hadronic and partonic phases of the medium on the measured \(v_{2}\). The current results serve as a prediction of the collective behavior of the matter produced in baryon rich and moderate temperature conditions for the upcoming Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR). These predictions are also useful for the interpretation of data measured at RHIC Beam Energy Scan (BES) program. * February 2023 _Keywords_: Heavy-ion collisions, Elliptic flow, PHSD model ## 1 Introduction Lattice quantum chromodynamics (lQCD) predicts a phase transition from normal nuclear matter to a novel state of matter composed of de-confined quarks and gluons, called the quark gluon plasma (QGP) [1, 2, 3]. The experimental facilities like Relativistic Heavy Ion Collider (RHIC) [4, 5, 6, 7] and Large Hadron Collider (LHC) [8, 9, 10] are designed to collide heavy-ions at high temperatures and low net baryon densities to study properties of the QGP. The upcoming CBM experiment at FAIR aims to operate at moderate temperatures and high net baryon densities for studying the QGP medium created in heavy-ion collisions. It will collide Au-ions at beam energies from 2 A GeV to 35 A GeV [11]. In these collisions, the densities are expected to reach 6 to 12 times the ordinary nuclear matter density at the point of collision, which could result in formation of the QGP medium [12, 13]. Various observables have been studied experimentally as well as theoretically to probe the QGP medium created in relativistic heavy-ion collisions. Collective flow is one such observable, which plays a vital role in understanding properties of the QGP medium [14, 15]. In non-central nucleus-nucleus collisions, the initial spatial anisotropy transforms into momentum space anisotropy due to multi-particle interactions among the constituents of the medium. The momentum space anisotropy can be measured by azimuthal angle distribution of the produced particles with respect to the reaction plane. The reaction plane is the plane formed by the impact parameter vector and the beam direction. The azimuthal angle distribution of the produced particles can be expanded in terms of a Fourier series as, \[\frac{dN}{d(\phi-\Psi_{R})}\propto 1+2\sum_{n=1}^{\infty}v_{n}\cos\left[n\left( \phi-\Psi_{R}\right)\right]. 
\tag{1}\] The \(2^{nd}\)-order Fourier coefficient \(v_{2}\), known as elliptic flow, provides a strong evidence for the formation of QGP matter [14, 16, 17, 18]. The charged hadron \(v_{2}\) measurements suggest a hydrodynamic behavior of the QGP with a very low shear viscosity to entropy density ratio (\(\eta/s\)) [19, 20]. Further, it is also sensitive to the early phases of the collision and level of thermalization achieved by the system created in heavy-ion collisions [18, 19, 21, 22]. Therefore, the charged hadron \(v_{2}\) as a function of centrality, transverse momentum (\(p_{T}\)) and pseudo-rapidity (\(\eta\)) can provide information about the medium created in heavy-ion collisions. In this paper, we report measurements of inclusive charged hadron elliptic flow at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV (\(\sqrt{s_{NN}}\approx 8.0\) GeV) using the PHSD model [23, 24, 25]. These \(v_{2}\) measurements are the first predictions for the FAIR energy (\(E_{lab}\) = 35 A GeV) using the PHSD model. The \(v_{2}\) results are compared with the published STAR experimental data from Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV [26]. We discuss centrality dependence of \(p_{T}\)-integrated elliptic flow (\(\langle v_{2}\rangle\)). The differential elliptic flow as a function of \(p_{T}\) and \(\eta\) are studied for different centrality classes. We also study eccentricity scaled elliptic flow (\(v_{2}/\varepsilon_{2}\)) and discuss collectivity in Au+Au collisions at \(E_{lab}\) = 35 A GeV. We study effect of partonic and hadronic interactions by comparing the \(v_{2}\) results from partonic (PHSD) and hadronic (HSD) modes of the PHSD model. The paper is organized in the following way. In section 2, we describe the PHSD model in brief. The analysis method is presented in section 3. In section 4, we present the results on integrated and differential \(v_{2}\). Eccentricity scaled elliptic flow (\(v_{2}/\varepsilon_{2}\)) is discussed. A comparison of \(v_{2}(p_{T})\) between the HSD and PHSD modes of the PHSD model is also investigated. Finally, we summarize and conclude the results in section 5. ## 2 PHSD model PHSD model is a microscopic covariant transport method that describes highly interacting hadronic and partonic matter produced in heavy-ion collisions [23, 24, 25]. It is a consistent dynamical approach formulated on the basis of Kadanoff-Baym (KB) equations [27] or off-shell transport equations in phase-space representation. The Dynamical Quasi-Particle Model (DQPM) [28, 29, 30] is designed to describe the strongly interacting non-perturbative nature of the QCD matter. The DQPM is mapped to replicate lQCD results [31, 32] for a quark-gluon plasma in thermodynamic equilibrium and serves as the foundation for the transport of theoretical description of the QGP phase in the PHSD model. The DQPM is thermodynamically consistent (contradictory to the massless pQCD partons) in which strongly interacting quarks and gluons represent the degrees of freedom. The hadronization procedure complies with all applicable conservation laws (flavor current and four-momentum) because of off-shell nature of hadrons and partons in each event [33]. The off-shell Hadron-String Dynamics (HSD)[34, 35] dynamics, which incorporates self-energies for the hadronic degrees of freedom, governs the hadronic system. 
The low-energy hadron-hadron collisions are modeled in accordance with experimental cross sections, whereas inelastic hadron-hadron collisions with energies exceeding \(\sqrt{s_{NN}}\geq 2.6\) GeV are described using the FRITIOF 7.02 [36] and PYTHIA 6.4 [37] models. We have simulated 50 million Au+Au collision events at \(E_{lab}=35\) A GeV using the PHSD model. We have employed version 4.1 of the PHSD model, which incorporates both partonic (PHSD) and hadronic (HSD) modes. An impact parameter range from 0 to 15 \(fm\) has been used to generate the events. We have taken a hadronic cascade time of 500 \(fm/c\) for the elliptic flow calculation. All the measurements are done in the mid-rapidity (\(|\eta|<1.0\)) region. In this study, the charged particle multiplicity within \(|\eta|<0.5\) is used for the determination of centrality. The reference multiplicity distribution is shown in figure 1 for Au+Au collisions at \(E_{lab}=35\) A GeV using the PHSD model. The multiplicity is divided into nine centrality classes from \(0-80\%\).

Figure 1: (Color online) Reference multiplicity distribution in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model.

## 3 Flow Analysis Method

### Event Plane Method

The event plane method is widely used for the measurement of elliptic flow (\(v_{2}\)) in heavy-ion collisions [38, 39]. In this method, \(v_{2}\) is calculated with respect to the \(2^{nd}\) harmonic event plane angle \(\psi_{2}\), given by \[\psi_{2}=\frac{1}{2}\tan^{-1}\frac{\sum_{i}w_{i}\sin(2\phi_{i})}{\sum_{i}w_{i}\cos(2\phi_{i})}, \tag{2}\] where \(\phi_{i}\) and \(w_{i}\) represent the azimuthal angle and the weight of the \(i^{th}\) particle, respectively. We have taken the transverse momentum as the weight to optimize the event plane resolution. After the reconstruction of the event plane angle, \(v_{2}\) is calculated using the equation \[v_{2}=\left\langle\cos\left[2(\phi-\psi_{2})\right]\right\rangle. \tag{3}\] The angle brackets denote an average over all the produced particles in all events. Due to the limited number of particles used in the calculation of the event plane angle, the estimated event plane differs from the true reaction plane. To account for this effect, the observed \(v_{2}\) is divided by the event plane resolution. To eliminate auto-correlation and non-flow effects, the \(\eta\)-sub event plane method is used [38]. In this method, each event is split into two equal-multiplicity sub-events with an \(\eta\) gap of 0.15 between them. For each event, the sub-event plane angles are calculated in the positive (\(0.075<\eta<1.0\)) and negative (\(-1.0<\eta<-0.075\)) pseudo-rapidity regions. Additionally, we have also calculated \(v_{2}\) with respect to the participant plane angle (\(\psi_{2}^{PP}\)) and with respect to \(\psi_{2}=0\).

### Event plane resolution

The event plane angle resolution as a function of centrality at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model is shown in figure 2. For comparison with the STAR experimental data, we have converted the sub-event plane angle resolution into the full event plane angle resolution (\(R_{Full}=\sqrt{2}\times R_{\eta-sub}\)), where \(R_{Full}\) is the full event plane resolution and \(R_{\eta-sub}\) represents the \(\eta\)-sub event plane resolution. The event plane resolution from the PHSD model matches well with the experimental data from Au+Au collisions at \(\sqrt{s_{NN}}=7.7\) GeV for all centralities except the most central collisions, where a small deviation is observed. The resolution peaks in the 20-30% centrality class and decreases towards both central and peripheral collisions. This is due to the two competing effects of the low number of produced particles in peripheral collisions and the small \(v_{2}\) values in central collisions.

Figure 2: (Color online) Event plane angle resolution as a function of centrality at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. The resolution from the STAR experiment in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV is also shown [26].
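The \(\eta\)-sub event plane procedure described above is straightforward to prototype. The following self-contained Python sketch is our illustration, not the analysis code used for this paper: it generates toy events with a known input \(v_{2}\), reconstructs \(\psi_{2}\) in the two pseudo-rapidity hemispheres with \(p_{T}\) weights as in Eq. (2), estimates the \(\eta\)-sub resolution from \(\sqrt{\langle\cos 2(\psi_{2}^{a}-\psi_{2}^{b})\rangle}\), and recovers the input flow after the resolution correction. The event multiplicity and the toy \(p_{T}\) spectrum are arbitrary choices made only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_event(n_part=300, v2=0.06):
    """One toy event with a random reaction plane and input v2,
    sampled from dN/dphi ~ 1 + 2 v2 cos[2(phi - psi_r)] by accept-reject."""
    psi_r = rng.uniform(0, np.pi)
    phi = rng.uniform(0, 2*np.pi, 20*n_part)
    keep = rng.uniform(0, 1 + 2*v2, phi.size) < 1 + 2*v2*np.cos(2*(phi - psi_r))
    phi = phi[keep][:n_part]
    pt = rng.exponential(0.5, phi.size)          # toy pT weights (GeV/c), illustrative only
    eta = rng.uniform(-1.0, 1.0, phi.size)
    return pt, eta, phi

def psi2(pt, phi):                               # Eq. (2) with pT weights
    return 0.5*np.arctan2(np.sum(pt*np.sin(2*phi)), np.sum(pt*np.cos(2*phi)))

v2_obs, pair_cos = [], []
for _ in range(2000):
    pt, eta, phi = toy_event()
    pos, neg = eta > 0.075, eta < -0.075         # eta-sub events with a total gap of 0.15
    psi_pos, psi_neg = psi2(pt[pos], phi[pos]), psi2(pt[neg], phi[neg])
    # correlate each particle with the sub-event plane of the opposite hemisphere
    v2_obs.append(np.mean(np.concatenate([np.cos(2*(phi[pos] - psi_neg)),
                                          np.cos(2*(phi[neg] - psi_pos))])))
    pair_cos.append(np.cos(2*(psi_pos - psi_neg)))

res_sub = np.sqrt(np.mean(pair_cos))             # eta-sub event-plane resolution
print("R_eta-sub =", round(res_sub, 3),
      " R_full ~ sqrt(2)*R_eta-sub =", round(np.sqrt(2)*res_sub, 3))
print("corrected v2 =", round(np.mean(v2_obs)/res_sub, 4), " (input 0.06)")
```

In the actual analysis the same correction is applied per centrality class, and the full-event resolution shown in figure 2 follows from the \(\sqrt{2}\) conversion quoted in the text.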
## 4 Results

### Inclusive charged hadron elliptic flow

Integrated elliptic flow \(\langle v_{2}\rangle\) for inclusive charged hadrons as a function of centrality at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model is shown in figure 3. The obtained \(\langle v_{2}\rangle\) shows a clear centrality dependence, where the values increase from central to mid-central collisions and then decrease slowly for peripheral collisions. The maximum value of \(\langle v_{2}\rangle\) occurs around the 30-40% centrality class. The \(\langle v_{2}\rangle\) results are compared with the published STAR results from Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV [26]. We observed that the \(\langle v_{2}\rangle\) values follow a similar centrality dependence and are in good agreement with the published STAR experimental data within the statistical uncertainties. We also compare \(\langle v_{2}\rangle\) of charged hadrons calculated with respect to \(\psi_{2}=0\) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model, as shown in figure 3. The values of \(\langle v_{2}\rangle\) for \(\psi_{2}=0\) are similar to the \(\langle v_{2}\rangle\) values calculated with respect to \(\psi_{2}\) in central collisions (0-20%), but there are differences in mid-central and peripheral collisions. The difference further increases from mid-central to peripheral collisions, which could be due to the assumption of \(\psi_{2}=0\) for each event. We also compare \(\langle v_{2}\rangle\) calculated with respect to the participant plane angle (\(\psi_{2}^{PP}\)), as shown in figure 3. We observed that the magnitude of \(\langle v_{2}\rangle\) with respect to \(\psi_{2}^{PP}\) is lower than the \(\langle v_{2}\rangle\) with respect to \(\psi_{2}\) for all the centralities. This difference might be due to event-by-event fluctuations in the positions of the nucleons used for the calculation of the participant plane angle.

Figure 3: (Color online) Integrated elliptic flow (\(\langle v_{2}\rangle\)) for inclusive charged hadrons as a function of centrality at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. The results for the inclusive charged hadron \(\langle v_{2}\rangle\) from the STAR experimental data in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV are also shown [26].

We also report the transverse momentum (\(p_{T}\)) dependence of inclusive charged hadron \(v_{2}\) at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model, as shown in figure 4. There is a monotonic increase in \(v_{2}\) with \(p_{T}\) for all centrality classes. We observed a centrality dependence of \(v_{2}(p_{T})\), where the values increase from central to peripheral collisions for a given \(p_{T}\). The measured \(v_{2}(p_{T})\) is compared with the experimental data from Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV for three centrality classes, 10-20%, 20-30%, and 30-40% [26]. We observed that \(v_{2}(p_{T})\) agrees well with the experimental data within the statistical uncertainties for all three centrality classes.
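For reference, the participant plane angle and the participant eccentricity used in the following are obtained from the transverse positions of the participant nucleons. The short Python sketch below uses the standard second-order moment definitions; it is our illustration of the quantities \(\psi_{2}^{PP}\) and \(\varepsilon_{2}\), since the text does not write the formulas explicitly, and the function name is ours.

```python
import numpy as np

def participant_plane_and_eccentricity(x, y):
    """Second-order participant plane angle psi2_PP and participant
    eccentricity eps2 from participant-nucleon transverse positions (x, y).
    Standard moment definitions are assumed; centre of mass is subtracted."""
    x = x - x.mean()
    y = y - y.mean()
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    c2, s2 = np.mean(r2*np.cos(2*phi)), np.mean(r2*np.sin(2*phi))
    psi2_pp = (np.arctan2(s2, c2) + np.pi)/2.0
    eps2 = np.sqrt(c2**2 + s2**2)/np.mean(r2)
    return psi2_pp, eps2
```

With these quantities, \(v_{2}\{\psi_{2}^{PP}\}=\langle\cos 2(\phi-\psi_{2}^{PP})\rangle\) follows directly, and the eccentricity-scaled flow \(v_{2}/\varepsilon_{2}\) discussed below uses the average \(\varepsilon_{2}\) in each centrality bin.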
We observed that \(v_{2}(p_{T})\) increases monotonically for all the centrality classes. However, the magnitude of \(v_{2}(p_{T})\) calculated with respect to \(\psi_{2}^{PP}\) and \(\psi_{2}=0\) increases with \(p_{T}\) from central to mid-central collisions and then decreases towards peripheral collisions. ### Eccentricity scaling In this section, we discuss eccentricity-scaled elliptic flow (\(v_{2}/\varepsilon_{2}\)) of charged hadrons as a function of \(p_{T}\) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model as shown in figure 5. The obtained \(v_{2}\) is divided by the average participant eccentricity (\(\varepsilon_{2}\)) in each centrality bin. This scaling will remove the contribution of geometry of the initial overlap region. We observe \(v_{2}/\varepsilon_{2}\) increases monotonically with increasing \(p_{T}\) for all centrality classes. For a given \(p_{T}\), the value of \(v_{2}/\varepsilon_{2}\) is more in central collisions (10 Figure 3: (Color online) Integrated elliptic flow (\(\langle v_{2}\rangle\)) for inclusive charged hadrons as a function of centrality at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. The results for the inclusive charged hadron \(\langle v_{2}\rangle\) from the STAR experimental data in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV is also shown [26]. 20%) compared to peripheral collisions (50-60%). This observation suggests a stronger collectivity in central collisions. The \(v_{2}/\varepsilon_{2}\) is compared with the STAR results from Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV [26]. The results are consistent with the STAR experimental data within the statistical uncertainties for the measured \(p_{T}\) range. ### Pseudo-rapidity dependence of \(v_{2}\) The inclusive charged hadron \(v_{2}\) as a function of pseudo-rapidity (\(\eta\)) is measured in 10-40% central Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model as shown in figure 6. The \(v_{2}\) values are symmetric about \(\eta\sim 0\) in the measured pseudo-rapidity range (\(-1.0<\eta<1.0\)). The \(v_{2}\) does not change significantly over the measured \(\eta\) region. The \(v_{2}(\eta)\) results from the PHSD model are compared with the STAR experimental data from 10-40% central Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV [26]. We observe the values of \(v_{2}\) follow a similar \(\eta\) dependence as in the experimental data. However, the Figure 4: (Color online) Differential elliptic flow (\(v_{2}\)) as a function of \(p_{T}\) for charged hadrons at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. Charged hadron \(v_{2}(p_{T})\) from the STAR experimental data in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV is also shown [26]. magnitude of \(v_{2}(\eta)\) in Au+Au collisions at \(E_{lab}\) = 35 A GeV from the PHSD model is lower compared to the STAR experimental data. Figure 5: (Color online) \(v_{2}/\varepsilon_{2}\) as a function of \(p_{T}\) for charged hadrons at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. \(v_{2}/\varepsilon_{2}\) for charged hadrons from the STAR experiment in Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV is also shown [26]. Figure 6: (Color online) Inclusive charged hadron \(v_{2}\) as a function of \(\eta\) in 10-40% central Au+Au collisions at \(E_{lab}\) = 35 A GeV using the PHSD model. 
Charged hadron \(v_{2}(\eta)\) from the STAR experiment in 10-40% central Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7 GeV is also shown [26].

### Mode comparison

In this sub-section, we compare \(v_{2}(p_{T})\) between the hadronic (HSD) and partonic (PHSD) modes of the PHSD model. Figure 7 shows the inclusive charged hadron \(v_{2}\) as a function of \(p_{T}\) at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions at \(E_{lab}=35\) A GeV. The HSD mode incorporates only hadronic interactions, while the PHSD mode incorporates both hadronic and partonic interactions. We calculate the ratio of \(v_{2}(p_{T})\) between the HSD and PHSD modes, as shown in the lower panels of figure 7. The \(v_{2}(p_{T})\) increases with \(p_{T}\) in both modes for all centralities. However, the ratio between the two modes is less than unity, which shows that \(v_{2}(p_{T})\) from the HSD mode is smaller than from the PHSD mode. This difference could be attributed to the partonic interactions present in addition to the hadronic interactions in the PHSD mode. This observation indicates the formation of QGP in Au+Au collisions at \(E_{lab}=35\) A GeV in the PHSD model.

Figure 7: (Color online) Inclusive charged hadron \(v_{2}(p_{T})\) at mid-rapidity (\(|\eta|<1.0\)) in Au+Au collisions from the HSD and PHSD modes of the PHSD model at \(E_{lab}=35\) A GeV. The bottom panels show the ratio of \(v_{2}(p_{T})\) between the HSD and PHSD modes.

## 5 Summary and conclusions

We have reported elliptic flow measurements of inclusive charged hadrons at mid-rapidity in Au+Au collisions at \(E_{lab}=35\) A GeV using the PHSD model. The obtained \(v_{2}\) is calculated for nine centrality intervals from 0 to 80% using the \(\eta\)-sub event plane method. These \(v_{2}\) measurements of inclusive charged hadrons are the first predictions in Au+Au collisions at the FAIR energy (\(E_{lab}=35\) A GeV) using the PHSD model. The integrated charged hadron \(\langle v_{2}\rangle\) shows a clear centrality dependence and is consistent with the published STAR experimental data [26]. The differential elliptic flow increases with \(p_{T}\) for all the centrality classes studied. The observed \(p_{T}\) dependence of \(v_{2}\) for the 10-20%, 20-30%, and 30-40% centrality classes is found to agree well with the STAR experimental results within the statistical uncertainties [26]. The magnitude of the eccentricity-scaled elliptic flow (\(v_{2}/\varepsilon_{2}\)) is larger in central collisions than in mid-central and peripheral collisions, which suggests a stronger collectivity in central collisions. Charged hadron \(v_{2}\) as a function of \(\eta\) shows a weak dependence in the measured \(\eta\) range. However, there is a significant difference between the \(v_{2}(\eta)\) calculated from the PHSD model and the published STAR experimental data [26]. In addition, a comparison of \(v_{2}(p_{T})\) between the HSD and PHSD modes shows that the magnitude of \(v_{2}(p_{T})\) is larger in the PHSD mode than in the HSD mode. This observation suggests the formation of QGP in the initial stages of the collision at \(E_{lab}=35\) A GeV. The collision energy in this study has been selected to match the energy that will be available at the future CBM experiment at FAIR. Our results should help to predict the collective behaviour of particles emerging from baryon-rich fireballs created in heavy-ion interactions at this FAIR energy. These results are also useful for the understanding of data measured in the RHIC Beam Energy Scan (BES) program.
## 6 Acknowledgement

Sonia Kabana acknowledges the financial support received from ANID PIA/APOYO AFB220004. This research was supported in part by the cluster computing resource provided by the IT Division at the GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, Germany. The authors acknowledge helpful advice from the PHSD group members E. L. Bratkovskaya, V. Voronyuk, W. Cassing, P. Moreau, O. E. Soloveva, and L. Oliva.
2306.03675
Potential of the Julia programming language for high energy physics computing
Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on the code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used; while for physicists, who focus on the application when developing the code, better research productivity pleads for a high-level programming language. A popular approach consists of combining Python, used for the high-level interface, and C++, used for the computing intensive part of the code. A more convenient and efficient approach would be to use a language that provides both high-level programming and high-performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper the applicability of using the Julia language for HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interface with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified
J. Eschle, T. Gal, M. Giordano, P. Gras, B. Hegner, L. Heinrich, U. Hernandez Acosta, S. Kluth, J. Ling, P. Mato, M. Mikhasenko, A. Moreno Briceño, J. Pivarski, K. Samaras-Tsakiris, O. Schulz, G. . A. Stewart, J. Strube, V. Vassilev
2023-06-06T13:38:50Z
http://arxiv.org/abs/2306.03675v2
# Potential of the Julia programming language for high energy physics computing ###### Abstract Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on the code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used; while for physicists, who focus on the application when developing the code, better research productivity pleads for a high-level programming language. A popular approach consists of combining Python, used for the high-level interface, and C++, used for the computing intensive part of the code. A more convenient and efficient approach would be to use a language that provides both high-level programming and high-performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper the applicability of using the Julia language for HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interface with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified. ## 1 Introduction High throughput computing plays a major role in high energy physics (HEP) research. The field requires the development of sophisticated computing codes, which are continuously evolving in the course of the research work. Computing grids, connecting computer centers all around the world, are required to process the experiments' data [1]. Computer algebra systems and high performance computers are used to build new models and to calculate particle production cross sections. Fig. 1 shows the expected needs for the ATLAS and CMS experiments [2, 3] at the Large Hadron Collider (LHC) [4] and its successor, the high-luminosity LHC (HL-LHC) [5], together with the estimated planned resources. A data processing improvement from R&D is required for HL-LHC to fit within the planned resources, which total \(20\cdot 10^{9}\) HS061 units of CPU resource. Footnote 1: [https://www.spec.org/cpu2006/](https://www.spec.org/cpu2006/) The need to reconcile high performance with fast development has led to the development of a C++ interpreter [8] that provides the convenience of a read-eval-print-loop (REPL) interactive experience, also known as programming shell, that supports just-in-time compilation, and allows the use of the same programming language for compiled and interpreted code. The same analysis framework ROOT[9, 10] can then be used with compiled code and interactively. In addition to the REPL, ROOT supports Jupyter notebooks, which are another convenient method for interactive use. The shortcoming of this approach is that the use of a complex programming language is not optimal for easy and fast coding. For this reason, another approach that consists of using two languages, one optimal for fast development, typically Python, and one optimal for high performance, typically C++, is often adopted. Using two languages is not ideal: it expands the required area of expertise; it forces the reimplementation, in the high-performance language, of pieces of code originally written with the fast-development language when they do not meet the required performance; and it reduces the reusability of code. In 2009, J. 
Bezanson, A. Edelman, S. Karpinski, and V. B. Shah imagined a new programming language to address this "two language problem" by providing high performance and ease of programming [11, 12, 13] simultaneously. It has been a successful approach. The new language, Julia, which has evolved year upon year, is now used by many users. Julia is a dynamic language, similar to Python, yet with a performance similar to C/C++. As of October 11, 2022, 8387 packages were registered in the Julia general registry [14], which are accessible to Julia's integrated package manager. Fig 2 shows the rapid growth in the number of packages. As demonstrated by M. Stanitzki and J. Strube [15], the Julia language is a good alternative to the combination of C++ and Python for HEP data analysis and it fulfils its promise to be an easy, high performance language. This report extends that study. It explores the possible benefits of the adoption of Julia as the main programming language for HEP, in place of C++-Python, in a similar way as happened with the switch from Fortran to C++ in the late 90's. Figure 1: Estimated CPU required by the CMS (top) and ATLAS (bottom) experiments for LHC and HL-LHC [6, 7]. ### The programming language community More important than a list of technical features, however, are the culture and interests of a programming language's community, because the language and its implementation will evolve to satisfy those interests. For instance, the Haskell community is focused on language theory, and is unlikely to put much effort into optimization for high performance computing, and the Go community is so focused on language simplicity that they have resisted try-catch logic [16]. The Julia community's interests are well aligned with HEP, and many Julia users are in the sciences. We see this in some design choices inherited from existing technical languages (Fortran, R, MATLAB, Wolfram), like 1-based indexing, column-major arrays, and built-in N-dimensional arrays, and also in the effort placed on interoperability with other languages: ccall, PyCall.jl, RCall.jl, MathLink.jl, and JavaCall.jl. Julia is supported by NumFOCUS (like many Python data science projects), and many of its most prominent applications are in numerical computing: NASA spacecraft modeling [17]; climate science [18]; and the Celeste project [19], which achieved 1.54 Petaflops on the Cori II supercomputer [20], a first for a dynamic language. ### Key features of Julia To locate Julia in the space of programming languages, its primary features are characterized below. * A single implementation, rather than an abstract language specification with multiple slightly incompatible implementations. The Julia computing platform is primarily implemented in Julia (most parts), C, and C++ (LLVM), and it has a built-in REPL. * Every function, including those entered interactively, is compiled just-in-time (JIT) using LLVM as the back end. Julia has no virtual machine, and the JIT is eager (always compiles before execution), unlike the tracing/hot spot JIT seen in LuaJIT [21] or metatracing [22] like PyPy[23]. * Partly thanks to dynamic typing, and also being a JIT language, Julia fully supports: type reflection, source code as a built-in data type, which enables Lisp-like (hygienic) macros2. Footnote 2: [https://en.wikipedia.org/wiki/Hygienic_macro](https://en.wikipedia.org/wiki/Hygienic_macro) * Fast N-dimensional arrays that store elements in-place. 
* Multiple dispatch (MD): a function-call invokes the most specific method that matches the type of _all_ arguments. While many languages support (opt-in) multiple dispatch (C#, Common Lisp), Julia is the first language that uses MD as the central paradigm3 while focusing on performance. MD allows for a surprising amount of code reuse and composition among packages that do not know about each other (often Figure 2: Number of packages registered in the Julia general repository that can be installed by the integrated package manager (top) and of Julia language code GitHub stars (bottom) as function of time. The trend of the star counts is compared with Numba and Jax. a problem with OOP languages), this is further discussed in Sec. 1.3. * Apart from the lack of classes, Julia has a fairly standard mix of imperative and functional programming styles. Immutability is encouraged by default, but mutable structs and arrays are allowed and are frequently used. * Built-in parallel processing support. Any piece of a program can be marked for execution in parallel. Threads are scheduled globally--allowing a multithreaded function to call other multithreaded function--on available resources without oversubscribing, saving the developer from the burden of taking care of the number of threads. Computing distributed on several computers is supported. Julia code can run natively on GPUs. * Objects are not reference counted, but are garbage collected. The garbage collector is standard mark-and-sweep (like Python), generational (like Java), but non-compacting, so pointers to objects are valid as long as the objects remain in scope. The manner in which polymorphism is supported is the most noticeable difference with CBOO programming languages, like C++ and Python, and it merits a dedicated discussion. ### Polymorphism in C++, Python, and Julia Polymorphism is the "ability to provide a single interface to entities of different types" [24, 25]: a polymorphic function will accept arguments of different types. Here we compare Julia with C++ and Python due to their prevalence in HEP. We can distinguish two classes of polymorphism [26, 27]: ad-hoc polymorphism where a different implementation is provided for each set of types, and universal polymorphism, where a single generic implementation is provided for all the sets. Function overloading is an example of ad-hoc polymorphism, while C++ templates are an example of universal polymorphism. Ad-hoc polymorphism can be combined with universal polymorphism using template specialization: several implementations are provided, while each implementation can be generic, either partially or totally. A particular universal polymorphism is based on subtypes: the function scope is extended to all subtypes of its argument. Polymorphism can be static, i.e., resolved at compile time, or dynamic, i.e., resolved at runtime. It can apply to a single entity e.g., one of the arguments of a function, or multiple entities e.g., all the arguments of a function. In a function call, the mechanism that selects the implementation to execute according the passed argument types is called dispatch. In the following of this subsection, we will compare polymorphism provided by Julia, C++ and Python. Code examples illustrating our statements can be found in Appendix A. Polymorphism is provided in C++ by two paradigms: one based on class inheritance, function overloading, and function overriding; the other based on templates. 
The first provides ad-hoc and subtype polymorphisms over functions, while the second provides universal and ad-hoc polymorphisms over both functions and types. By exploiting the concepts feature introduced by C++ 20, subtyping polymorphism support can be added to the templates. The functionalities of the two paradigms overlap. In C++, class non-static member functions take a special argument, the class instance reference (x) or pointer (ptr), through a dedicated syntax, x.f() and ptr->f(). Both static and dynamic polymorphism are supported over this argument, while only static polymorphism is provided for the other arguments. Object copy with implicit type conversion can make difficult to follow the polymorphism flow of an object. Static ad-hoc polymorphism is supported over the arguments of global functions and static member functions. C++ class templates provide static universal polymorphism. One notable usage is the containers of the standard template library. The class inheritance is twofold, it provides inheritance of the interface through the subtype polymorphism previously described and inheritance of the data fields, a type is an aggregation of its fields and all the fields of its supertypes. The bounding of the two inheritances can result in breaking encapsulation [28] and it is often advocated to prefer composition to inheritance for the fields, as dictated by the "second principle of object-oriented design" of Ref. [29]. Python provides single dynamic dispatch for the class instance argument of member functions. It does not provide polymorphism for other arguments. Multiple dispatch emulation can be implemented in the function using conditions on the argument types or using a decorator [30]. Julia provides ad-hoc and universal polymorphism, including subtype polymorphism [31], within a consistent multiple-dispatch paradigm. Extending the dynamic dispatch of C++ to all arguments of a functions makes it extremely powerful, especially in terms of code re-usability. The Julia multiple dispatch exploits JIT compilation and the classification into static and dynamic dispatches is less relevant here: a specialized function is compiled only before its use, although the behavior is always consistent with dynamic dispatch; inlining and other compile-time optimizations can be performed despite the dynamic behavior. Nevetheless, this optimisation is subject to the ability of the compiler to infer the type of the passed arguments and requires some attention from the developer. In particular, when code performance is important, the developer must make sure that the return type of a function can be inferred from the types of the its arguments. Julia ad-hoc and universal polymorphism uses a simple syntax similar to function overloading, but with argument types specified only when required, either to extend a function or to enforce the types of arguments. The implementation of a two-argument function to be called by default, in absence of a more specialized implementation fitting better with the types of the passed arguments, will be defined as function(x, y)...end. Its specialization for a first argument of type A or of a subtype of A will be defined by suffixing the first argument x with ::A. It can be further specialized for a first argument of type A or a subtype of A and second argument of type B or of a subtype B, by annotating both arguments respectively with ::A and ::B, which will read as, function f(x::A, y::B). 
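To make this concrete, the following small sketch (with our own illustrative types, not the example of Appendix A) shows how successive specializations coexist and how the most specific method is selected for the combination of all argument types:

```
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    a::Float64
end

# generic fallback, used when nothing more specific matches
combine(x, y) = "no specialized method for $(typeof(x)) and $(typeof(y))"

# specialization on the first argument only
combine(x::Shape, y) = "a Shape combined with $(typeof(y))"

# specialization on both arguments: the most specific method wins
combine(x::Circle, y::Square) = x.r + y.a

combine(Circle(1.0), Square(2.0))   # 3.0, dispatches on both argument types
combine(Circle(1.0), "text")        # falls back to the (Shape, Any) method
combine(1, 2)                       # falls back to the fully generic method
```

The call site never selects a method by hand; the method table is consulted for the types of all arguments, which is what enables the cross-package composition mentioned earlier.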
Universal polymorphism for type definition is supported by parametric types. In the following example the type Point has two fields of the same type, that must be a subtype of the Number type. struct Point{T <:Number} x ::T y ::T end Contrary to C++, in Julia, subtyping does not involve field inheritance. Data aggregation must be done using composition, enforcing the "second principle of object-oriented design". Subtypes are used to define a type hierarchy for the subtype polymorphism. The hierarchy tree is defined with abstract types that do not contain data, only the leaves of the tree can be a concrete type. Because variable assignment and function argument passing do not trigger an object copy, Julia is not affected by the difficulty encountered in C++ mentioned before. This is demonstrated with the "King of Savannah" example discussed in the Appendix A. ## 2 HEP computing requirements Because the program codes used in HEP research are very large, with high interdependence, a code typically uses many open source libraries developed by other authors; thus the effort to change programming language is consequential. The adoption of a new language can happen only if it brings a substantial advantage over the already used paradigm. The key advantage of Julia that can make the language switch worthwhile is the simplification that will arise from using a single language in place of a combination of two, C++ and Python. HEP computing is wide and includes many use cases: automation for the controls of the experiment, data acquisition, phenomenology and physics event generation, simulation of the physical experiment, reconstruction of physics events4 from recorded data, analysis of the reconstructed events, and more. Footnote 4: In HEP experiments, we observe collisions of subatomic particles or atoms. The result of a collision that produces new particles is called an event. Detectors, that can be complex apparatus as large as \(46\,\mathrm{m}\times 25\,\mathrm{m}\) producing tens of millions of MBytes of data per second, are used to capture the event. We will review in this section the properties required for event analysis, event reconstruction, event simulation and event generation. We will start with general features, common to all the use cases. ### General features #### 2.1.1 An easy language The easy language is one-side of the high-level and high-performance coin property that would motivate the adoption of Julia as a programming language. It is easy at least in two ways: easy, imperative syntax, and free of strong typing when writing code. The surface syntax of Julia largely resembles Python, MATLAB (control flow, literal array), while also getting inspiration, such as the do-block, from Lua and Ruby. It has all the high-levelness one would expect from a language: higher-order functions (functions can be returned and passed as variables), native N-dimensional arrays, nested irregular arrays (arrays of different-size arrays), and a syntax for broadcasting over arrays. As a syntax comparison example, a for-loop will look like the following in Python, Julia, and C++. ``` #Python a=0. foriinrange(1,11): a+=i a/=10. #Julia a=0. foriin1:10 a+=i end a/=10. //C++ autoa=0.; for(autoi=0; i<11.;++i){ a+=i; } a/=10.; ``` We will note in this example that Julia is free of type declaration, just like Python. In this example, we use the C++ auto type declaration feature to achieve the same goal. 
It is worthy of mention that, as in C++, Julia code interpretation is not sensitive to changes in indentation: appending two spaces at the beginning of the last line, will change the behavior of the Python code only. Julia supports list comprehension, like Python, as illustrated in this example that creates a vector with the series \(1,1/2,\ldots,1/10\): ``` #Julia v=[1/xforxin1:10] #Python v=[1.0/xforxinrange(1,11)] ``` NumPy[32] function vectorization is provided nuly in Julia for all functions and operators, including those defined by the user, through the broadcast operator: a dot prefix is used to specify that the function must be applied to each element of a vector or of an array. The syntax is illustrated below. ``` #Julia v=[1,1,1]./[1,2,3] #Python importnumpyasnp v=np.array([1,1,1]) \ / np.array([1,2,3]) ``` The following example illustrates the native support of linear algebra and multi-dimensional arrays and highlight the concise syntax it provides. It solves the simple equation, \[\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}x=\begin{pmatrix}2\\ 0\end{pmatrix}\] #Julia m=[11;1-1]\[[2,0]\] #Python importnumpyasnp m=np.linalg.solve( np.array([[1,1],[1,-1]]), np.array([[2,0]])) ``` Because broadcasting has its own syntax, Julia is able to use mathematical operators "correctly" when they are not broadcast, instead of relying special names e.g., matrix multiplication (np.matmul) and exponentiation (np.ling.expm). Two additional language design choices are worth noting for their contribution to make the language easy to use without sacrificing performance. The first is the evaluation of the function default parameter values, done at each call, instead of once for all in Python. Thanks to this choice, a function f(x, v=[]) that appends the element x to vector v and returns the latter will always return a one-element when called as f(x) in Julia, while it will return a vector growing in size at each new call in Python. The second is the copy performed by the updating operators (+=, *=,..) instead of the in-place operation done by Python. In the Julia code, A = [0, 1]; B = A; B += [1, 1], the operator += will not modify the content of the vector A, it is syntactically strictly equivalent to B = B + [1, 1]), while it will in the Python code A = np.array([0, 1]); B = A; B += np.array([1, 1]). We judge these two Julia behaviors more natural and more likely to match to what a non-expert would expect when writing or reading the code. As we illustrated with few examples, the Julia language is as easy as Python and sometimes easier thanks to a native support of features provided by external packages in Python that allows for a more concise and natural syntax. #### 2.1.2 Performance The other major advantage to Julia is high performance. Julia provides performance similar to C++, and in some cases even surpassing C++, as can be seen in Fig 3. The shown comparison is obtained by repeating the microbenchmark5 for Julia version 1.9.0rc1 and Python 3.9.2. C and Julia implementations use Open BLAS for the matrix operations, while Python uses NumPy (version 1.24.2) and Open BLAS. This benchmark compares the run time of some short algorithms implemented in a similar way in the different languages. The results are divided by the time the C/C++ implementation takes. The GNU compiler gcc version 10.2.1 has been used. 
The test is performed on a laptop equipped with a 11th Gen Intel(R) Core(TM) [email protected] CPU and 16 GB of random access memory (RAM) running the Linux operating system compile for x86 64-bit architecture. The 64-bit flavor Open BLAS library version 0.3.21 is used. This setup is used for all performance tests described in this paper, if not specified otherwise. The score goes from 0.73 to 1.67 (smaller is better) for Julia and 1.12 to 107 for Python. C is doing the best with respect to the two other languages for the recursive Fibonacci algorithm, implemented in Julia as below. Footnote 5: [https://julialang.org/benchmarks/](https://julialang.org/benchmarks/) fib(n) = n < 2? n : fib(n-1) + fib(n-2) This benchmark tests the performance for recursive calls. While expert developers typically avoid it for performance reasons, a recursive expression is the easiest and most natural way to implement a recursive algorithm. The mathematical series \(u_{n+1}=f(u_{n})\) maps directly to a recursive computing function call. We compute the 20th Fibonacci series elements, which results in 21,891 nested calls, a good example of recursive calls. The C/C++ implementation is doing better because of a tail recursion optimization performed by the compiler, that removes one of the two nested calls, disabling this optimization leads to performance a little worse than with Julia. The gain from this optimization is far from the one obtained by using a for-loop implementation instead of the recursion. Such implementation runs \(\approx\)1000 times faster. The tail recursion optimization does not work for the recursive quicksort, leading to similar performance from C/C++ and Julia (7% difference in favor of C/C++). We can use LHC open data to test performance on HEP-oriented code. We make this test with a dimuon analysis on CMS data of LHC Run-1, from 2011 and 2012. The analysis consists of measuring the spectrum of the mass of the system made of a Figure 3: Comparison of C/C++, Python and Julia language performance for a set of short algorithms. Open BLAS, together with NumPy in the Python case are used for matrix operation. The score is defined as the time to run the algorithm divided by the time to run the C version of the same algorithm. muon and an antimuon, produced in proton-proton collisions at the center-of-mass energy \(\sqrt{s}=7\,\)TeV. It uses data in which the muons and antimuons are already reconstructed and identified. It does not correct for instrumental efficiencies, contrary to the published physics results. Different implementations have been compared: the for-loop based Julia implementation from Ref. [33], the equivalent for-loop based implementations in Python and C++, the ROOT data frame (RDataFrame) implementation from Ref. [34], its equivalent in C++ in two flavors, and a data frame based implementation done in Julia using the DataFrames.jl package [35]. In the data frame implementation, the table rows are first filtered to obtain a data frame with the di-muon events of interest, then a column with the dimuon mass is added to the data frame, and finally a histogram is filled. RDataFrames use lazy operations, and only the histogram is materialized, limiting the memory footprint. In the first flavor of the C++ implementation, the formula to compute the mass is provided as a character string and the code for this computation is compiled JIT. In the second flavor a user-defined C++ function is provided to compute this mass. 
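To give a flavour of the for-loop style being benchmarked here, a simplified sketch of such an analysis in Julia is shown below, using the UnROOT.jl reader introduced just below; it is not the exact code of Ref. [33], and the file and branch names (NanoAOD-style) are illustrative:

```
using UnROOT

f = ROOTFile("Run2012BC_DoubleMuParked_Muons.root")   # illustrative file name
events = LazyTree(f, "Events",
                  ["nMuon", "Muon_pt", "Muon_eta", "Muon_phi", "Muon_charge"])

masses = Float64[]
for evt in events
    evt.nMuon == 2 || continue                            # keep exactly two muons
    evt.Muon_charge[1] != evt.Muon_charge[2] || continue  # opposite charges
    pt1, pt2 = evt.Muon_pt
    deta = evt.Muon_eta[1] - evt.Muon_eta[2]
    dphi = evt.Muon_phi[1] - evt.Muon_phi[2]
    # dimuon invariant mass in the massless-muon approximation
    push!(masses, sqrt(2 * pt1 * pt2 * (cosh(deta) - cos(dphi))))
end
```

The histogramming step is omitted; the point is that the event loop is written out explicitly, as one would in C++, yet is compiled just-in-time to native code.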
The input data are read from a file stored in the ROOT format with compression turned off. The UnROOT.jl package [36] (version 0.9.2) is used to read the file with the Julia code. This package is written in pure Julia. The native ROOT library (version 6.26/10) serves to read the files from C++ and Python. The GNU gcc compiler (version Debian-10.2.1-6) is used with a level-three optimization (option -O3). When JIT compilation is involved (the cases of Julia and JIT RDataFrame) the event analysis function is first run on a ten-event data file to trigger compilations before performing the timing on 1 billion events. For the Julia implementations, subsequent compilations occur during the timing loop; they represent only 1.1% of the time. In the case of JIT RDataframe an overhead (time independent of number of processed events) of \(5.0\pm 0.2\,\)ms (C++ version) or \(11.2\pm 2\,\)ms (Python version) is present in spite of the warm-up. The overheard is subtracted from the measurement. The obtained numbers are provided in Table 1. We observe that slight changes of source code can change the runtime of the C++ for-loop and native RDataFrame implementations beyond the statistical uncertainties. This effect is estimated by varying the code outside if the timed loop (addition of a print-out statement, change of code statement order) and included in the quoted uncertainties. For the other implementations, no significant change is observed and the quoted uncertainty include the statistical component only (at 68% confidence level). In this example, the for-loop Julia implementation runs the fastest, the C++ for-loop implementation is slightly behind (11% slower). The Julia implementation using data frames takes 21% less time to run than with C++ RDataFrame. The Python for-loop implementation is 1000 times slower than with Julia. Delegating the loop to an underlying compiled library (in our case the ROOT library) is not sufficient to achieve good performance with Python: the RDataFrame python implementation is 2.2 (resp. 2.8) times slower than the Julia data frame (resp. for-loop) implementation. The C++ RDataFrame implementations are slower than the Julia and C++ for-loop implementations by a factor from 1.4 to 2.1 depending on the implementations we compare. The dimuon spectrum obtained with the Julia code is shown in Fig. 4. The data frame benchmark includes the insertion of a column in the data frame with the dimuon mass. In the Julia case, the insertion is not needed for the analysis itself, but keeping it is interesting for benchmark purpose. The data frame returned by UnROOT does not allow direct insertion and the selected rows are copied to a DataFrames.jl data frame supporting such an insertion. That leaves room for improvements; we estimate that improved tools that would allow such insertion with no copy would reduce the runtime by 16%. For Python, the pure python library Uproot[37] can be used instead of the native ROOT library to read the data. This library loads all the data of a file into the memory, similar to the Julia data frame implementation. The data can be provided as a set of Awkward Arrays [38], NumPy arrays, or as a Pandas data frame [39]. All these data structures support vectorized operations permitting a delegation of the event loop to underlying compiled libraries improving the running time. The results are shown in Table 2. The measurement is done with Uproot version 4.3.4 (with awkward package version 1.10.3). 
The implementation using Awkward Arrays operating on a vector of all events runs faster than the Python RDataFrame implementation and is only 1.6 times slower than with the Julia for-loop. We note that the Python's performance is highly dependent on the algorithm implementation: the time ratio with respect to the Julia for-loop goes up to 63 for a vectorized implementation using Pandas data frames and to 1200 with the event loop. Running on a 61.5 million event file shows that the for-loop and RDataFrame implementations scale well with larger input files with no penalty on the event throughput as we could have expected. The other implementations would require modifications in the code in order to process events in chunks and reduce the memory usage. The awkward array implementation requires 14.5 GiB, at the limit of 15 GiB available on the machine used for the measurement, while the panda and julia data frame versions exceed this limit. We see in this example that Julia is performing similar or better than C++ frameworks. For an event loop, Python is slower by three-orders-of-magnitude than Julia. Vectorization of event processing serves as a mitigation of Python's slowness by delegating the event loop to underlying compiled libraries and sacrifice flexibility, without achieving the performance of C++ and Julia6. Footnote 6: Recently, it became possible to use Numba + Awkward Array to enable fast loops, sacrificing some Python features due to the more strict compiling model #### 2.1.3 Interoperability with legacy code HEP computing is based on a heritage of program code written over decades. Interfacing to libraries developed in C++ and Fortran is unavoidable, apart from the last-step of analysis domain (and even here it would still be an attractive feature). Julia can natively call C and Fortran functions with no overhead compared to calling them from their native language. Examples of such calls are given in Listings 1 and 2. For convenience, a wrapper function written in Julia can be used to handle errors, as in the example in Listing 3. Bindings to Python are supported thanks to the PyCall package. The interface is very convenient and transparent in both directions, Python from Julia and Julia from Python, as we can see in the examples provided in Listings 4 and 5. In a Jupyter notebook, in addition to calling a Julia function from a notebook running a Python kernel using these interfaces and vice-versa, it is possible to write Julia code in cells of a notebook using a Python Kernel, and mix cells written in Julia and in Python languages, as illustrated in Fig. 5. The Cxwrap package [41] can be used to add Julia bindings to C++ libraries. Once bound, the library is accessed transparently from Julia as if it was a native Julia package. The object.method(args...) and object_ptr->method(args...) C++ like method calls translate \begin{table} \begin{tabular}{l c} \hline \hline Implementation & Time to process \\ & \(10^{6}\) events \\ \hline Vectorized with Awkward Arrays & \(0.2343\,\pm 0.0027\,\mathrm{s}\) \\ Vectorized with Panda dataframes & \(9.225\,\pm 0.081\,\mathrm{s}\) \\ For loop & \(177.2\,\pm 1.8\,\mathrm{s}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Runtime of the dimuon spectrum analysis for three Python implementations using the Uproot library to read the data. Figure 4: Dimuon spectrum obtained from the CMS open data of Run 2012 with the Julia implementation of the analysis. 
functioncompute_dot(DX::Vector{Float64}, DY::Vector{Float64}) @assertlength(DX) == length(DY) n = length(DX) incx = incy = 1 product = ccall((:dot_, "libLAPACK"), Float64, (Ref{Int32},Ptr{Float64}, Ref{Int32},Ptr{Float64}, Ref{Int32}), n, DX, incx, DY, incy) return product end ``` **Listing 1**: Example of call from Julia of a function implemented in Fortran [40] ``` path=ccall(:getenv,Cstring,(Cstring,), "SHELL") println(unsafe_string(path)) ``` **Listing 2**: Example of call of a function of a C-library from Julia [40] ``` functiongetenv(var::AbstractString) val=ccall(:getenv,Cstring, (Cstring,),var) ifval==C_NULL error("getenv: undefinedvariable:", var) end returnunsafe_string(val) end ``` **Listing 3**: Example of a wrapper in Julia to handle errors from a c-library function [40] ``` #EnablePythoncall: usingPyCall #Importapythonmodule: math=pimport("math") #Useitasajuliamodule: math.sin(math.pi/4) ``` **Listing 4**: Python function can be called transparently from Julia. Example of call of a function from the Python math package. [40] method(object, args...). The package philosophy is similar to Boost.Python[42] and Pybind11[43]: the bindings are produced with few lines of C++ code, one line per class and one line per method, which must be compiled as a shared library. The package provides all the flexibility to expose a different Julia interface to the C++ one, for instance to adapt it to the Julia context and style of programming. CxxWrap internally uses the built-in Julia-C interface, used for the interface between shared libraries and Julia. The C++ standard template library vectors and val_arrays are mapped to Julia Vectors with zero copy. The WrapIt project [44] has demonstrated that binding code can be generated automatically from a library's header files, which would make the process of adding Julia bindings to C++ libraries very Figure 5: Example of a Jupyter notebook mixing cells with Julia and Python code. easy. Automation of this Julia binding has been tested on the ROOT libraries, and we have been able to produce, draw, and fit histograms and graphs (TGraph class). The fit has been tested with both functions defined in ROOT and functions defined in Julia, demonstrating a perfect integration. Unlike direct calls into C or Fortran libraries via the ccall function, calls between C++ and Julia have to go through the intermediate layer created by the wrapper code. We perform several measurements to estimate the overhead from the C++-Julia interface. The measurement is performed on a call to the ROOT TH1D::Fill method, that adds a value to a histogram: we time a loop of 1 million call and average the result to get time per call. First, we create a shared library, that exports C functions, we call from Julia with the ccall method. The pointer to the histogram object is passed to the C function as a void* type. When compared with a direct call to the Fill method within the same C++ code, it shows an overhead of 0.23 ns from the call from Julia unexpectedly smaller than when calling the wrapper from C++ (0.74 ns). In the end, the call from Julia takes only 4% more time than a direct call from C++. It is 38 times faster than a call from Python. Measurement is also done for a binding based on CxxWrap. All results are shown in Table 3. We could imagine the Julia engine performing just-in-time compilation of C++ using the LLVM infrastructure it uses for the Julia code. The Cxx package [45] is providing this feature for Julia releases from 1.1.x to 1.3.x. 
With this package we can access to a C++ library without the need of a C++ wrapper. Nevertheless, a Julia wrapper is needed to provide the same transparency--calls to the C++ functions similar as calls to a Julia functions. Using this package the call to the Fill function in our example is found to be as fast as when using the C interface, as shown in Table 3. We used Julia version 1.3.1 to perform this measurement. The Cxx approach is a good alternative to CxxWrap. In Ref. [15], CxxWrap was used to interface to the LCIO C++ library [46] to read ILC [47] simulated events and to Fastjet[48, 49] to cluster hadronic jets. The loss of event throughput compared to a code uniformly written in C++ was 37%. #### 2.1.4 Support of standard HEP standard formats Different file formats are used to store HEP data and supporting them is crucial to a streamlined physics analysis experience. The file formats currently used to store physics events are mainly HepMC, LHE [50], LCIO, and ROOT. The packages LHE.jl[51] and LCIO.jl[52] provide supports for LHE and LCIO. Two packages are available to read ROOT files: UpROOT.jl[53] and UnROOT.jl[36]. The UpROOT.jl package uses the Uproot purePython library to provide read and write support. When using this package, a loop on events typically suffers of the same performance penalty as with Python. This has motivated the development of UnROOT.jl, a package written in pure Julia that provides a fast processing of events, as demonstrated in the performance measurements done in the previous section, which used this package. It leaves the flexibility to use an explicit event loop, with a small memory footprint, or works on vector of event quantities ("columnar analysis"). An event loop will look like the following code snippet, where Muon_pt is a vector (transverse momenta of the muons contained in the event). for event in mytree # Access to a single-event quantity event.Muon_pt \begin{table} \begin{tabular}{l c} \hline \hline & Mean time \\ & [ns] \\ \hline C++ & 5.74 \(\pm\)0.01 \\ C API from C & 6.48 \(\pm\)0.04 \\ C API from Julia & 5.97 \(\pm\)0.03 \\ Julia - CxxWrap & 8.21 \(\pm\)0.04 \\ Julia - Cxx & 5.97 \(\pm\)(\(<\) 0.01) \\ Python & 226 \(\pm\)5 \\ \hline \hline \end{tabular} \end{table} Table 3: Mean time to call the Fill method of a ROOT histogram from C++, Julia and Python. The time corresponding to a single call is averaged on \(10^{6}\) h–>Fill(1.) calls. Three cases are considered for Julia: use of the plain Julia C interface (“C API”), use of CxxWrap, and use of Cxx. For reference the time to call the same function from C/C++, within the same code (“C++”) and through the shared library developed the Julia C interface, is also measured. A columnar analysis will look like the following. # Access to a vector of event quantities themselves vectors of numbers) mytree.Muon_pt UnROOT.jl uses thread-local storage to maximize performance and maintain thread-safety. An event loop can be parallelized in several threads with little effort using the standard Julia @threads macro: julia> @threads for event in mytree #... Operate on event end The performance measurement presented in Sec. 2.1.2 are done in single-thread mode. There are limitations. First, this package does not support data write. Both UpRoot.jl and UnROOT.jl can access only to objects of a limited set of types, either stored as such in the file or in a TTree. The supported types covers already a large set of use cases, but not schemes where data is stored as object of serialized C++ classes. 
Using the genuine ROOT library via a Julia binding based on Cxwrap can be an alternative approach when required. We have successfully read and write histograms (TH1 objects) and graphs (TGraph objects) using this approach. We expect the implementation of the support of RNTuple to be easier than TTree that it is expected to replace, thanks to its design. Data are stored in column of fundamental types (float, int,...) [54], similar to Apache Arrow[55], which should ease support from programming languages other than C++ like Julia. In the neutrino physics community, the industry-standard HDF5 and Parquet have been used at scale, and these files can be readily read and write from Julia via their respective packages. #### Parallel computing Apart from having memory shared multi-threading, Julia also ships with out-of-core distributed computing capability as a standard library (Distributed). In fact, it is as easy to command an array of heterogeneous nodes in real-time via packages such as ClusterManagers.jl[56], which can mimic Dask's experience [57] with a fraction of the code. For more advanced features, such as building out-of-core computation graphs, Dagger.jl[58] provides facilities. While these libraries allow distribution of execution within the Julia code, parallelization can be also done, as with C++ and Python, by running parallel jobs of the same executable using commands of a batch processing system, like HTCondor [59, 60], typically used in computer cluster facilities. #### Platform supports and license Julia is supported on all major platforms, a list of which can be found on the Julia website7. Three different support tiers are provided. The platforms with full-fledged support (classified as tier 1) are, as of Oct, 2022: Footnote 7: [https://julialang.org/downloads/#supported_platforms](https://julialang.org/downloads/#supported_platforms) * macOS x86-64 * Windows x86-64 and CUDA * Linux x86-64 and CUDA * Linux i686 Worth noting that many platforms well on their way into tier 1, such as macOS with ARMv8 (M-series chips). Julia is distributed under the MIT license along with vast majority of the ecosystem, which guarantees free use, modification, and re-distribution for any use case. #### Reproducibility Julia includes a package manager and a general registry used by the whole community in an organized manner. In particular, each package contains a Project.toml file, that records the dependency and compatibility with other packages in an uniform way. Furthermore, any binary dependencies are also captured by the package system: they are distributed as "Artifact"--packages with names ending _jll--but still behave as normal packages when it comes to dependency and compatibility resolution. This eliminates a few problems, including running out of pip space just because you depend on a large library (e.g., CUDA). More details are giving in Sec. 3.1. On the end-user side, one can easily capture the an environment by working with the Manifest.toml file. While Project.toml records compatibility and dependencies, Julia would try to use the latest compatible packages when instantiating the environment. Manifest.toml, on the other hand, captures the exact versions of every package used (recursively) such that exact reproducibility can be guaranteed. #### Numerical optimization The statistical inference procedures relevant to HEP use numerical optimization heavily, from Maximal Likelihood Estimation (MLE) to scans over Parameters of Interest (POI) and obtaining the test statistics. 
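As a small illustration of this workflow in pure Julia, the sketch below performs an unbinned maximum-likelihood fit of a Gaussian mean and width using Distributions.jl and Optim.jl; it is intentionally minimal and not representative of how a full HEP likelihood would be organized:

```
using Distributions, Optim

data = rand(Normal(91.2, 2.5), 10_000)          # toy measurements

# negative log-likelihood; the width is kept positive via exp
nll(p) = -sum(logpdf.(Normal(p[1], exp(p[2])), data))

res  = optimize(nll, [80.0, 1.0], BFGS(); autodiff = :forward)
best = Optim.minimizer(res)
mu_fit, sigma_fit = best[1], exp(best[2])
```

Here the gradient used by BFGS is obtained from forward-mode automatic differentiation rather than from finite differences, anticipating the point made just below.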
Traditionally this is done by minuit2[61, 62] in ROOT, which uses the finite difference method to provide gradient information for some of its optimization. Julia has a solid ecosystem in numerical optimization (NLopt.jl[63], Optim.jl[64], and meta algorithm package such as Optimization.jl[65] that brings local and global optimization together). Julia's performance has lead to most libraries being written in pure Julia, which means that optimization tasks can often use better algorithms such as Broyden-Fletcher-Goldfarb-Shanno (BFGS) [66, 67, 68, 69] that rely on gradient provided by automatic differentiation. Support for automatic differentiation is further described in Sec. 3.3. Construction of a complex probability distribution function is a common problem in HEP. Description of continuous spectra often requires a multicomponent probability density function (PDF) e.g., a sum of a signal component and a background component. In addition, the convolution with the model PDF with the experimental resolution is an essential for the HEP applications. The RooFit framework is the standard tool for building complex high-dimensional parametric functions out of lower dimensional building blocks. As great convenience, the framework provides a homogeneous treatment of the PDF variables and parameters that can be fixed, restricted to a range, or constrained by a penalty to the likelihood functions. The framework is written in C++ and available in Python. A pure-python package zfit[70, 71] give an alternative solution to Python users that can better integrate with the scientific Python ecosystem. Julia ecosystem offers a large variety of standard density functions in the Distributions.jl package [72, 73]. The package largely exploits the properties of the standard density functions, such as moments and quantiles, which are computed using analytic expressions for the unbound PDFs. Moreover, flexible construction functionality is greatly missing. The mixture models of the Distributions.jl are the holder for the multi-component PDF, however, they cannot be used for fitting of the component fractions, the prior probabilities. Extension of the convolution functionality beyond a small set of low-level functions is required. The management of the distribution parameters is a key missing functionality in Julia modelling ecosystem. ### Specific needs for analysis of reconstructed events #### Tools to produce histograms and publication-quality plot The statistics community in Julia has support for N-dimensional histograms with arbitrary binning in StatsBase.jl[74], an extension to this basic histogram is implemented in FHist.jl[75], which added support for bin error and under/overflow and for filling the histograms in an event loop, as typically done in HEP analyses. Many libraries of high quality are available for plotting from Julia. In the interests of standardization, the Plots.jl[76] package provides a front-end interface to many plotting packages, allowing easy switching from one to another. It supports the concept of recipes used by packages processing data to specify how to visualize them, without depending on the Plots package: the dependency is limited to the RecipeBase.jl[77] package which has less than 350 lines of code. The package supports, currently, 7 backends. It supports themes, which are sets of default attributes and provide a similar feature to the ROOT TStyle class. 
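As a minimal end-to-end sketch of these building blocks (a toy spectrum; a real analysis would rather fill an FHist.jl histogram inside the event loop):

```
using StatsBase, Plots

masses = 60 .+ 60 .* rand(10_000)               # toy dimuon masses in GeV
h = fit(Histogram, masses, 60:0.5:120)          # fixed-width binning

plot(h; xlabel = "m(μμ) [GeV]", ylabel = "events / 0.5 GeV", label = "toy data")
savefig("dimuon_mass.pdf")
```

The plot call goes through the Plots front end, so the back end can be swapped without touching the analysis code.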
The back end selected by default in Plots is GR[78], a rich visualization package providing both 2D and 3D plotting and supporting LaTeX for text. The GR package, or its GRUutils.jl[79] extension, can be used directly when a shorter warm-up time is needed before obtaining the first plot of a running session (see Sec. 5.2 for a discussion on the warm-up time). We should also mention the Makie.jl ecosystem[80], a rich plotting package targeting publication-quality plots, which is increasingly popular. This package supports the recipe and theme features, but is not itself supported by Plots.jl. For instance, the FHist.jl HEP-oriented histograming package mentioned before provides a recipe to plot the histograms. Make.jl suffers from a longer time to obtain the first plot, even larger than with the Plots.jl package with its default backend OpenGL. Use of LaTeX to generate high-quality plots has been popularized in HEP community with the plotting system of the Rivet Monte-Carlo event generator validation toolkit [81]. The PFGPlots [82] and PFGPlotsX [83] packages offer LaTeX-based plotting. They are both supported by the Plots.jl package. The Gaston.jl[84] package provides plotting using the popular Gnuplot.jl utility [85]. People used to the Python matplotlib.pyplot set of functions [86] can use the PyPlot.jl package that provides a Julia API to this package. Those who prefer plotly to matplotlib, can use the PlotlyJS.jl, a Julia interface to plotly. The high-level grammar of interactive graphics Vega-Lite[87] is also supported, thanks to the VegaLite.jl [88] package that supports exports to bitmap and vector image files, including the PDF format, which is convenient to include in papers written with LaTeX. Plotting can also be done on a text terminal, using the UnicodePlots.jl[89] package, supported by the Plot front end. The visualization tool ecosystem for Julia is rich, with the added benefit of staying in the same environment as the analysis and enabling an interactive workflow. #### Notebook support A computational notebook is an interface for literate programming that allows embedding calculations within text. Notebooks have been made popular by Mathematica [90], which has supported notebooks starting from its first version, 1.0, released in 1988. In HEP, notebooks are widely used by theoreticians for symbolic calculation e.g., with Mathematica, and by experimentalists, for data analysis, and plotting using Python or C++ as programming language. The notebook system used with Python, Jupyter, fully supports Julia. The "ju" of **J**upyter stands for Julia, while "py" stands for Python and "er" for the R language. The ROOT analysis framework brings C++ support to Jupyter. The notebook support for Julia is richer than for Python and C++. In addition to Jupyter, Pluto.jl[91] provides a new-generation notebook system for Julia. This system keeps track of the dependency of all calculations spread in the document and updates automatically any dependent results when a one of them is edited. Beyond being convenient, this automatic update provides _reproducibility_. Pluto.jl is also a very easy solution for interactive notebooks, where buttons, drop-down menus and slides can be included. This is useful for students. It can also be used to build a tool for experiments running shifters to analyze the data quality in quasi-realtime. With Pluto.jl, notebooks are normal executable Julia files. Notebook functionality is offered through special comments. This helps with version control. 
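The recipe mechanism mentioned above is lightweight in practice: a package describing its own data types only depends on RecipesBase.jl and never imports a plotting back end. A hedged sketch, with an invented PullValues type:

```
using RecipesBase

struct PullValues                  # hypothetical analysis type: fit pulls
    values::Vector{Float64}
end

@recipe function f(p::PullValues)
    seriestype --> :histogram      # defaults that the user may still override
    bins       --> 40
    xguide     --> "pull"
    legend     --> false
    p.values                       # the data handed to the plotting pipeline
end

# A user who has Plots loaded can then simply write:
#   using Plots
#   plot(PullValues(randn(5_000)))
```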
### Specific needs for physics event reconstruction, simulation and data acquisition trigger software Physics event reconstruction, simulation and trigger software are typically large codes developed by the experiment and project collaborations. The software stack of the LHC experiments is particularly large and complicated, due to the complexity of their detectors. The software is developed collaboratively by many developers, with different levels of software skills. Tools for both collaborative development and quality assurance are essential for all experiment software. Software distribution and release management are also important. The complexity of the C++ language, used in most of these frameworks, can limit the integration of contributions developed by students. This is more and more true given the growing use of high-level language (e.g., Python) as the teaching language for computing in universities, especially among natural science departments. The Julia language and its ecosystem have been built using an open-source and community approach. Tools have been put in place and are widely adopted for efficient collaborative development. Julia comes with a standard and convenient package management system providing reproducibility, see Sec. 3.1. Julia has built-in unit testing, coverage measurement, and officially maintained continuous integration recipes and documentation generator. These are used in almost all of the Julia packages registered publicly, thanks to the streamlined experience and low barrier to entry. The simulation software of the experiments depends on external libraries to simulate the underlying physics, such as Monte Carlo event generators, and on some others, like Geant4 [92], to simulate the transport of the produced particles and their interaction with the detector. Interoperability with libraries written in C, C++, or Fortran, as discussed in Sec. 2.1.3, it is essential not to have to re-write all the external libraries in Julia. Simulation and reconstruction is compute intensive and therefore good performance is essential: performance has a direct impact on the computing infrastructure cost. We have seen in the previous section that Julia meets the C/C++ performance and sometimes surpasses it. Code parallelization and efficient use of single instruction multiple data (SIMD) vectorization features of CPUs is essential at the LHC and for HL-LHC to efficiently use current hardware resources, with a high density of computing cores, including accelerators (e.g., GPU) that can count tens of thousands core [6]. The Julia language provides a very good support for multi-threading: a loop can be parallelized by a single macro (@threads), an operation can be made atomic by prefixing it with @atomic, a more general lock mechanism is provided, asynchronous tasks, with distribution of tasks to different threads, is natively supported. Julia supports distributed computing, using its own communication mechanism but also using MPI [93, 94]. It is possible to use Julia's compiler to vectorize loops by using the @simd macro or the more advanced @turbo from the LoopVectorization.jl package [95]. Due to its effective metaprogramming capabilities, Julia has great support for running code on heterogeneous architectures, Julia code can be compiled for Nvidia (CUDA), AMD (ROC) and Intel (oneAPI) GPUs via compiler written in Julia8, without dependency on, for example, C++ CUDA or HIP library. 
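Before turning to GPUs, the shared-memory constructs mentioned above are worth a short illustration; the calibration of an array of energy deposits below is invented for the purpose of the sketch:

```
using Base.Threads

deposits   = rand(Float32, 10^7)        # e.g. calorimeter cell energies
calibrated = similar(deposits)

# thread-parallel loop: a single macro, no explicit thread management
@threads for i in eachindex(deposits)
    @inbounds calibrated[i] = 1.02f0 * deposits[i] + 0.01f0
end

# SIMD-vectorized reduction on a single core
function simd_sum(x)
    s = 0.0
    @simd for i in eachindex(x)
        @inbounds s += x[i]
    end
    return s
end

simd_sum(calibrated)
```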
Packages like GPUArrays.jl and KernelAbstractions.jl allow exactly the same core algorithm written in Julia to be executed across different vendor platforms with minimal boilerplate code, which is currently a unique feature among languages. Footnote 8: [https://github.com/JuliaGPU/GPUCompiler.jl](https://github.com/JuliaGPU/GPUCompiler.jl) On the more user-facing front, libraries such as Tullio.jl[96] combine metaprogramming and kernel programming to allow users to express tensor operations in Einstein notation regardless of whether the array lives in RAM or GPU VRAM. This is very relevant for data preservation and for unifying efforts to write algorithms once and run them everywhere. The ability to run native Julia code on both CPUs and GPUs, combined with the support for automatic differentiation in Julia, makes Julia an excellent platform for machine learning (ML) research. This is especially true for advanced scientific machine learning that goes beyond combining conventional matrix-crunching ML primitives/layers and uses physical/semantic models or mixes them with generic ML constructs.

### Specific needs for event generation and for phenomenology

#### Symbolic Calculations in Julia

Julia is a fast, solid and reliable programming language with a well-developed Computer Algebra System (CAS), Symbolics.jl[97], a language for symbolic calculations, Symata.jl[98], and an interface to Mathematica, MathLink.jl[99], all of which could be widely used in HEP given the advantages Julia offers. Symbolics.jl[100] is a CAS written in pure Julia, developed by the SciML community [101], which also maintains the state-of-the-art differential equations ecosystem [102]. The package has scalable performance and integrates with the rest of the Julia ecosystem thanks to its non-OOP design and multiple dispatch [97]. Some of the main features of Symbolics.jl include pattern matching, simplification, substitution, logical and boolean expressions, symbolic equation solving, support for non-standard algebras with non-commutative symbols, automatic conversion of Julia code to symbolic code, and the generation of high-performance and parallel functions from symbolic expressions [103], which makes it even more interesting for possible applications in HEP. At the heart of Symbolics.jl, we find ModelingToolkit.jl, a symbolic equation-based modeling system [104], and SymbolicUtils.jl, a rule-based rewrite system [105]. Symata.jl[98] is a language for symbolic computations in which some features, such as evaluation, pattern matching and flow control, are written in Julia, and symbolic calculations are provided by wrapping SymPy, a Python library for symbolic mathematics. MathLink.jl[99] is a Julia language interface for the Wolfram Symbolic Transfer Protocol (WSTP); this requires the installation of Mathematica or the free Wolfram Engine to run properly. The interface is a W"..." string macro used to define Mathematica symbols. MathLinkExtras.jl[106] adds extra functionalities such as W2Mstr, which allows the conversion of Julia MathLink expressions into Mathematica expressions, and W2Tex, which converts Julia MathLink.jl expressions into LaTeX format. Finally, an expression can be evaluated in Mathematica using weval.

#### Event Generators

To be prepared for future needs for event generation [107], it is conceivable to rewrite parts of the existing event generators in Julia, making use of modern parallelisation technologies.
One of the most demanding tasks in event generation is the evaluation of matrix elements and cross sections, where Julia provides several useful tools. The package Dagger.jl is a framework for out-of-core and parallel computing written in pure Julia. It is similar to the Python library Dask and provides a scheduler for the parallelized execution of computing tasks represented as directed acyclic graphs (DAGs). Such DAGs could be used to represent the evaluation of matrix elements in terms of elementary building blocks, similar to HELAS-like functions in Madgraph4GPU (see e.g., [108]). Furthermore, Dagger.jl supports the selection of different processors, making it possible to use it for distributed computing on GPUs as well (see e.g., DaggerGPU.jl[109]).

### Feature summary

Table 4 summarizes the programming language and ecosystem features we have identified as required for HEP. It is striking how the Julia language manages to fulfill almost all of these requirements. We should note that the interoperability is less transparent for C++ than with the other languages, as it requires writing a code wrapper. Many HEP file formats are already supported, including ROOT files, although without covering the full versatility offered by this format.

\begin{table} \begin{tabular}{l|c} \hline \hline Requirement & \begin{tabular}{c} Fulfilled \\ by Julia \\ \end{tabular} \\ \hline Easy to learn for HEP physicists & ✓ \\ Performance & ✓ \\ Interoperability with legacy code & ✓ \\ Support for HEP standard formats & partial \\ Support for architectures and open license & ✓ \\ Cross-platform reproducibility & ✓ \\ Tools to perform optimization/minimization & ✓ \\ Histogramming & ✓ \\ Plotting with publication quality & ✓ \\ Notebook support & ✓ \\ Tooling for large projects (unit tests, continuous integration, software distribution) & ✓ \\ SIMD and multi-threading & ✓ \\ Distributed parallel computing & ✓ \\ Native GPU support & ✓ \\ Machine learning libraries & ✓ \\ Computer Algebra System & ✓ \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of features needed for HEP applications and their availability in the Julia ecosystem.

## 3 The bonuses

In addition to solving the two-language problem and providing the mandatory features listed in the previous section, the Julia ecosystem provides other advantages over the C++ and Python languages currently used.

### Packaging

Julia comes with a built-in package manager, Pkg.jl. It builds on the experience of package managers in other ecosystems, and it can be used to install packages and manage "environments", similar to the concept of virtual environments in Python. A Julia environment is defined by two files (a minimal usage sketch follows the list):

* Project.toml: this file records the version and UUID of the current project; it also contains the list of direct dependencies of this project, as well as the compatibility bounds with these packages and Julia itself. Moreover, all Julia packages follow semantic versioning (semVer [110]): version numbers are composed of three parts, major, minor and patch numbers, and breaking changes can only be introduced in versions which change the left-most non-zero component (e.g., going from 0.0.2 to 0.1.0, or from 2.7.3 to 3.0.0 are considered breaking changes).
* Manifest.toml: this file is automatically generated by the package manager when instantiating an environment, if not already present, and it captures all packages in the environment with their versions, including all indirect dependencies.
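A minimal sketch of how these environments are typically driven from the Pkg API (the directory and package names are invented for the example):

```julia
# Create and use a project environment with the built-in package manager.
using Pkg

Pkg.activate("MyAnalysis")   # creates/uses MyAnalysis/Project.toml
Pkg.add("Plots")             # records the dependency in Project.toml
Pkg.status()                 # prints the content of the environment

# On another machine, the same environment is reproduced from the two files:
Pkg.activate("MyAnalysis")
Pkg.instantiate()            # installs the exact versions from Manifest.toml

Pkg.test()                   # runs the unit tests of the active project
```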
When used together with Project.toml, the Manifest.toml file describes an exact environment that can be recreated on any other platform, which is particularly important for the reproducibility of applications (e.g., analysis pipelines). Julia packages are organized in directories (usually also Git repositories) in which there is a Project.toml file to define their environment. Packages can be installed either via a path (a local path on a machine, or a URL to a remote Git repository), or by name if they are present in a package registry. Pkg.jl is able to deal with multiple registries at the same time, which can be both public and private, in case there is a need to provide packages relevant only to a single group or collaboration. Because there is only one package manager and only one way to record package dependencies, the Julia package registry simply records the dependency and compatibility metadata separately from the package's source code. This allows a local resolver to correctly resolve compatibility in a short amount of time. Users can interact with the package manager either by using its programmatic API (useful for scripting) or an interactive REPL mode, which can be entered by typing the closing square bracket ] in Julia's REPL. The package manager can also be used to run the tests for packages with a single command (Pkg.test using the API, or the command test in the REPL). Since Pkg.jl is a standard library and has many capabilities, all users are familiar with it and do not need to resort to third-party, mutually incompatible tools.

### Multiple dispatch and Fast Development

We group multiple dispatch and fast development (i.e., hot reloading) into the same section because of their direct impact on the design of packages and the quality of life of developers. A side-by-side comparison between OOP and multiple dispatch has been shown in Sec. 1.3. Here, we highlight that multiple dispatch is also a known solution to the _expression problem_[111, 112]. Essentially, in class-based OOP, one of the following is much less natural than the other:

* add new methods to an existing data type (class/struct);
* add a new data type (class/struct) for an existing algorithm (method).

In OOP, the second one is easy (think inheritance), but the first requires access to the source code. In Julia, the first one is trivial since methods do not bind to a data type (class/struct) to begin with. But the second one can also be easily done by sub-typing the upstream abstract type. Making it easy for developers to reuse existing packages is crucial in HEP also because libraries are sometimes under-maintained. If we can cleanly extend and reuse these libraries without making private forks, overall efficiency is boosted. For a dynamic language, hot reloading should not come as a surprise. It is, in fact, crucial for Julia, due to the inevitable latency introduced by the JIT compiler. The go-to package for this is Revise.jl[113], which automatically detects file modifications and re-compiles the relevant functions on the fly. It can also reload the source code of any Julia Base module, saving a lot of time that would otherwise be spent (re)compiling Julia itself.

### Automatic differentiation

The multiple dispatch system and the native speed of Julia eliminate the need for many specialized libraries to implement the same interface (e.g., the NumPy interface in JAX[114], TensorFlow[115], PyTorch[116]).
Instead, package maintainers only have to focus on providing rules for the built-in functions, as they are already fast, and downstream packages are mostly pure Julia too, see ChainRules.jl[117]. A dedicated organization, JuliaDiff [118], collects all the packages and efforts and documents what each autodiff engine is good at.

## 4 Foundation HEP-specific libraries to be developed or consolidated

### Integration in the ROOT framework

Because of the ubiquity of ROOT in HEP, a Julia interface to this framework, similar to the existing Python one, that will allow people familiar with ROOT to find their way easily, is essential for the development of Julia in the HEP community. In addition, this will provide access to a large set of software used in HEP (storage support, statistics tools, unfolding, etc.) before their counterparts are implemented in Julia.

### HEP-specific data storage format

It will be important to consolidate the support for the ROOT data format. The ROOT data format is very versatile and allows the storage of instances of arbitrary C++ classes (this is true of the current TTree and the new RNTuple format). Current Julia packages for ROOT I/O do not cover this whole versatility and do not allow for reading and writing files with objects of sophisticated types.

### Physics object types and histograms

Packages to manipulate Lorentz vectors and to build histograms are already available [75, 119]. Leveraging multiple dispatch, these packages are relatively easy to implement, and compose well with the rest of the Julia ecosystem (e.g., collections of 4-vectors can be stored and sorted efficiently without any special care). Defining a standard interface for Lorentz vectors and histogram data structures, with an approach similar to the Tables interface [120], could be beneficial.

### HEP specific statistical tools

Over the years, the HEP community has developed statistical standards to assert the level of confidence of experimental results, for measurements, limits and the observation of new phenomena. The Julia ecosystem contains several high-quality packages for Bayesian statistics and inference. Two examples are BAT.jl (Bayesian Analysis Toolkit in Julia) [121] and Turing.jl[122], which have already been used in several physics analyses. Both packages are being actively developed, with good communication between the development teams. Common interfaces [123, 124, 125] have been established to increase interoperability. More development is required for the frequentist CLs approach used at the LHC [126, 127, 128, 129] and based on profiled likelihood fits. The method is derived from the hybrid method of the same name developed at LEP [130, 131] and used later at the Tevatron [132, 133]. The C++ tools typically used by the LHC experiments are the RooFit (originally developed for the BaBar experiment [134, 135]) and RooStats libraries included in the ROOT framework. For binned distributions these libraries are used through the HistFactory[136] or HiggsCombine[127, 137, 138] tools. The pyhf package [139] provides a pure-Python implementation of HistFactory that offers different computational backends to perform the likelihood maximization and is gaining popularity. HistFactory, HiggsCombine, and pyhf are standalone tools, for which inputs are provided in text files (XML or JSON). Thanks to the transparent Julia-Python interface, pyhf can also be used in a Julia session or code. For a perfect integration and to exploit the language performance, a Julia implementation is desirable.
An effort to implement pyhf in Julia has already started [140] and would need to be consolidated. Histogram unfolding [141] is another statistical tool widely used in HEP experiments. It is used to correct for the effect of the finite resolution of the particle detectors in differential cross section measurements. The TUnfold[142] and RooUnfold[143] packages are the most commonly used. The RooFitUnfold[144] package provides an extension of RooUnfold. New techniques to perform unbinned high-dimensional data unfolding have recently been developed [145]. As for CLs, unfolding comes at the last step of a HEP data analysis, and a Julia implementation would be useful.

## 5 Limits of the Julia programming language

### Language popularity

Despite a user base smaller than those of C++ and Python, we have found that it is extremely easy to find information on the web, either from Stack Overflow or from dedicated channels on Discourse, Slack, and Zulip. The community is very collaborative. An annual conference, JuliaCon9, is boosting this collaboration. In particular, it encourages exchanges between different fields, both from academia and industry. The popularity of Julia is growing and it has been adopted by large academic projects, like the Climate Modeling Alliance (CliMA), and by companies like ASML, the largest supplier of photolithography systems; pharmacology actors like Pfizer, Moderna, and AstraZeneca [146, 147, 148]; and finance actors like Aviva, one of the largest insurers, and the Federal Reserve Bank of New York [149, 150, 151]. Footnote 9: [https://juliacon.org/](https://juliacon.org/)

### Just-in-time compilation latency

While applications written in Julia run faster than with an interpreted language, the first execution requires additional time to perform the just-in-time (JIT) compilation. In order to limit this overhead, the intermediate results of the compilation, called precompiled code, are cached on disk. The precompilation of a package code is typically performed in parallel at installation time, and the cached content includes, but is not limited to, lowered code, type inference results, etc.; at the time of writing, Julia does not yet cache compiled machine code10. The latency is often called "time-to-first-plot". Footnote 10: Progress is being made, see [https://github.com/JuliaLang/julia/pull/44527](https://github.com/JuliaLang/julia/pull/44527)

The JIT compilation latency has been improved from version to version, in particular with versions 1.5, 1.6, 1.8, and 1.9, by reducing the number of required recompilations. The various sources of latency have been studied extensively [152, 153] and the reduction of the time-to-first-plot is a high priority for the compiler team. Besides improvements coming from the compiler, following the general guidelines of Julia code style for performance [40], which ensure that the compiler can easily infer variable types, should reduce such latency [152]. At the same time, tools have been developed both to help "hunt" down unnecessary recompilation (SnoopCompile.jl[154]) and to help precompile known common user routines at installation time (PrecompileTools.jl[155]). The latency can also be drastically reduced by preparing a custom system image: the system image contains cached machine code for a set of precompiled packages and past executions. It comes with the drawback that the versions of the packages shipped in the system image take precedence over the ones installed via the package manager [156], which can be confusing and be a source of bugs.
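In practice, such a custom system image can be produced with the PackageCompiler.jl package (not discussed above; the sketch below assumes its documented create_sysimage interface, and the precompile script name is invented):

```julia
# Build a custom system image containing Plots and its compiled code.
using PackageCompiler

# precompile_plots.jl is an invented script that exercises typical plotting
# calls, so that the corresponding machine code ends up in the image.
create_sysimage(["Plots"];
                sysimage_path = "sys_plots.so",
                precompile_execution_file = "precompile_plots.jl")
```

The image is then selected at start-up, e.g., julia --sysimage sys_plots.so, after which the first plot appears almost immediately.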
Updating these packages requires rebuilding the custom system image. The time to produce a first plot, consisting of a 2-D plot of 100 points, was measured to be 2.09\(\pm\)0.01 s with Julia 1.9.0-rc1 and the Plots.jl[76] package. The Makie.jl package took 7.57\(\pm\)0.02 s using the Cairo backend. The time is similar with the GL backend. Subsequent plots take less than a millisecond. Building a custom system image brings the latency down to below 50 ms for both packages. While for Plots.jl the latency using the standard system image is acceptable, building a custom system image would make the use of Makie.jl for an interactive session or for a short batch script much more convenient. To measure the improvement brought by the efforts of the Julia developers, the measurement was repeated with the older long-term-support release 1.6.7. With this older release the result is 29.0\(\pm\)0.1 s for Makie.jl, showing an improvement larger than a factor of 4. The start-up time could be a concern for large HEP experiment simulation and reconstruction software. As an example of software size, the CMS experiment software, CMSSW [157], totals more than 2 million lines of C++ code. The assessment was done with release 12.3.5 and the number of lines of code was defined as the number of semicolons contained in the code. In lieu of a similarly sized HEP software package written in Julia, we have measured the start-up time of the relatively large package OrdinaryDiffEq, using its version 6.49.4. The package consists of about 125,000 lines of Julia code, excluding comments, and 390,000 when including the external packages. The lines of code have been counted with the Tokei software [158] version 12.1.2, and the extra time to run Example 1 of the manual [159] the first time, compared to subsequent executions, was 5.91\(\pm\)0.01 s. It goes down to 826\(\pm\)2 ms when using a custom system image. We should also note that the precompilation happening on package installation for the package and its dependencies (120 packages) took only 256 s. For a large experiment software framework, attention will need to be paid to limit code invalidation by respecting the guidelines that ease type inference. This will also help the compiler to optimize the code. While minimizing start-up time may require some effort for a large HEP project, we do not expect it to be a show stopper. At worst, it will require using custom system images, with a compromise on package management. In addition, development to improve the start-up time is ongoing and we should expect significant progress in the near future [160].

### Application programming interface specification

Julia lacks a single standard to define the application programming interface (API) of a package. The one with the best support is the use of the export directive to list the symbols exposed to the user. The directive is recognized by the language's introspection functions. The names function lists, by default, the exported symbols, with an option to list all symbols. The methodswith function, used to retrieve functions with an argument of a given type, will list only functions from the export list. Nevertheless, the export directive has the side effect that all the public symbols end up in the user's namespace if the package is imported with the using statement. To quote the Julia manual [40], "it is common to export names which form part of the API. [...] Also, some modules don't export names at all.
This is usually done if they use common words, such as derivative, in their API, which could easily clash with the export lists of other modules.". The Julia language itself uses the user manual to define its API, as explained in the "Frequently asked questions" section of this document [40]. With such an approach, we lose the benefit of the introspection functions, which are themselves agnostic to the API information. A built-in unexport directive, which would allow listing public symbols that the using statement must keep in the module namespace, and which would be recognized by the introspection functions and also by the documentation generator [161], would be very beneficial.

## 6 Training and language transition support

Julia has been successfully introduced into existing teams, gradually replacing their C++ with Julia packages over time, for example in the LEGEND and BAT groups at the Max-Planck-Institute for Physics. Julia is also the official secondary language (after Python) of the whole LEGEND [162] collaboration. The observed experience is that students with a basic programming background (e.g., in Python or C++) learn the language very quickly and become productive after just a few days. After exposure to the language for a few weeks, students are typically able to make contributions to larger software packages as well. No problems have been found using Julia for short-term thesis work (e.g., three-month bachelor theses) and even two-week internships, with students and interns who were new to the language. The reaction of these students has been uniformly positive. Master and PhD theses that used Julia as the primary language have resulted in very positive experiences for both students and supervisors. Students who use Julia in longer-term projects not only become very proficient in the language, but also gain a lower-level understanding of computing, data structures and the performance implications of modern hardware in general, compared to students who work in Python. This is because Julia makes it very easy to move between higher-level and lower-level programming, in contrast to the Python-plus-C++ two-language approach. More code reuse and transfer has been observed across student generations in Julia, compared to C++. This is due to the combination of excellent package management with the use of multiple dispatch as a foundation. The first simplifies the maintenance of systems consisting of smaller and more modular packages, while the second solves the _expression problem_.

## 7 Conclusions

The Julia programming language has been presented and compared with C++ and Python. To study the potential of Julia for HEP, a list of requirements for offline and software-based-trigger HEP applications, covering both the language and its ecosystem, has been established. The compatibility of Julia with these requirements has been studied. Julia and its ecosystem fulfill all these requirements impressively. Moreover, Julia brings other features, such as an integrated packaging system with reproducibility support, multiple dispatch, and automatic differentiation, from which HEP applications would benefit. The capacity to provide, at the same time, ease of programming and performance makes Julia the ideal programming language for HEP data analysis and, more generally, an important asset for all the considered HEP applications. The dynamic multiple dispatch paradigm of Julia has proven to ease code reuse.
This property will greatly benefit HEP community applications that involve code developed by many people from many different groups. Using a single and easy programming language will facilitate training. Experience has shown that students with either a C++ or Python background learn the language very quickly, being productive after a few days. Using Julia as the mainstream language in a collaboration allows students on short-term projects to use the common programming language, while in the case of C++, using a simpler language such as Python is often needed. This eases the reuse of the code developed in such contexts. We have measured the performance provided by the language in the context of HEP data analysis. The measurements show excellent runtime performance, competitive with C++: 11% faster for the simple LHC event analysis example used as a benchmark. When compared to Python, in addition to being faster, it is much less sensitive to implementation choices. The Python implementation was shown to be three orders of magnitude slower than Julia when the event loop is performed in Python. Vectorization techniques can be used to move the event loop into underlying compiled libraries, and this reduces the gap in performance. One difference with C++ and Python is that Julia is younger and has a smaller community. The Julia community is very collaborative and, despite its lower popularity, information for developing with this language is easy to find on the Internet. Julia's rapid growth in academia and industry gives us confidence in the long-term continuity of the Julia language, which is essential for HEP projects because of their large time span. In view of this study, the HEP community will definitely benefit from a large-scale adoption of the Julia programming language for its software development. Consolidation of HEP-specific foundation libraries will be essential to ease this adoption.

## Acknowledgements

The authors would like to thank Enrico Guiraud (Princeton University and CERN) for the review of the C++ code used in the dimuon analysis benchmark and for fruitful discussions.

## Endorsement

Johannes Blaschke1, Ankur Dhar2, Matthew Feickert3, Sam Foreman, Cornelius Grunwald5, Alexander Held3, Philip Ilten6, Adam L. Lyon7, Mark Neubauer8, Ianna Osborne9, Johannes Schumann10, Daniel Spitzbart11, James Simone7, Rongkun Wang12, Michael Wilkinson6, and Efe Yazgan13 from the High Energy Physics and Astrophysics communities endorse this work. The authors thank them for their support.

Footnote 1: NERSC, Lawrence Berkeley National Laboratory
Footnote 2: SLAC National Accelerator Laboratory
Footnote 3: University of Wisconsin–Madison
Footnote 4: Fakultät Physik, Technische Universität Dortmund
Footnote 5: Massachusetts Institute of Technology
Footnote 6: University of Cincinnati
Footnote 7: Fermi National Accelerator Laboratory
Footnote 8: University of Illinois Urbana-Champaign
Footnote 9: Princeton University
Footnote 10: Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg
Footnote 11: Boston University, Boston
Footnote 12: Harvard University, Cambridge
Footnote 13: National Taiwan University
2305.01532
Maximal polarization for periodic configurations on the real line
We prove that among all 1-periodic configurations $\Gamma$ of points on the real line $\mathbb{R}$ the quantities $$ \min_{x \in \mathbb{R}} \sum_{\gamma \in \Gamma} e^{- \pi \alpha (x - \gamma)^2} \quad \text{and} \quad \max_{x \in \mathbb{R}} \sum_{\gamma \in \Gamma} e^{- \pi \alpha (x - \gamma)^2}$$ are maximized and minimized, respectively, if and only if the points are equispaced and whenever the number of points $n$ per period is sufficiently large (depending on $\alpha$). This solves the polarization problem for periodic configurations with a Gaussian weight on $\mathbb{R}$ for large $n$. The first result is shown using Fourier series. The second result follows from work of Cohn and Kumar on universal optimality and holds for all $n$ (independent of $\alpha$).
Markus Faulhuber, Stefan Steinerberger
2023-05-02T15:41:13Z
http://arxiv.org/abs/2305.01532v2
# Maximal polarization for periodic configurations on the real line ###### Abstract. We prove that among all periodic configurations \(\Gamma\) of points on the real line \(\mathbb{R}\) the quantities \[\min_{x\in\mathbb{R}}\sum_{\gamma\in\Gamma}e^{-\alpha(x-\gamma)^{2}}\quad \text{and}\quad\max_{x\in\mathbb{R}}\sum_{\gamma\in\Gamma}e^{-\alpha(x-\gamma)^ {2}}\] are maximized and minimized, respectively, if and only if the points are equispaced points whenever the number of points per period is sufficiently large (depending on \(\alpha\)). This solves the polarization problem for periodic configurations with a Gaussian weight on \(\mathbb{R}\). The first result is shown using Fourier series, the second follows from work of Cohn-Kumar on universal optimality. Key words and phrases:equispaced points, Gaussian, periodic configuration, polarization 2020 Mathematics Subject Classification: 52C25, 74G65, 82B21 M.F. is supported by the Austrian Science Fund (FWF) grant P33217. S.S. is supported by the NSF (DMS-2123224) and the Alfred P. Sloan Foundation. This result is not particularly surprising - it is exactly what one would expect. However, to the best of our knowledge no "easy" proof of the Cohn-Kumar theorem is known; we refer to SS2.1 for a more in-depth discussion of this result. We prove the dual result in the regime when the number of points is sufficiently large, where "sufficiently large" depends only on the width \(\alpha\) of the Gaussian. **Theorem** (Main Result).: _For \(n\geq N(\alpha)\) sufficiently large (depending only on \(\alpha\))_ \[\min_{x\in\mathbb{T}}f_{\alpha}(x)=\min_{x\in\mathbb{T}}\sum_{j=1}^{n}\theta_{ \alpha}(x-x_{j})\qquad\text{is maximized}\] _if and only if the points are equispaced._ We remark that the result is invariant under global shifts \(z\) and that _equispaced_ is always understood periodically. The sets \(\{x_{1},\ldots,x_{n}\}\) and \(\{x_{1}+z,\ldots,x_{n}+z\}\) are always identified. The argument is structurally completely different from the Cohn-Kumar framework of universal optimality (see SS2) and invokes very different tools. One of the main difficulties when establishing the result is that 1. the location of the minimum depends on the \(x_{j}\) in a complicated way and 2. for equispaced points the difference between minimum and mean is super-exponentially small in \(n\), which forces an analysis on very small scales. The proof of the main result is completely Fourier-analytic which makes it somewhat robust and applicable to a wider range of functions than just the Gaussian function; if one has, generally, a function of the type \[g(x)=\sum_{k\in\mathbb{Z}}\widehat{g}(|k|)e^{2\pi ikx},\] with \(\widehat{g}(|k|)\) decaying sufficiently fast (say, faster than exponential), then much (but not all) of the argument carries over verbatim. For simplicity of exposition, the remainder of the paper only deals with the Gaussian case which is arguably the most natural. The proof is explicit enough that bounds on \(N(\alpha)\) could be obtained, however, since one would naturally assume that the result is true for all \(n\geq 2\), independently of the value of \(\alpha\), we will not track this dependency. The condition \(n\geq N(\alpha)\) is necessary in many different steps of our argument and it appears that an unconditional argument for all \(n\geq 2\) would require some new ideas. ## 2. Related results ### Energy minimization Energy minimization problems have received much attention in recent years. 
A truly seminal result due to Cohn, Kumar, Miller, Radchenko and Viazovska [21] states that the \(\mathsf{E}_{8}\)-lattice and Leech lattice are universally optimal in their respective dimension, meaning that they uniquely minimize energy \(E_{g}(\Gamma)\) among periodic configurations \(\Gamma\) and for a large class of (radial) potential functions \(g\). The energy is given by \[E_{g}(\Gamma)=\frac{1}{n}\sum_{k=1}^{n}\sum_{j=1}^{n}\sum_{\lambda\in\Lambda \setminus\{x_{j}-x_{k}\}}g(|\lambda+x_{j}-x_{k}|). \tag{2.1}\] Here, \(\Lambda\subset\mathbb{R}^{d}\) is a lattice, i.e., a discrete co-compact subgroup of \(\mathbb{R}^{d}\) and has density \(1/\mathrm{vol}(\mathbb{R}^{d}/\Lambda)\). We refer to [19, 21] for details on the energy minimization problem and to the textbook of Conway and Sloane [22] for an introduction to lattices, packing problems and covering problems as well as to the article of Schuermann and Vallentin [39]. In [19] Cohn and Kumar showed that on the real line \(\mathbb{R}\) (and at all scales) the scaled integer lattice is universally optimal. They obtained their result by constructing a "magic function" (using a version of the classical sampling theorem) which proved that the linear programming bounds for the problem (obtained in the same work) are indeed sharp for the scaled integer lattice. An alternative proof, also given in [19] is via spherical designs. Numerically, the hexagonal lattice also meets the linear programming bound for the energy minimization problem in dimension \(2\). However, a proof of its universal optimality is still missing. The results are linked to optimal sphere packings and the linear programming bounds for the sphere packing problem obtained by Cohn and Elkies [18]. In seminal work, the sphere packing problem in dimension \(8\) was solved by Viazovska [45] and in dimension \(24\) by Cohn-Kumar-Miller-Radchenko-Viazovska [20]. The problem of energy minimization has also been treated on the sphere \(\mathbb{S}^{d-1}\subset\mathbb{R}^{d}\), which in the case of \(d=2\) is a problem of distributing points on the circle line \(\mathbb{S}^{1}\cong\mathbb{T}\). Often, for general \(d\geq 2\), a connection to spherical \(t\)-designs is given when distributing points on a sphere. We refer to the review by Brauchart and Grabner [17] and to Hardin-Saff [27] for the classical problem of Riesz energy minimization. More recent results on energy minimizing point distributions on spheres were obtained by Beltran-Etayo [5] or Bilyk-Glazyrin-Matzke-Park-Vlasiuk [10]. For spherical \(t\)-designs we refer to the breakthrough of Bondarenko-Radchenko-Viazovska [12] and to work of the second author [41] for upper bounds. ### Polarization problems The polarization problem asks to place light sources such that the darkest point has maximal illumination. Often such problems are considered for compact manifolds, such as the sphere. We refer, e.g., to articles, published in different constellations, by Borodachov, Boyvalenkov, Hardin, Reznikov, Saff and Stoyanova [13, 14, 15, 16]. For more numerical investigations and algorithms we refer to the work by Rolfes, Schuler and Zimmermann [37]. The problem of polarization for Riesz potentials and lattices in \(\mathbb{R}^{d}\) was asked by Saff (cf. the problem collection curated by _American Institute of Mathematics_ for the workshop _Discrete Geometry and Automorphic Forms_). 
We note that many physically important potentials, such as the Riesz potential, can be written as a Laplace transform of a non-negative measure \(\mu\). More precisely, any completely monotone function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\), meaning \((-1)^{k}f^{(k)}(x)\geq 0\), \(\forall k\geq 1\), is the Laplace transform of a non-negative Borel measure as a consequence of the Bernstein-Widder theorem [6, 47] (see also the textbook of Schilling, Song, Vondracek [38, Chap. 1]). Some results on polarization on \(\mathbb{S}^{1}\) for sufficiently fast decaying and convex potentials have been obtained in [15, Chap. 14.3]. We remark that the Gaussian potential does not fall into the class of completely monotone functions as it is not convex. However, by adjusting the Laplace transform and the Bernstein-Widder theorem, we get completely monotone functions of squared distance (compare [21]): \[g(r^{2})=\int_{0}^{\infty}e^{-\alpha r^{2}}\,d\mu(\alpha).\] Note that in the usual Laplace transform we have \(r\) and not \(r^{2}\) in the exponent. For example, the Riesz potentials, also known as inverse power laws (compare again, e.g., [21]), are obtained as \[\frac{1}{r^{s}}=\int_{0}^{\infty}e^{-\alpha r^{2}}\frac{\alpha^{s/2-1}}{\Gamma (s/2)}\,d\alpha.\] If our result were to hold for all \(\alpha>0\) (when \(n\) is fixed), one would immediately have a corresponding result for Riesz potentials as well as the whole class of completely monotone functions of squared distance (given sufficiently fast decay): \[\sum_{\gamma\in\Gamma}\left(\int_{0}^{\infty}e^{-\alpha\gamma^{2}}\,d\mu(\alpha) \right)=\int_{0}^{\infty}\left(\sum_{\gamma\in\Gamma}e^{-\alpha\gamma^{2}} \right)\,d\mu(\alpha).\] ### Lattices in \(\mathbb{R}^{2}\) Despite the seminal work of Cohn, Kumar, Miller, Radchenko and Viazovska [21] and overwhelming numerical evidence, the universal optimality of the hexagonal lattice, also known as \(A_{2}\) root lattice or sometimes triangular lattice, is still open to date. The best available result is due to Montgomery [32] and states that the hexagonal lattice is optimal among lattices at all scales. More recently, the polarization problem among \(2\)-dimensional lattices has been solved by the authors in joint work with Betermin [8]. Local optimality of the hexagonal lattice for lattice polarization and certain potential functions has been derived by the authors in [23]. In [7], Betermin and the first author showed that the hexagonal lattice maximizes _Madelung-like_ lattice energies (lattice points have alternating signs). This result is somewhat in-between the result of Montgomery [32] and the joint result of the authors with Betermin [8] as it does neither clearly relate to sphere packing nor covering. Related results concerning the Lennard-Jones potential (see Betermin-Zhang [9]), which is \(r\mapsto r^{-12}-2r^{-6}\) and neither non-negative nor monotonic nor convex, show that for different densities different geometrical arrangements can be optimal. This phenomenon is widely called _phase transition_. Some physically relevant consequences of the conjectured universal optimality of the hexagonal lattice (and proven optimality of \(\mathsf{E}_{8}\) and Leech lattice) are discussed by Petrache-Serfaty [36]. A general survey is given by Lewin-Blanc [29]. ### Heat Equation Sampling Our result solves the following problem on \(\mathbb{S}^{1}\) as a byproduct. The problem was originally discussed by Pausinger and the second author [34] on \(\mathbb{T}^{2}\). 
Suppose there is an unknown distribution of heat \(f\in L^{1}(\mathbb{S}^{1})\) and we are interested in estimating the total heat \(\int_{\mathbb{S}^{1}}f(x)dx\). If the function \(f\) is only in \(L^{1}\) then no effective sampling strategies are possible. If we now assume, however, that some time \(t>0\) has passed, then the solution of the heat equation \(e^{t\Delta}f\) with \(f\) as initial conditions satisfies \[\int_{\mathbb{S}^{1}}f(x)dx=\int_{\mathbb{S}^{1}}\left[e^{t\Delta}f\right](x)dx\] and is also a more regular function for which sampling strategies should be possible. **Corollary 2.4.1**.: _For any \(t>0\) and all \(n\geq N(t)\) sufficiently large (depending only on \(t\)) the worst case sampling error_ \[\sup_{f\in L^{1}(\mathbb{S}^{1})}\left|\frac{1}{n}\sum_{k=1}^{n}\left[e^{t \Delta}\right](x_{k})-\int_{\mathbb{S}^{1}}f(x)dx\right|\] _is minimized if and only if the sampling points \(\{x_{1},\ldots,x_{n}\}\) are equispaced._ Proof.: By linearity, we can assume that \(f\) has mean value \(0\). Then, interpreting the solution of the heat equation as a self-adjoint Fourier multiplier, we have \[\frac{1}{n}\sum_{k=1}^{n}\left[e^{t\Delta}\right](x_{k})=\left\langle e^{t \Delta}f,\frac{1}{n}\sum_{k=1}^{n}\delta_{x_{k}}\right\rangle=\left\langle f, \frac{1}{n}\sum_{k=1}^{n}e^{t\Delta}\delta_{x_{k}}\right\rangle.\] Recalling that the solution of the heat equation started with a Dirac is exactly the Jacobi \(\theta\)-function, we have \[\left[e^{t\Delta}\delta_{x_{k}}\right](x)=\theta_{t}(x-x_{k})\] and thus, using the fact that \(f\) has mean value \(0\), \[\left\langle f,\frac{1}{n}\sum_{k=1}^{n}e^{t\Delta}\delta_{x_{k}}\right\rangle =\left\langle f,\frac{1}{n}\sum_{k=1}^{n}\theta_{t}(x-x_{k})\right\rangle= \left\langle f,\frac{1}{n}\sum_{k=1}^{n}\left(\theta_{t}(x-x_{k})-1\right) \right\rangle.\] Using \(L^{1}-L^{\infty}\) duality, we arrive that \[\sup_{f\in L^{1}(\mathbb{S}^{1})}\left|\left\langle f,\frac{1}{n}\sum_{k=1}^{ n}\left(\theta_{t}(x-x_{k})-1\right)\right\rangle\right|=\left\|\frac{1}{n} \sum_{k=1}^{n}\left(\theta_{t}(x-x_{k})-1\right)\right\|_{L^{\infty}}.\] Our results show that the maximum is minimized and the minimum is maximized if and only if the points are equispaced. This implies the statement. ### Shift invariant systems A shift invariant system \(V^{2}(g)\) on \(\mathbb{R}\) with a generator \(g\in L^{2}(\mathbb{R})\) is a space of functions of the form \[V^{2}(g)=\{f(x)=\sum_{k\in\mathbb{Z}}c_{k}\,g(x-k)\mid(c_{k})\in\ell^{2}( \mathbb{Z})\}.\] An example is the classical Paley-Wiener space \(PW(\mathbb{R})\) of band-limited functions, i.e., \(\operatorname{supp}(\widehat{f})\subset[-1/2,1/2]\), which is generated by \(\operatorname{sinc}(x)=\sin(\pi x)/(\pi x)\). For a set \(\Gamma\subset\mathbb{R}\), we say that it is a set of sampling for \(V^{2}(g)\) if and only if there exist positive constants \(0<A\leq B<\infty\) such that \[A\|f\|_{L^{2}(\mathbb{R})}^{2}\leq\sum_{\gamma\in\Gamma}|f(\gamma)|^{2}\leq B \|f\|_{L^{2}(\mathbb{R})}^{2},\quad\forall f\in V^{2}(g).\] For the motivation of (nonuniform) sampling in \(V^{2}\) we refer to the article by Aldroubi-Grochenig [2]. Characterizing sampling sets for given generator \(g\) is a very difficult problem. A necessary condition is that the (lower Beurling) density of the set is at least \(1\). The case of density \(1\) is referred to as critical sampling. 
For a large class of functions, including the Gaussian function \(x\mapsto e^{-\alpha x^{2}}\), \(\alpha>0\), the problem is solved by Grochenig, Romero, Stockler [26]. The case of critical sampling with Gaussian generator is treated by Baranov, Belov, Grochenig [4]. Our results suggest that for the space \(V^{2}(\phi_{\alpha})\), where \(\phi_{\alpha}\) is a Gaussian, the bound \(B\) is minimal and \(A\) is maximal for equispaced sampling. Lastly, we mention the relatively new area of dynamical sampling introduced Aldroubi, Cabrelli, Molter, Tang [1]. This combines the sampling problem with dynamical systems. In particular, we find connections between the heat equation and the sampling problem, as described by Aldroubi, Grochenig, Huang, Jaming, Krishtal, Romero [3]. Ulanovskii-Zlotnikov [44] describe sampling sets for \(PW(\mathbb{R})\) so that \(f\) can be reconstructed from samples of \(f*\varphi_{t}\), where \(\varphi_{t}\) is a convolution kernel of a dynamical process. It would be interesting to see how our results connect to this area. ## 3. Notation and remarks ### Basic notation To clarify normalization, we note that we use the following version of the Fourier transform of a suitable function \(f\) on the real line. \[\widehat{f}(\omega)=\int_{\mathbb{R}}f(x)e^{-2\pi i\omega x}\,dx\qquad\text{so} \qquad\|f\|_{L^{2}(\mathbb{R})}=\|\widehat{f}\|_{L^{2}(\mathbb{R})}\] Thus, the Poisson summation formula reads (see, e.g., Grochenig [25, Chap. 1.4]) \[\sum_{k\in\mathbb{Z}}f(k+x)=\sum_{\ell\in\mathbb{Z}}\widehat{f}(\ell)e^{2\pi i \ell x}.\] The Fourier transform of a Gaussian is another Gaussian, differently scaled (see, e.g., Folland [24, App. A]); \[\text{if}\qquad\phi_{\alpha}(x)=e^{-\pi\alpha x^{2}},\ \alpha>0,\qquad\text{ then }\qquad\widehat{\phi_{\alpha}}(\omega)=\frac{1}{\sqrt{\alpha}}e^{-\frac{\pi}{ \alpha}\omega^{2}}=\frac{1}{\sqrt{\alpha}}\phi_{1/\alpha}(\omega).\] The periodization of \(\phi_{\alpha}\) will be called a periodic Gaussian: \(\sum_{k\in\mathbb{Z}}\phi_{\alpha}(x+k)\). A periodic configuration \(\Gamma\) with period \(\delta\) on the real line is a set of points of the following form: \[\Gamma=\bigcup_{k=1}^{n}\Lambda+x_{k},\quad\text{ where }\quad\Lambda=\delta \mathbb{Z},\,\delta>0,\quad x_{k}\neq x_{j},\ k\neq j,\quad x_{k}\in[0,\delta).\] The density \(\rho\) of a configuration \(\Gamma\) is the number \(n\) of points per period \(\rho=n/\delta\). ### Polarization on the real line We are now interested in the following polarization problem: which periodic configuration of fixed density \(\rho\) maximizes \[\min_{x}\frac{1}{\sqrt{\alpha}}\ \sum_{\gamma\in\Gamma}\phi_{1/\alpha}(x- \gamma)\;?\] We call the above quantity the polarization of \(\Gamma\) and seek to find the maximal polarization. In general, the minimum depends on \(\Gamma\) and its density \(\rho\) as well as on \(\alpha\). For equidistributed points, however, the minimum is always achieved midway between successive points (as we will prove as part of the proof of the main result). 
The polarization may explicitly be written in one of the following ways: \[\min_{x}\frac{1}{\sqrt{\alpha}}\sum_{\gamma\in\Gamma}\phi_{1/ \alpha}(x-\gamma) =\min_{x}\frac{1}{\sqrt{\alpha}}\sum_{j=1}^{n}\sum_{k\in\mathbb{Z }}e^{-\pi\frac{\delta^{2}}{\alpha}\left(k+\frac{x_{j}-x}{\delta}\right)^{2}} \tag{3.1}\] \[=\min_{x}\frac{1}{\delta}\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{- \pi\frac{\alpha}{\delta^{2}}k^{2}}e^{2\pi ik\frac{(x_{j}-x)}{\delta}}, \tag{3.2}\] where the second equality is due to the Poisson Summation Formula. Note that in this explicit formula \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\). We will next show that we may always assume \(\delta=1\). Clearly, neither the factor \(1/\sqrt{\alpha}\) nor the factor \(1/\delta\) are of relevance for the minimization process or determination of the maximizing configuration. As \(\alpha>0\) is arbitrary (but fixed) we may substitute it by \(\delta^{2}/\alpha\) in (3.1), obtaining \[\min_{x}\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha\left(k+\frac{x_{j}-x }{\delta}\right)^{2}}, \tag{3.3}\] which is periodic with period \(\delta\). Lastly, scaling by \(\delta\) yields the expression \(p_{\alpha}(x)\) (up to a \(\pi\) in the exponent kept for technical reason) from (1.1); \[\min_{x}p_{\alpha}(x)=\min_{x}\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha(k +x_{j}-x)^{2}}.\] This is periodic with period \(1\) and has the same minimum (and maximum) as the expression in (3.3). Only the points where the minimum (and maximum) are assumed have to be put to scale. Using similar arguments we see that finding the optimal configuration for (3.2) is the same as maximizing \[\min_{x}\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ik(x_{j} -x)}=\min_{x}\sum_{j=1}^{n}\theta_{\alpha}(x_{j}-x).\] This is (up to flipping the argument) exactly the quantity \(f_{\alpha}(x)\) from (1.2) considered in our main result. Note that by the Poisson Summation Formula \[f_{\alpha}(x)=\frac{1}{\sqrt{\alpha}}\,p_{1/\alpha}(x).\] ### Theta functions The problem can be written as a variational problem for a finite superposition of real-valued theta functions. For parameter \(\tau\in\mathbb{H}\) (Siegel upper half-plane) and argument \(z\in\mathbb{C}\) the classical theta function is \[\vartheta(z;\tau)=\sum_{k\in\mathbb{Z}}e^{\pi i\tau k^{2}}e^{2\pi ikz}.\] This function is holomorphic in \(\tau\) and entire in \(z\). For \(\tau=i\alpha\), \(\alpha>0\) and \(z=x\in\mathbb{R}\) the function becomes real-valued and we use the notation: \[\theta(x;\alpha)=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ikx}=\sum_{ k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\cos(2\pi kx)=1+2\sum_{k\geq 1}e^{-\pi\alpha k^{2} }\cos(2\pi kx).\] Note that the function \(\theta(x;\alpha)\) is the heat kernel on the flat torus \(\mathbb{R}/\mathbb{Z}\). As such it has mean value \(1\), which is easily verified by a small computation; \[\int_{0}^{1}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ikx}\,dx=\sum_{k \in\mathbb{Z}}e^{-\pi\alpha k^{2}}\int_{0}^{1}e^{2\pi ikx}\,dx=\sum_{k\in \mathbb{Z}}e^{-\pi\alpha k^{2}}\delta_{k,0}=1,\] where \(\delta_{k,0}\) is the Kronecker delta. 
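As an illustrative numerical aside, not part of the argument, one can quickly check the two normalizations used above in a few lines of Julia: that \(\theta(\,\cdot\,;\alpha)\) has mean value \(1\) over one period, and that, for equispaced points, the sum of shifted theta functions is smallest midway between successive points (anticipating Proposition 5.1.1 below). The parameter values and truncation are chosen only for the illustration.

```julia
# Numerical sanity check of the periodized Gaussian / theta function above.
theta(x, a; K = 50) = 1 + 2 * sum(exp(-pi * a * k^2) * cos(2pi * k * x) for k in 1:K)

a, n = 0.3, 5
xs = range(0, 1; length = 10_001)

# Mean value over one period should be 1 (left Riemann sum over a full period).
vals = theta.(xs, a)
println("mean   ≈ ", sum(vals[1:end-1]) / (length(xs) - 1))

# Sum of n equispaced shifts: the minimizer should sit at a point (k + 1/2)/n.
f(x) = sum(theta(x - j / n, a) for j in 1:n)
imin = argmin(f.(xs))
println("argmin ≈ ", xs[imin], "   (expected ≈ ", 0.5 / n, " modulo 1/n)")
```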
The function \(\vartheta(z;\tau)\) and, hence, \(\theta(x;\alpha)\) can be expressed as an infinite product known as the Jacobi triple product, which is a special case of the MacDonald identities for affine root systems [30]: \[\vartheta(z;\tau) =\prod_{k\geq 1}\left(1-e^{2k\pi i\tau}\right)\left(1+e^{(2k-1) \pi i\tau}e^{2\pi iz}\right)\left(1+e^{(2k-1)\pi i\tau}e^{-2\pi iz}\right)\] \[=\prod_{k\geq 1}\left(1-e^{2k\pi i\tau}\right)\left(1+2\cos(2\pi z )e^{(2k-1)\pi i\tau}+e^{2(2k-1)\pi i\tau}\right).\] We refer to Mumford [33], Stein-Shakarchi [40] or Whittaker-Watson [46] for more details on elliptic functions. ## 4. Proof of Proposition 1.1 Proposition 1.1 follows relatively easily from the work of Cohn-Kumar [19]. The heart of the argument has three ingredients: 1. first, universal optimality shows that, for any fixed \(\alpha>0\), the interaction energy \[\frac{1}{n}\sum_{k,j=1}^{n}\theta_{\alpha}(x_{j}-x_{k})\qquad\text{is minimized for equispaced points.}\] 2. The second ingredient is a trivial estimate that arises from replacing an average (arithmetic mean) of values by its maximum \[\max_{x}\sum_{k=1}^{n}\theta_{\alpha}(x-x_{k})\geq\frac{1}{n}\sum_{j=1}^{n} \sum_{k=1}^{n}\theta_{\alpha}(x_{j}-x_{k}).\] 3. The third ingredient is that (2) is _sharp_ whenever the points are equispaced (which, simultaneously, by universal optimality, minimizes the lower bound in (2) just above). There is a magic ingredient where, for equispaced points, the maximum of \(\sum_{k=1}^{n}\theta_{\alpha}(x-x_{k})\) is attained at the points \(x_{j}\) themselves. We remark that the counterpart to (1) is false for the minimization problem. Likewise, regarding (3), the location of the minimum depends in a highly nonlinear fashion on the location of the points. Understanding the minimum and the considered polarization problem thus requires a different approach. Proof.: (1) We note that the energy for the potential \(\Phi=1/\sqrt{\alpha}\,\phi_{1/\alpha}\) is given by \[E_{\Phi}(\Gamma)=\frac{1}{n}\sum_{k=1}^{n}\sum_{j=1}^{n}\sum_{\ell\in\mathbb{ Z}}\frac{1}{\sqrt{\alpha}}\phi_{1/\alpha}(\ell+x_{j}-x_{k})=\frac{1}{n}\sum_{k=1}^ {n}\sum_{j=1}^{n}\theta_{\alpha}(x_{j}-x_{k}),\] where the second equality comes from the Poisson Summation Formula. The potentials are sitting on the periodic configuration \(\Gamma\). However, not only their sum is considered but all their pairwise interactions and the sum over all of them. The condition \(\lambda\in\Lambda\backslash\{x_{k}-x_{j}\}\) in (2.1) excludes self-interaction as the potential function \(g\) is allowed to be singular at \(0\) (this is also of physical relevance). For the Gaussian, we may allow self-interaction (which adds a fixed additive constant determined by normalization, but independent of \(\Gamma\)) and we do not need to exclude it. If \(\Gamma_{0}=\cup_{j=1}^{n}\mathbb{Z}+(j-1)/n=(1/n)\mathbb{Z}\), then the energy can be written as (after applying the Poisson Summation Formula) \[E_{\Phi}(\Gamma_{0})=\frac{1}{n}\sum_{k=1}^{n}\sum_{j=1}^{n}\theta_{\alpha} \left(\ell+\frac{j-k}{n}\right)=\frac{1}{n}\sum_{k=1}^{n}\sum_{j=1}^{n} \theta_{\alpha}\left(\ell+\frac{j}{n}\right)=\sum_{j=1}^{n}\theta_{\alpha} \left(\ell+\frac{j}{n}\right),\] where the second and third equality are due to the periodicity of \(\theta_{\alpha}\). 
The universal optimality of the (scaled) integers due to Cohn-Kumar [19] states, for all \(\alpha>0\), \[E_{\Phi}(\Gamma_{0})\leq E_{\Phi}(\Gamma)\quad\text{ with equality if and only if }\quad\Gamma=\Gamma_{0}.\] Note that the result in [19] as well as our also hold for arbitrary scaling. (2) is a trivial observation and does not require any more details. (3) For \(\Gamma_{0}\) the maxima of \(p_{\alpha}\) (or likewise \(f_{\alpha}\)) are attained at the equispaced points \(\{0,1/n,\ldots,(n-1)/n\}\) (compare Proposition 5.1.1). This follows by a simple application of Poisson Summation Formula and the triangle inequality. This allows for various additional tools to be used, in particular, it allows for a lossless application of the triangle inequality. We will give the proof for the integers \(\mathbb{Z}\) but the proof can easily be adjusted to scaled integers \(\delta\mathbb{Z}\) (replace \(k\) by \(k/\delta\) and adjust the Poisson summation formula accordingly). We perform the following small computation: \[\frac{1}{\sqrt{\alpha}}\,p_{1/\alpha}(x) =f_{\alpha}(x)=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ikx}\] \[\leq\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left|e^{2\pi ikx} \right|=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\] \[=f_{\alpha}(0)=\frac{1}{\sqrt{\alpha}}\,p_{1/\alpha}(0),\qquad \forall\alpha>0.\] So, the maximum is attained at \(0\) and by periodicity at all points in \(\mathbb{Z}\) (or \(\delta\mathbb{Z}\)). As \(E_{\Phi}(\Gamma)\) builds the average of all values taken on \(\Gamma\) (see Figure 1), it readily follows (for \(\Gamma_{0}\) the maximum is attained at \(x_{j}=(j-1)/n\) as just proven) that \[\max_{x}f_{\alpha}(x)=\max_{x}\sum_{j=1}^{n}\theta_{\alpha}\left(x+x_{j} \right)\quad\text{ is minimal if and only if $\Gamma$ is equispaced.}\] This gives Proposition 1.1 as a simple consequence of the result in [19]. ## 5. Proof of the Main Result We start with an overall overview of the argument. It is fairly modular and the subsections reflect its overall structure. We also emphasize that, due to the fast decay of the Fourier coefficients, the argument is somewhat forgiving when it comes to polynomial estimates in the number of points. As a consequence, some of the subsequent proofs are given in its simplest rather than their optimal form. The main argument comes in two parts: the first part shows that optimizing configurations have to be exponentially close (in \(n\)) to the equispaced distribution. The structure of the first part is as follows. Figure 1. Illustration of the Cohn-Kumar result [19]. Building the average of \(p_{\alpha}(x)\) at the points \(\{x_{1},\dots,x_{n}\}\) (in this case \(n=3\)) for periodic, non-equispaced configuration always yields a larger value than for the equispaced points. As we sum \(n\) times the maximum in the equispaced case, it follows that the maximum of \(p_{\alpha}(x)\) is minimal only for the equispaced configuration. 1. SS5.1 uses some basic facts of theta-functions. We show that if the points are equispaced, then the minimum is attained exactly at the midpoints between the equispaced points. This then allows us to deduce \[\min_{x}\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right)=n-2ne^{-\pi \alpha n^{2}}+\mathcal{O}(ne^{-4\pi\alpha n^{2}})\] which already shows some of the difficulty: the difference between the average and the minimum can be super-exponentially small in \(n\). 2. 
§5.2 introduces a trivial \(L^{1}\)-estimate (essentially pigeonholing) and a non-trivial estimate: the McGehee-Pigno-Smith inequality [31]. 3. §5.3 combines these ingredients to prove that if \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\) is an optimal configuration (meaning one maximizing the minimum), then the first \(n-1\) Fourier coefficients of the measure \(\mu=\sum_{j=1}^{n}\delta_{x_{j}}\) must be small, more precisely \[\max_{1\leq|k|\leq n-1}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2000\cdot n^{2}\cdot e^{-\pi\alpha(2n-1)}.\] 4. We note that for equispaced points the first \(n-1\) Fourier coefficients all vanish. §5.4 proves a basic estimate, invoking the classical Fejer kernel, showing that if the first \(n-1\) Fourier coefficients of \(\mu\) are close to \(0\), then the \(n\) points are (quantitatively) close to \(n\) equispaced points. Since the estimate from (3) is extremely small, exponentially small in \(n\), we get that any optimal configuration has to be exponentially close to equispaced. The second part of the proof shows that the only configuration that is exponentially close (in \(n\)) to the equispaced distribution and has maximal polarization is the equispaced distribution: this part can be understood as a detailed analysis of the perturbative regime. The main idea lies in making the ansatz \(x_{j}=j/n+\varepsilon_{j}\) together with the explicit Fourier series representation \[\sum_{j=1}^{n}\theta\left(x-\frac{j}{n}+\varepsilon_{j}\right)=\sum_{k\in \mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}} e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}.\] Since the problem is invariant under shifts, we can (and have to) assume that \(\varepsilon_{1}+\cdots+\varepsilon_{n}=0\) to eliminate the invariance of the problem under translation. The argument is then structured as follows. 5. In §5.5 we show that the frequencies where \(k\) is a multiple of \(n\) are exactly the terms that contribute when the points are equispaced: among these frequencies only \(k\in\{-n,0,n\}\) have a sizeable contribution, the rest is small. The equispaced points yield \(n\) local minima and our goal is to show that at least one of these minima further decreases unless \(\varepsilon_{j}=0\) for all \(1\leq j\leq n\) (meaning the points are equispaced again). 6. We consider the trigonometric polynomial \(g_{1}(x)\) which is the restriction to the first \((n-1)/2\) frequencies. By a modified Poincare inequality, we will prove in §5.6 that any such trigonometric polynomial assumes a small negative value at at least one of the points of the form \((k+1/2)/n\), for \(0\leq k\leq n-1\). This negative contribution is going to make at least one of the minima much smaller. It remains to make sure that this cannot be counteracted by contributions coming from the other frequencies. 7. There are two remaining parts to analyze: \(g_{2}(x)\) defined by restricting summation to the frequencies \(n/2\leq|k|\leq n-1\) and \(h(x)\) for all the remaining frequencies. We will prove in §5.7 that \(\|g_{2}\|_{L^{\infty}},\|h\|_{L^{\infty}}\ll\|g_{1}\|_{L^{2}}\). Indeed, these terms are many orders of magnitude smaller. 8. 
The main ingredient for showing the last step is a surprising appearance of the _Discrete Fourier Transform_ (see §5.8) hidden in the Fourier coefficients: since the sum of the perturbations \(\varepsilon_{1}+\dots+\varepsilon_{n}=0\), we can approximate the Fourier coefficients whenever \(k\) is not a multiple of \(n\), as \[\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}=2\pi ik\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}+\mathcal{O}\left(k^{2}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right),\] where the sum is merely a Discrete Fourier Transform of the \(\varepsilon_{1},\dots,\varepsilon_{n}\). This allows us to deduce a certain type of symmetry (because the \(\varepsilon_{j}\) are real-valued) which will be used to prove \(\|g_{2}\|_{L^{\infty}}\ll\|g_{1}\|_{L^{2}}\). It also guarantees that not all Fourier coefficients are small (via a Plancherel identity). 9. The final inequality, established in §5.9, is, assuming the perturbations \(\varepsilon_{j}\) are exponentially close to \(0\), that the minimum \[Z=\min_{0\leq k\leq n-1}\sum_{j=1}^{n}\theta_{\alpha}\left(\frac{k+1/2}{n}-\frac{j}{n}+\varepsilon_{j}\right)\] satisfies \[Z\leq\left[\min_{0\leq x\leq 1}\sum_{j=1}^{n}\theta\left(x-\frac{j}{n}\right)\right]-Ce^{-\pi\alpha\left(\frac{n-1}{2}\right)^{2}}\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}\] which then forces all the perturbations to vanish. ## Part 1 of the proof ### Minimizer for equidistributed points We first prove that for equispaced points the minimum is attained exactly midway between two subsequent points. It is somewhat remarkable, and indicative of the difficulty of the problem, that even this very intuitive statement does not appear to have a very simple proof. **Proposition 5.1.1**.: _We have, for all \(0\leq\ell\leq n-1\)_ \[\min_{x\in\mathbb{T}}\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right)=\sum_{j=1}^{n}\theta_{\alpha}\left(\frac{\ell+1/2}{n}-\frac{j}{n}\right).\] Proof.: Suppose \(\{x_{1},\dots,x_{n}\}\subset[0,1]\) are equispaced points and \(x_{j}=j/n\). Then \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-x_{j}\right)=\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ik(x-x_{j})}=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right)e^{2\pi ikx}.\] As the points are equispaced, we have \[\sum_{j=1}^{n}e^{-2\pi ikx_{j}}=\begin{cases}n&\text{whenever }n\big{|}k\\ 0&\text{otherwise}.\end{cases}\] Therefore \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right)=n\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}n^{2}}e^{2\pi iknx}=n\cdot\theta(nx;n^{2}\alpha) \tag{5.1}\] We use the Jacobi triple product representation of the theta function \[\theta(x;\alpha)=\prod_{k\geq 1}\left(1-e^{-2k\pi\alpha}\right)\left(1+2\cos(2\pi x)e^{-(2k-1)\pi\alpha}+e^{-2(2k-1)\pi\alpha}\right).\] From this the following inequality is an immediate consequence after scaling: \[n\cdot\theta\left(\frac{1}{2n};n^{2}\alpha\right)\leq n\cdot\theta(x;n^{2}\alpha),\quad\forall\alpha>0,\] where equality holds if and only if \(x\in\frac{1}{n}\left(\mathbb{Z}+\frac{1}{2}\right)\). The result follows from (5.1). This fact will be used frequently since it allows for the natural point of comparison (see Figure 2). The next step consists in computing the actual size of the minimum.
Using, again, the fact that unit roots sum to \(0\) we end up with \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right) =n\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}n^{2}}e^{2\pi iknx}\] \[=n+2ne^{-\pi\alpha n^{2}}\cos\left(2\pi nx\right)+\mathcal{O}(ne^ {-4\pi\alpha n^{2}}).\] Figure 2: For the sum of equispaced periodic Gaussians the minimum is achieved midway between successive shifts. For sums of shifts by a general periodic configurations it is rather difficult to grasp the minimum. For the plot we have normalized the sum to oscillate around \(1\), i.e., the integral over \(1\) period is \(1\). Since we know from Proposition 5.1.1 that the minimum is attained exactly in the middle between two subsequent points, we have the explicit representation \[\min_{x}\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right) =\sum_{j=1}^{n}\theta_{\alpha}\left(\frac{1}{2n}-\frac{j}{n}\right)\] \[=n\sum_{\ell\in\mathbb{Z}}e^{-\pi\alpha\ell^{2}n^{2}}e^{2\pi i \ell n\frac{1}{2n}}=n+2n\sum_{\ell=1}^{\infty}(-1)^{\ell}e^{-\pi\alpha\ell^{2 }n^{2}}\] \[=n-2ne^{-\pi\alpha n^{2}}+\mathcal{O}(ne^{-4\pi\alpha n^{2}}).\] ### \(L^{1}\)-estimates We continue with a basic \(L^{1}\)-estimate and a not so basic \(L^{1}\)-estimate. The reason why \(L^{1}\) is a natural space to bound deviation from the mean is given by the following elementary pigeonhole argument. **Lemma 5.2.1**.: _Suppose \(g:[0,1]\to\mathbb{R}\) is a periodic, continuous function with mean value 0. Then_ \[\min_{0\leq x\leq 1}g(x)\leq-\frac{\|g\|_{L^{1}}}{2}.\] Proof.: Since \(g\) has mean value 0, we have \[\int_{0}^{1}\max(0,g(x))dx=-\int_{0}^{1}\min(0,g(x))dx\] and thus \[\int_{0}^{1}\min(0,g(x))dx=-\frac{\|g\|_{L^{1}}}{2}.\] The argument then follows from \[-\frac{\|g\|_{L^{1}}}{2}=\int_{0}^{1}\min(0,g(x))dx\geq\min_{0\leq x\leq 1}g(x).\] The second ingredient is an inequality of McGehee-Pigno-Smith [31] which arose in their solution of the Littlewood conjecture. **Theorem** (McGehee-Pigno-Smith [31]).: _For any set of integers \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{n}\) we have_ \[\int_{0}^{1}\left|\sum_{j=1}^{n}a_{j}e^{2\pi i\lambda_{j}t}\right|dt\geq\frac {1}{200}\sum_{j=1}^{n}\frac{|a_{j}|}{j}.\] McGehee-Pigno-Smith work over the interval \([0,2\pi]\) and show that the inequality holds with constant \(c=1/30\) which leads to \(1/(60\pi)\geq 1/200\) being an admissible constant when working over the interval \([0,1]\). Stegeman [43] showed that one can take \(c=4/\pi^{3}\) on \([0,2\pi]\) which would lead to a constant of \(1/50\) being admissible after rescaling to \([0,1]\). In any case, the precise value of the constant will not be of importance for the subsequent argument. We will use the McGehee-Pigno-Smith inequality to derive a lower bound on the \(L^{1}\)-norm of the deviation of the sum of Jacobi \(\theta\)-functions from their mean. We note that if the lower bound is large, then the \(L^{1}\)-norm is large and, as a consequence, the minimal value attained by the function has to be quite a bit smaller than its average. Since we want to avoid this, this will implicitly force the first few Fourier coefficients to be small. 
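As an aside for the reader who wants to experiment, the McGehee-Pigno-Smith inequality is easy to test numerically. The following Python sketch is not part of the argument; the random frequencies, coefficients and the quadrature grid are ad hoc choices of ours, and it simply compares both sides of the inequality with the constant \(1/200\) quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mps_check(n_terms=8, grid=20000):
    # random increasing integer frequencies lambda_1 < ... < lambda_n and complex coefficients
    lam = np.sort(rng.choice(np.arange(-50, 51), size=n_terms, replace=False))
    a = rng.normal(size=n_terms) + 1j * rng.normal(size=n_terms)
    t = np.linspace(0.0, 1.0, grid, endpoint=False)
    poly = (a[:, None] * np.exp(2j * np.pi * lam[:, None] * t[None, :])).sum(axis=0)
    lhs = np.abs(poly).mean()                                    # int_0^1 |sum_j a_j e^{2 pi i lambda_j t}| dt
    rhs = (np.abs(a) / np.arange(1, n_terms + 1)).sum() / 200.0  # (1/200) * sum_j |a_j| / j
    return lhs, rhs

for _ in range(5):
    lhs, rhs = mps_check()
    print(f"L1 norm = {lhs:.4f},  (1/200) sum |a_j|/j = {rhs:.4f},  inequality holds: {lhs >= rhs}")
```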
**Lemma 5.2.2**.: _We have, for all \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\) that_ \[\left\|\sum_{j=1}^{n}\theta_{\alpha}\left(x-x_{j}\right)-n\right\|_{L^{1}}\geq \frac{1}{400n}\sum_{\begin{subarray}{c}k\neq 0\\ |k|\leq n\end{subarray}}e^{-\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{ j}}\right|-\mathcal{O}(ne^{-\pi\alpha(n+1)^{2}}).\] Proof.: Our object of interest \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-x_{j}\right)=\sum_{k\in\mathbb{Z}}e^{-\pi \alpha k^{2}}\left(\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right)e^{2\pi ikx}\] is not quite of the required form since it is not a trigonometric polynomial. However, a simple application of the triangle inequality leads to \[\left\|\sum_{j=1}^{n}\theta_{\alpha}\left(x-x_{j}\right)-\sum_{|k|\leq n}e^{- \pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right)e^{2\pi ikx}\right\| _{L^{\infty}}\lesssim ne^{-\pi\alpha(n+1)^{2}}.\] We apply the McGehee-Pigno-Smith inequality to the trigonometric polynomial \[\left\|\sum_{|k|\leq n}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{-2\pi ikx_{ j}}\right)e^{2\pi ikx}-n\right\|_{L^{1}}\geq\frac{1}{400n}\sum_{\begin{subarray}{c}k \neq 0\\ |k|\leq n\end{subarray}}e^{-\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{ j}}\right|.\] Combined with the truncation error, this leads to the lower bound \[\left\|\sum_{j=1}^{n}\theta_{\alpha}\left(x-x_{j}\right)-n\right\|_{L^{1}} \geq\frac{1}{400n}\sum_{\begin{subarray}{c}k\neq 0\\ |k|\leq n\end{subarray}}e^{-\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{ j}}\right|-\mathcal{O}(ne^{-\pi\alpha(n+1)^{2}}).\] ### The first \(n-1\) Fourier coefficients are small The purpose of this section is to show that the first \(n-1\) Fourier coefficients of any minimizing configuration are exponentially small in \(n\). **Lemma 5.3.1**.: _Let \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\) be a configuration of points that maximizes the minimum. 
Then_ \[\max_{1\leq|k|\leq n-1}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2000n^{2}\cdot e^{-\pi\alpha(2n-1)}.\] Proof.: Combining Lemma 5.2.1 with Lemma 5.2.2, we deduce that for any set \(\{x_{1},\ldots,x_{n}\}\subset[0,1]\) the function \[f(x)=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right)e^{2\pi ikx}\] satisfies the inequality \[\min_{0\leq x\leq 1}f(x)-n\leq-\frac{1}{800n}\sum_{\begin{subarray}{c}k\neq 0\\ |k|\leq n\end{subarray}}e^{-\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|+\mathcal{O}(ne^{-\pi\alpha(n+1)^{2}}).\] We know that equispaced points satisfy \[\min_{0\leq x\leq 1}\sum_{j=1}^{n}\theta(x-x_{j})=n-2ne^{-\pi\alpha n^{2}}+\mathcal{O}(ne^{-4\pi\alpha n^{2}}).\] Therefore, if we now assume that \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\) is a configuration maximizing the minimum, we have that \[\min_{0\leq x\leq 1}\sum_{j=1}^{n}\theta(x-x_{j})\geq n-2ne^{-\pi\alpha n^{2}}+\mathcal{O}(ne^{-4\pi\alpha n^{2}}),\] which then implies \[\frac{1}{800n}\sum_{\begin{subarray}{c}k\neq 0\\ |k|\leq n\end{subarray}}e^{-\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2ne^{-\pi\alpha n^{2}}+\mathcal{O}(ne^{-\pi\alpha(n+1)^{2}}).\] This implies that for \(1\leq|k|\leq n\) and \(n\) sufficiently large (depending only on \(\alpha\)) \[\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2000n^{2}\cdot e^{-\pi\alpha(n^{2}-k^{2})}.\] This allows us to conclude that the first \(n-1\) Fourier coefficients of the measure given by the sum of the \(n\) Dirac measures in \(x_{1},\ldots,x_{n}\) are exponentially small \[\max_{1\leq|k|\leq n-1}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2000n^{2}\cdot e^{-\pi\alpha(2n-1)}. \tag{5.2}\] _Remark_.: We note that the proof actually shows quite a bit more since the last step of the argument is only sharp when \(k=n-1\). We note the stronger inequality \[\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq 2000n^{2}\cdot e^{-\pi\alpha(n^{2}-k^{2})}\] but this will not strictly be required in the remainder of the argument. ### The gaps are regular If we have \(n\) equispaced points, then the first \(n-1\) Fourier coefficients vanish. We prove a stability version of this statement: if the first \(n-1\) Fourier coefficients are small, the points are almost equispaced. **Lemma 5.4.1**.: _Suppose \(\{x_{1},\ldots,x_{n}\}\subset[0,1)\) has the property that_ \[\max_{1\leq|k|\leq n-1}\left|\sum_{j=1}^{n}e^{-2\pi ikx_{j}}\right|\leq\varepsilon.\] _Then, for \(\varepsilon>0\) sufficiently small, there exists a permutation \(\pi\in S_{n}\) and a global shift \(z\in[0,1]\) such that, for all \(1\leq j\leq n\),_ \[\left|x_{\pi(j)}-\frac{j}{n}-z\right|\leq\varepsilon.\] Proof.: We use the Fejer kernel \[F_{n}(x)=\sum_{|k|\leq n}\left(1-\frac{|k|}{n}\right)e^{2\pi ikx}=\frac{1}{n}\left(\frac{\sin\left(\pi nx\right)}{\sin\left(\pi x\right)}\right)^{2}\geq 0.\] Note that \(F_{n}(0)=n\).
Therefore \[\sum_{i,j=1}^{n}F_{n}(x_{i}-x_{j}) =\sum_{i,j=1}^{n}\sum_{|k|\leq n}\left(1-\frac{|k|}{n}\right)e^{2 \pi ik(x_{i}-x_{j})}\] \[=\sum_{|k|\leq n}\left(1-\frac{|k|}{n}\right)\sum_{i,j=1}^{n}e^{2 \pi ik(x_{i}-x_{j})}\] \[=\sum_{|k|\leq n}\left(1-\frac{|k|}{n}\right)\left|\sum_{j=1}^{n} e^{2\pi ikx_{j}}\right|^{2}.\] Hence, applying the assumption of the first \(n-1\) non-zero Fourier coefficients being small, we get \[\sum_{i,j=1}^{n}F_{n}(x_{i}-x_{j}) =n^{2}+\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}F_{n}(x_{i}-x_{j})=\sum_{|k|\leq n}\left(1-\frac{|k| }{n}\right)\left|\sum_{j=1}^{n}e^{2\pi ikx_{j}}\right|^{2}\] \[=n^{2}+\sum_{\begin{subarray}{c}|k|\leq n\\ k\neq 0\end{subarray}}\left(1-\frac{|k|}{n}\right)\left|\sum_{j=1}^{n}e^{2 \pi ikx_{j}}\right|^{2}\leq n^{2}+2n\varepsilon^{2}.\] From the above calculation we also conclude that, for any \(i\neq j\), \[F_{n}(x_{i}-x_{j})\leq\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}F_{n}(x_{i}-x_{j})\leq 2n\varepsilon^{2}.\] This inequality, by itself, is not tremendously powerful: we bound a term by a sum containing \(\sim n^{2}\) similar terms. However, we have the luxury that we will only apply the Lemma in a regime where \(\varepsilon\) is already exponentially small in \(n\) which allows for losses at a polynomial scale. The roots of \(F_{n}\) are exactly the points of the form \(i/n\) for \(1\leq i\leq n-1\). Since \[F_{n}(x)=\frac{1}{n}\left(\frac{\sin\left(\pi nx\right)}{\sin\left(\pi x \right)}\right)^{2}\] we have \[\frac{d^{2}}{dx^{2}}F_{n}(x)=\frac{2\pi^{2}\csc^{2}(\pi x)}{n}\,X\] where \[X=\left(n^{2}\cos^{2}(\pi nx)-\sin^{2}(\pi nx)\left(n^{2}-3\csc^{2}(\pi x)+2 \right)-2n\cot(\pi x)\sin(2\pi nx)\right).\] At points of the form \(x=i/n\) this expression simplifies to \[\frac{d^{2}}{dx^{2}}F_{n}(x)\Big{|}_{x=\frac{i}{n}}=2\pi^{2}n\csc^{2}\left( \frac{i\pi}{n}\right)\geq 2\pi^{2}n.\] Therefore, for \(y\) sufficiently small, we have \[F_{n}\left(\frac{j}{n}+y\right)\geq 2ny^{2}\] for all \(y\) sufficiently close to \(0\). A Taylor series expansion shows that the inequality is valid in a region around \(0\) that shrinks at most polynomial in \(n\) and from this we deduce the validity of the inequality for \(\varepsilon\) sufficiently small. The previous inequality \[F_{n}(x_{i}-x_{j})\leq\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}F_{n}(x_{i}-x_{j})\leq 2n\varepsilon^{2}\] implies that \(x_{i}-x_{j}\) has to be of the form \(x_{i}-x_{j}=k/n+\delta\) with some \(\delta\leq\varepsilon\). Moreover, since \(F_{n}(0)=n\) we can also deduce that \(|x_{i}-x_{j}|>1/2n\) (provided \(\varepsilon\) is sufficiently small) which then forces the existence of a global perturbation. ## Part 2 of the proof ### The Main Contribution We have \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}+\varepsilon_{j}\right) =\sum_{j=1}^{n}\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}e^{2\pi ik (x-\frac{j}{n}+\varepsilon_{j})}\] \[=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{ 2\pi ik(\varepsilon_{j}-\frac{j}{n})}\right)e^{2\pi ikx}\] \[=\sum_{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{ 2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}.\] We note that, as already above, when all the \(\varepsilon_{j}=0\), then \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right)=n+2ne^{-\pi\alpha n^{ 2}}\cos{(2\pi nx)}+\mathcal{O}(ne^{-4\pi\alpha n^{2}}).\] Then, the minimal value is very close to the mean value \(n\). 
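The expansion just restated is easy to confirm numerically, provided \(\alpha\) is taken small enough that the correction \(2ne^{-\pi\alpha n^{2}}\) is visible in double precision. The Python sketch below is only an illustration; the truncation of the theta series and the parameter values are ad hoc choices of ours.

```python
import numpy as np

def theta_alpha(x, alpha, K=60):
    # truncated periodized Gaussian: 1 + 2 * sum_{k=1}^{K} e^{-pi alpha k^2} cos(2 pi k x)
    k = np.arange(1, K + 1)
    return 1.0 + 2.0 * (np.exp(-np.pi * alpha * k**2)[:, None]
                        * np.cos(2 * np.pi * k[:, None] * np.asarray(x)[None, :])).sum(axis=0)

n, alpha = 4, 0.1
x = np.linspace(0.0, 1.0, 4001, endpoint=False)
F = sum(theta_alpha(x - j / n, alpha) for j in range(1, n + 1))   # equispaced points j/n
approx = n + 2 * n * np.exp(-np.pi * alpha * n**2) * np.cos(2 * np.pi * n * x)

print("max |F - approx|            :", np.abs(F - approx).max())
print("size of the next harmonic   :", 2 * n * np.exp(-4 * np.pi * alpha * n**2))
print("min F                       :", F.min())
print("n - 2n e^{-pi alpha n^2}    :", n - 2 * n * np.exp(-np.pi * alpha * n**2))
```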
It remains to show that small perturbations decrease the minimal value. Using Taylor series we note that \(k=n\) contributes \[e^{-\pi\alpha n^{2}}\left(\sum_{j=1}^{n}e^{2\pi in\varepsilon_{j }}\right)e^{2\pi inx} =e^{-\pi\alpha n^{2}}\left(n+2\pi in\sum_{j=1}^{n}\varepsilon_{j} +\mathcal{O}\left(n^{2}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)\right)e^{2\pi inx}\] \[=e^{-\pi\alpha n^{2}}ne^{2\pi inx}+\mathcal{O}\left(ne^{-\pi \alpha n^{2}}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)\] and the same contribution arises for \(k=-n\). Thus the three terms \[B=\sum_{k\in\{-n,0,n\}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik \varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}\] contribute, up to a small error term, the same quantity as the unperturbed case \(\varepsilon_{j}=0\) and \[B=n+2ne^{-\pi\alpha n^{2}}\cos\left(2\pi nx\right)+\mathcal{O}\left(ne^{-\pi \alpha n^{2}}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right).\] If it were the case that, for a suitable perturbation, the minimum increases, then this means that the remaining terms of the Fourier series have to be positive in all points of the form \((k+1/2)/n\), \(1\leq k\leq n-1\). The remainder of the argument is dedicated to showing that this cannot happen. ### A Trigonometric Lemma This section proves a self-contained Lemma, which shows that a trigonometric polynomial of degree at most \((n-1)/2\) assumes negative values at at least one of the points \((k+1/2)/n\), for \(0\leq k\leq n-1\). The obtained bound is likely far from optimal but suffices for our purpose. Indeed, the rapid decay of the Gaussian weight ensures that any type of polynomial bound would suffice for the remainder of the argument. **Lemma 5.6.1**.: _If \(f:[0,1]\to\mathbb{R}\) is a trigonometric polynomial of the form_ \[f(x)=\sum_{1\leq|j|\leq\frac{n-1}{2}}a_{j}e^{2\pi ijx}\quad\text{then}\quad \min_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right)\leq-\frac{\|f\|_{L^{2}([0, 1])}}{3n^{2}}.\] We note that the restriction on the frequency \(|j|\leq(n-1)/2\) is tight. Suppose \(n\) is even and consider the trigonometric polynomial \[f(x)=e^{2\pi i\frac{n}{2}x}+e^{-2\pi i\frac{n}{2}x}=2\cos\left(\pi nx\right)\] which satisfies \[f\left(\frac{k+1/2}{n}\right)=2\cos\left(\pi(k+1/2)\right)=0.\] Before stating the proof of Lemma 5.6.1, we establish one of the two main ingredients: a modified Poincare inequality for functions that do not quite vanish on the boundary. Needless to say, the tools and arguments used to establish this inequality are completely standard and we do not claim the inequality to be novel in any sense (but it is easier to prove than to find in the literature). **Proposition 5.6.2** (Modified Poincare Inequality).: _Let \(f:[a,b]\to\mathbb{R}\) be a differentiable function satisfying \(|f(a)|\leq M\) and \(|f(b)|\leq M\). Then_ \[\int_{a}^{b}f(x)^{2}dx \leq M^{2}(b-a)+2M\sqrt{b-a}\left(\frac{(b-a)^{2}}{\pi^{2}}\int _{a}^{b}f^{\prime}(x)^{2}dx\right)^{1/2}\] \[+\frac{(b-a)^{2}}{\pi^{2}}\int_{a}^{b}f^{\prime}(x)^{2}dx\] Proof.: We first note that replacing \(f(x)\) by \(|f(x)|\) does not change \(\|f\|_{L^{2}}\) and leaves \(\|f^{\prime}\|_{L^{2}}\) invariant. It thus suffices to prove the inequality for non-negative \(f(x)\). 
We proceed with basic facts: the first is the standard Poincare inequality, implying that if \(g:[c,d]\to\mathbb{R}\) satisfies \(g(c)=g(d)=0\) then \[\int_{c}^{d}g(x)^{2}dx\leq\frac{(d-c)^{2}}{\pi^{2}}\int_{c}^{d}g^{\prime}(x)^{ 2}dx.\] This one-dimensional inequality is sometimes known as the _Wirtinger inequality_ (for example in Blaschke's 1916 book _Kreis und Kugel_[11]). However, we note that it seems to have been discovered many times: for example, Hurwitz [28] already used it in his 1901 proof of the isoperimetric inequality. We refer to Payne-Weinberger [35] or work of the second author [42] for more on Poincare inequalities. This inequality then implies that \[\int_{a}^{b}(f(x)-M)^{2}\mathbf{1}_{f(x)\geq M}\,dx\leq\frac{(b-a)^{2}}{\pi^{2 }}\int_{a}^{b}f^{\prime}(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\] which we can square out and write as \[\int_{a}^{b}f(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx+M^{2}\,|\,\{f\geq M \}\,| \leq 2M\int_{a}^{b}f(x)\mathbf{1}_{f(x)\geq M}\,dx\] \[+\frac{(b-a)^{2}}{\pi^{2}}\int_{a}^{b}f^{\prime}(x)^{2}\mathbf{1 }_{f(x)\geq M}\,dx\] The first integral on the right-hand side can be bounded with Cauchy-Schwarz \[\int_{a}^{b}f(x)\mathbf{1}_{f(x)\geq M}\,dx\leq|\,\{f\geq M\}\,|^{1/2}\left( \int_{a}^{b}f(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\right)^{1/2}\] which leads to the estimate, abbreviating \(Z=\left(\int_{a}^{b}f(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\right)^{1/2}\), \[Z^{2}-2M\,|\,\{f\geq M\}\,|^{1/2}Z+M^{2}\,|\,\{f\geq M\}\,|\leq\frac{(b-a)^{2} }{\pi^{2}}\int_{a}^{b}f^{\prime}(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx.\] The left-hand side can be factored as \[(Z-M\,|\,\{f\geq M\}\,|^{1/2})^{2}\leq\frac{(b-a)^{2}}{\pi^{2}}\int_{a}^{b}f^ {\prime}(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\] and thus \[Z\leq M\,|\,\{f\geq M\}\,|^{1/2}+\left(\frac{(b-a)^{2}}{\pi^{2}}\int_{a}^{b}f ^{\prime}(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\right)^{1/2}\] We also have the trivial estimate \[\int_{a}^{b}f(x)^{2}\mathbf{1}_{f(x)\leq M}\,dx\leq\,|\,\{f(x)\leq M\}\,|M^{2}.\] Adding the last estimate to the square of the penultimate estimate and using \(|\,\{f\geq M\}\,|\leq b-a\), we arrive at \[\int_{a}^{b}f(x)^{2}dx \leq M^{2}(b-a)+2M(b-a)^{1/2}\left(\frac{(b-a)^{2}}{\pi^{2}}\int_ {a}^{b}f^{\prime}(x)^{2}\mathbf{1}_{f(x)\geq M}\,dx\right)^{1/2}\] \[\quad+\frac{(b-a)^{2}}{\pi^{2}}\int_{a}^{b}f^{\prime}(x)^{2} \mathbf{1}_{f(x)\geq M}\,dx.\] Proof of Lemma 5.6.1.: The minimum is necessarily \(\leq 0\) since \[\min_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right) \leq\frac{1}{n}\sum_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right)\] \[=\frac{1}{n}\sum_{k=0}^{n-1}\sum_{1\leq|j|\leq n-1}a_{j}e^{2\pi ij (k+1/2)/n}\] \[=\frac{1}{n}\sum_{1\leq|j|\leq n-1}\sum_{k=0}^{n-1}a_{j}e^{2\pi i \frac{j}{2n}}e^{2\pi ijk/n}\] \[=\frac{1}{n}\sum_{1\leq|j|\leq n-1}a_{j}e^{2\pi i\frac{j}{2n}} \sum_{k=0}^{n-1}e^{2\pi ijk/n}=0.\] Let us now assume that the minimum is negative but very close to \(0\) \[X=\min_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right)\leq 0.\] Roots of unity summing to \(0\) then shows, just as above, that \[0=\sum_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right)\geq nX+\max_{0\leq k \leq n-1}f\left(\frac{k+1/2}{n}\right)\] from which we deduce \[\max_{0\leq k\leq n-1}f\left(\frac{k+1/2}{n}\right)\leq(-X)n.\] Using this in combination with the modified Poincare inequality with \(M=n|X|\) we deduce \[\int_{\frac{k+1/2}{n}}^{\frac{k+3/2}{n}}f(x)^{2}dx \leq nX^{2}+2|X|\sqrt{n}\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{\frac{k +1/2}{n}}^{\frac{k+3/2}{n}}f^{\prime}(x)^{2}dx}\] 
\[+\frac{1}{n^{2}\pi^{2}}\int_{\frac{k+1/2}{n}}^{\frac{k+3/2}{n}}f ^{\prime}(x)^{2}dx.\] Summing over all the intervals (periodically interpreted), we get \[\int_{0}^{1}f(x)^{2}\,dx \leq n^{2}X^{2}+\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{\prime}(x)^ {2}\,dx\] \[+2|X|\sqrt{n}\sum_{k=0}^{n-1}\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{ \frac{k+1/2}{n}}^{\frac{k+3/2}{n}}f^{\prime}(x)^{2}\,dx}\] As for the remaining sum, we use the Cauchy-Schwarz inequality to bound \[\sum_{k=0}^{n-1}1\cdot\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{\frac{k+1 /2}{n}}^{\frac{k+3/2}{n}}f^{\prime}(x)^{2}\,dx} \leq\sqrt{n}\sqrt{\sum_{k=0}^{n-1}\frac{1}{n^{2}\pi^{2}}\int_{ \frac{k+1/2}{n}}^{\frac{k+3/2}{n}}f^{\prime}(x)^{2}\,dx}\] \[=\sqrt{n}\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{\prime}(x)^{2 }\,dx}\] Altogether, this implies \[\int_{0}^{1}f(x)^{2}\,dx\leq n^{2}X^{2}+\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{ \prime}(x)^{2}\,dx+2|X|n\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{\prime}(x)^{2 }\,dx}.\] As \(f\) is a trigonometric polynomial of degree at most \((n-1)/2\), we have \[\int_{0}^{1}f(x)^{2}dx=\sum_{1\leq|j|\leq\frac{n-1}{2}}|a_{j}|^{2}\] as well as \[\int_{0}^{1}f^{\prime}(x)^{2}dx =\sum_{1\leq|j|\leq\frac{n-1}{2}}(2\pi j)^{2}|a_{j}|^{2}\] \[\leq(n-1)^{2}\pi^{2}\sum_{1\leq|j|\leq\frac{n-1}{2}}|a_{j}|^{2}= (n-1)^{2}\pi^{2}\int_{0}^{1}f(x)^{2}\,dx.\] Plugging this in, we get \[\int_{0}^{1}f(x)^{2}dx \leq\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{\prime}(x)^{2}\,dx+X^{2} n^{2}+2|X|\sqrt{\frac{1}{n^{2}\pi^{2}}\int_{0}^{1}f^{\prime}(x)^{2}\,dx}\] \[\leq\left(1-\frac{1}{n}\right)^{2}\int_{0}^{1}f(x)^{2}\,dx+X^{2} n^{2}+2|X|n\left(1-\frac{1}{n}\right)\sqrt{\int_{0}^{1}f(x)^{2}\,dx}.\] For an arbitrary parameter \(\gamma>0\), the inequality \[\gamma^{2}\leq\left(1-\frac{1}{n}\right)^{2}\gamma^{2}+X^{2}n^{2}+2|X|n\left( 1-\frac{1}{n}\right)\gamma\] can be simplified using \((1-1/n)^{2}\leq 1-1/n\) and \((1-1/n)\leq 1\) to imply that \[X^{2}n^{2}+2|X|n\gamma-\frac{\gamma^{2}}{n}\geq 0\] which, by solving the quadratic equation can be seen to imply that \[|X|\geq\frac{\sqrt{n^{2}+n}-n}{n^{2}}\gamma\geq\frac{\gamma}{3n^{2}}\] which is the desired result. _Remark_. Much of the difficulty comes from the fact that we only evaluate the trigonometric polynomial in equispaced points. If one was just interested in the minimum being small in _some_ place, there is a very elementary argument which we conclude for the sake of context. **Lemma 5.6.3**.: _Let \(f(x)=\sum_{1\leq|j|\leq n-1}a_{j}e^{2\pi ijx}\) be a real-valued trigonometric polynomial. Then_ \[\min_{0\leq x\leq 1}f(x)\leq-\frac{\|f\|_{L^{2}}}{3\sqrt{n}}.\] Proof.: We also have the trivial estimate \[\int_{0}^{1}f(x)^{2}\leq\|f\|_{L^{\infty}}\|f\|_{L^{1}}\quad\text{and thus} \quad\|f\|_{L^{1}}\geq\frac{\|f\|_{L^{2}}^{2}}{\|f\|_{L^{\infty}}}.\] Appealing to Lemma 5.2.1, we deduce \[\min_{0\leq x\leq 1}f(x)\leq-\frac{1}{2}\|f\|_{L^{1}}\leq-\frac{1}{2}\frac{\|f\| _{L^{2}}^{2}}{\|f\|_{L^{\infty}}}.\] We have, via Plancherel, that \[\|f\|_{L^{2}}^{2}=\sum_{1\leq j\leq n-1}|a_{j}|^{2}\] and, via the triangle inequality and Cauchy-Schwarz, that \[\|f\|_{L^{\infty}}\leq\sum_{1\leq j\leq n-1}|a_{j}|\leq\sqrt{2n}\left(\sum_{1 \leq j\leq n-1}|a_{j}|^{2}\right)^{1/2}=\sqrt{2n}\|f\|_{L^{2}}.\] From this and \(2\sqrt{2}\leq 3\) the result follows. ### Outline of the remaining argument In this section we outline how the argument will be concluded. 
We first recall that \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}+\varepsilon_{j}\right)=\sum _{k\in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik\varepsilon _{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}.\] The main contribution is coming from the three terms \(k\in\{-n,0,n\}\) \[B=\sum_{k\in\{-n,0,n\}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik \varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}\] which contribute \[B=n+2ne^{-\pi\alpha n^{2}}\cos\left(2\pi nx\right)+\mathcal{O}\left(e^{-\pi \alpha n^{2}}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right).\] We will choose to sum over even more terms (even though they are rather small), namely \(k\in n\mathbb{Z}\), so as to allow for a comparison to the minimal value attained by equidistributed points. For this purpose we set \[A(x) =\sum_{k\in n\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e ^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}\] \[=\sum_{k\in n\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e ^{2\pi ik\varepsilon_{j}}\right)e^{2\pi ikx},\] where the simplification comes from the fact that these exponential expressions are all \(1\) when \(k\) is a multiple of \(n\). In particular, all the Fourier coefficients are reasonably close to \(n\). More precisely, using again that the sum over all displacements \(\varepsilon_{j}\) sums to \(0\), from \[\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}=n+\sum_{j=1}^{n}\left(e^{2\pi ik \varepsilon_{j}}-1\right)=n+\sum_{j=1}^{n}\left(e^{2\pi ik\varepsilon_{j}}-1 -2\pi ik\varepsilon_{j}\right).\] We have, for all \(x\in\mathbb{R}\) that \(|e^{ix}-1-ix|\leq x^{2}\) and thus \[\left|-n+\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}\right|\leq 4\pi^{2}k^{2}\sum_{j =1}^{n}\varepsilon_{j}^{2}.\] Combining this with \[\sum_{\begin{subarray}{c}k\in n\mathbb{Z}\\ |k|>n\end{subarray}}k^{2}e^{-\pi\alpha k^{2}}\lesssim n^{2}e^{-\pi\alpha n^{2}}\] we deduce that \[A(x)=\sum_{j=1}^{n}\theta\left(x-\frac{j}{n}\right)+\mathcal{O}\left(n^{2}e^{ -\pi\alpha n^{2}}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right).\] It is our goal to show that the perturbation induced by \(\varepsilon_{j}\neq 0\) has to decrease the value in at least one of the minima. To this end, we split the function as \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}+\varepsilon_{j}\right)=A(x) +g_{1}(x)+g_{2}(x)+h(x),\] where \(A\) sums over all multiples of \(n\), \(g_{1}\) sums over the first \((n-1)/2\) frequencies, \(g_{2}\) sums frequencies between \((n-1)/2\) and \(n-1\) and \(h\) sums over the rest, frequencies larger than \(n\) and where \(n\) does not divide \(k\). Thus \[g_{1}(x)=\sum_{1\leq|k|\leq\frac{n-1}{2}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1} ^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}\] while \(g_{2}\) sums over the remaining small frequencies \[g_{2}(x)=\sum_{\begin{subarray}{c}\frac{n-1}{2}<|k|\leq n-1\\ \end{subarray}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik\varepsilon_ {j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx},\] and \(h\) sums over the remaining terms \[h(x)=\sum_{\begin{subarray}{c}|k|\geq n+1\\ n\nmid k\end{subarray}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik \varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}.\] The remaining argument proceeds as follows 1. We show, in the next section, that \(\|g_{1}\|_{L^{2}}\) is not too small (in terms of \(\sum_{j=1}^{n}\varepsilon_{j}^{2}\)). The discrete Fourier transform naturally arises in the process. 2. 
Lemma 5.6.1 then implies that \[\min_{0\leq k\leq n-1}g_{1}\left(\frac{k+1/2}{n}\right)\leq-\frac{\|g_{1}\|_{ L^{2}}}{3n^{2}}\] is fairly negative. 3. We show \(\|g_{2}\|_{L^{\infty}}\ll\|g_{1}\|_{L^{2}}\) (which follows again from the properties of the discrete Fourier transform) and that the same is true for \(h\). 4. Thus the sum of the three terms is fairly negative in at least one of the points of the form \((k+1/2)/n\) and this then implies the result. ### Discrete Fourier Transform We recall again that \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}+\varepsilon_{j}\right)=\sum_{k \in\mathbb{Z}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j} }e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}.\] We also note that the \(\varepsilon_{j}\) are fairly small: (5.2) together with the proof of Lemma 5.4.1 gives \[\max_{1\leq j\leq n}|\varepsilon_{j}|=\mathcal{O}_{\alpha}\left(n^{2}\cdot e^{ -2\pi\alpha n}\right),\] where the implicit constant depends on \(\alpha\). As it turns out, since these are exponentially small in \(n\), the basic Taylor expansion \[e^{2\pi ik\varepsilon_{j}}=1+2\pi ik\varepsilon_{j}+\mathcal{O}(k^{2} \varepsilon_{j}^{2})\] is highly accurate and we deduce, as long as \(k\) is not a multiple of \(n\), that \[\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}=2\pi ik\sum_{ j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}+\mathcal{O}\left(k^{2}\sum_{j=1} ^{n}\varepsilon_{j}^{2}\right).\] We observe that this is, up to various types of rescaling, simply a discrete Fourier transform (DFT) of \((\varepsilon_{1},\ldots,\varepsilon_{n})\). Since the \(\varepsilon_{j}\) are all real-valued, we have the symmetry \[\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{(n-j)}{n}}\right|=\left| \sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|\] The DFT preserves the \(\ell^{2}\)-norm and therefore \[n\sum_{j=1}^{n}\varepsilon_{j}^{2}=\sum_{k=1}^{n-1}\left|\sum_{j=1}^{n} \varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|^{2}\] where we omit the \(k=0\) term because \(\varepsilon_{1}+\cdots+\varepsilon_{n}=0\). This implies \[\sum_{j=1}^{n}\varepsilon_{j}^{2}=\frac{1}{n}\sum_{k=1}^{n-1}\left|\sum_{j=1} ^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|^{2}\leq\max_{1\leq k\leq n-1 }\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|^{2}.\] This immediately implies that at least one Fourier coefficient is large \[\max_{1\leq k\leq n-1}\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j} {n}}\right|\geq\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}\gg \mathcal{O}\left(n^{2}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)\] and, in particular, is many orders of magnitude larger than the error terms. 
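The two elementary facts used here -- the Plancherel identity for the DFT (with the \(k=0\) term absent because the \(\varepsilon_{j}\) sum to zero) and the reflection symmetry of the DFT of a real-valued sequence -- can be checked in a few lines of Python; the mean-zero perturbation below is an arbitrary random choice made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
eps = rng.normal(scale=1e-3, size=n)
eps -= eps.mean()                          # enforce eps_1 + ... + eps_n = 0

idx = np.arange(1, n + 1)                  # j = 1, ..., n as in the text
hat = np.array([np.sum(eps * np.exp(-2j * np.pi * k * idx / n)) for k in range(n)])

print("|hat(0)| = |sum of eps_j|         :", abs(hat[0]))
print("Plancherel, n * sum eps_j^2       :", n * np.sum(eps**2))
print("            sum_{k=1}^{n-1} |.|^2 :", np.sum(np.abs(hat[1:])**2))
print("reflection |hat(n-k)| = |hat(k)|  :", np.allclose(np.abs(hat[1:]), np.abs(hat[1:][::-1])))
print("max_k |hat(k)| >= (sum eps^2)^1/2 :", np.abs(hat[1:]).max() >= np.sqrt(np.sum(eps**2)))
```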
### The final estimates This also implies, using the Plancherel identity, that \(g_{1}\) is large in \(L^{2}\) since \[\|g_{1}(x)\|_{L^{2}} =\left\|\sum_{1\leq|k|\leq\frac{n-1}{2}}e^{-\pi\alpha k^{2}}\left(\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right)e^{2\pi ikx}\right\|_{L^{2}}\] \[=\left(\sum_{1\leq|k|\leq\frac{n-1}{2}}e^{-2\pi\alpha k^{2}}\left|\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right|^{2}\right)^{1/2}.\] The worst case is when most of the Fourier energy is localized at high frequencies and thus we can remove the smallest weight and deduce \[\|g_{1}(x)\|_{L^{2}}\geq e^{-\pi\alpha\left(\frac{n-1}{2}\right)^{2}}\left(\sum_{1\leq|k|\leq\frac{n-1}{2}}\left|\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right|^{2}\right)^{1/2}.\] At this point, we can invoke a Taylor expansion and argue that \[\|g_{1}(x)\|_{L^{2}}\geq e^{-\pi\alpha\left(\frac{n-1}{2}\right)^{2}}\left(\sum_{1\leq|k|\leq\frac{n-1}{2}}\Bigg{|}\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}+\mathcal{O}\left(k^{2}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)\Bigg{|}^{2}\right)^{1/2}.\] At this point, the argument from the previous section comes into play: we do not have information about any individual Fourier coefficient but we know that at least one of them is large \[\max_{1\leq k\leq n-1}\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|\geq\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}\gg\mathcal{O}\left(n^{2}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)\] and thus, for \(n\) sufficiently large, \[\|g_{1}(x)\|_{L^{2}}\geq\frac{e^{-\pi\alpha\left(\frac{n-1}{2}\right)^{2}}}{2}\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}.\] It is rather easy to show that \(g_{2}\) is many orders of magnitude smaller than \(g_{1}\) as the Fourier coefficients are very nearly the same. Since the discrete Fourier transform of a real-valued sequence has the symmetry \[\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi i(n-k)\frac{j}{n}}\right|=\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n}}\right|,\] the same Plancherel argument shows that, for \(n\) sufficiently large, \[\|g_{2}\|_{L^{2}}\leq 2e^{-\pi\alpha\left(\frac{n}{2}\right)^{2}}\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}.\] This is exponentially smaller than \(\|g_{1}\|_{L^{2}}\). We will require pointwise estimates for what follows. However, the decay is sufficiently strong so that we can simply take a triangle inequality.
Using again the cancellation of the sum of roots of unity together with \(k^{2}\varepsilon_{j}^{2}\ll|k\varepsilon_{j}|\) we have, for \(n\) sufficiently large, \[\left|\sum_{j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j} {n}}\right| =\left|\sum_{j=1}^{n}(1+2\pi ik\varepsilon_{j}+\mathcal{O}(k^{2} \varepsilon_{j}^{2}))e^{-2\pi ik\frac{j}{n}}\right|\] \[=\left|2\pi ik(1+o(1))\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik \frac{j}{n}}\right|\leq 10k\left|\sum_{j=1}^{n}\varepsilon_{j}e^{-2\pi ik\frac{j}{n} }\right|.\] We deduce, since \(k>(n-1)/2\) and thus \(k\geq n/2\), that for \(n\) sufficiently large, \[\|g_{2}\|_{L^{\infty}} \leq\sum_{\frac{n-1}{2}<|k|\leq n-1}e^{-\pi\alpha k^{2}}\left|\sum_ {j=1}^{n}e^{2\pi ik\varepsilon_{j}}e^{-2\pi ik\frac{j}{n}}\right|\] \[\leq 10\sum_{\frac{n-1}{2}<|k|\leq n-1}e^{-\pi\alpha k^{2}}\,k \sum_{j=1}^{n}|\varepsilon_{j}|\] \[\leq 20ne^{-\pi\alpha(n/2)^{2}}\sum_{j=1}^{n}|\varepsilon_{j}|\leq 2 0n^{3/2}e^{-\pi\alpha(n/2)^{2}}\left(\sum_{j=1}^{n}\varepsilon_{j}^{2} \right)^{1/2}\ll\frac{\|g_{1}\|_{L^{2}}}{n^{100}}.\] The same argument applies to \(h\) and we deduce, again for \(n\) sufficiently large, \[\|h\|_{L^{\infty}}\ll\frac{\|g_{1}\|_{L^{2}}}{n^{100}}.\] We can now conclude the argument \[\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}+\varepsilon_{j} \right)=A(x)+g_{1}(x)+g_{2}(x)+h(x)\] \[=\sum_{j=1}^{n}\theta_{\alpha}\left(x-\frac{j}{n}\right)+\mathcal{ O}\left(ne^{-\pi\alpha n^{2}}\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)+g_{1}(x)+g_ {2}(x)+h(x).\] Applying all the prior results, for \(n\) sufficiently large, we get \[\min_{x}f(x) \leq\min_{0\leq k\leq n-1}\sum_{j=1}^{n}\theta_{\alpha}\left( \frac{k+1/2}{n}-\frac{j}{n}+\varepsilon_{j}\right)\] \[\leq\left[\min_{0\leq x\leq 1}\sum_{j=1}^{n}\theta_{\alpha}\left(x -\frac{j}{n}\right)\right]+\mathcal{O}\left(ne^{-\pi\alpha n^{2}}\sum_{j=1}^{ n}\varepsilon_{j}^{2}\right)\] \[\qquad-\frac{\|g_{1}\|_{L^{2}}}{n^{3/2}}+\|g_{2}\|_{L^{\infty}}+ \|h\|_{L^{\infty}}\] \[\leq\left[\min_{0\leq x\leq 1}\sum_{j=1}^{n}\theta_{\alpha}\left(x -\frac{j}{n}\right)\right]-\frac{1}{2}\frac{\|g_{1}\|_{L^{2}}}{n^{3/2}}.\] Recalling that \[\|g_{1}(x)\|_{L^{2}}\geq\frac{e^{-\pi\alpha\left(\frac{n-1}{2} \right)^{2}}}{2}\left(\sum_{j=1}^{n}\varepsilon_{j}^{2}\right)^{1/2}\] we deduce that the minimal value of \(f(x)\) is maximal if and only if \[\varepsilon_{1}=\varepsilon_{2}=\cdots=\varepsilon_{n}=0.\] As the equidistributed points provide that the minimum is taken exactly in between them, we obtain equality in the last calculation and, hence, derive our main result.
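To close the discussion of the proof, the statement itself can also be observed numerically for small \(\alpha\) (for larger \(\alpha\) the effect of a perturbation falls below double precision). The Python sketch below, with ad hoc parameters and an arbitrary mean-zero perturbation of our own choosing, compares the minimum of the sum of shifted theta functions for equispaced and for perturbed points.

```python
import numpy as np

def theta_alpha(x, alpha, K=60):
    # truncated periodized Gaussian sum_{|k|<=K} e^{-pi alpha k^2} e^{2 pi i k x}, in real form
    k = np.arange(1, K + 1)
    return 1.0 + 2.0 * (np.exp(-np.pi * alpha * k**2)[:, None]
                        * np.cos(2 * np.pi * k[:, None] * np.asarray(x)[None, :])).sum(axis=0)

def min_of_sum(points, alpha, grid=40001):
    x = np.linspace(0.0, 1.0, grid, endpoint=False)
    return sum(theta_alpha(x - p, alpha) for p in points).min()

rng = np.random.default_rng(2)
n, alpha = 4, 0.1
equispaced = np.arange(1, n + 1) / n
eps = rng.normal(scale=1e-2, size=n)
eps -= eps.mean()                                   # mean-zero perturbation

m_eq = min_of_sum(equispaced, alpha)
m_pert = min_of_sum(equispaced + eps, alpha)
print("min over x, equispaced points  :", m_eq)
print("min over x, perturbed points   :", m_pert)
print("perturbation lowers the minimum:", m_pert < m_eq)
```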
2301.12907
Logarithmic stability estimates for initial data in Ornstein-Uhlenbeck equation on $L^2$-space
In this paper, we continue the investigation on the connection between observability and inverse problems for a class of parabolic equations with unbounded first order coefficients. We prove new logarithmic stability estimates for a class of initial data in the Ornstein-Uhlenbeck equation posed on $L^2\left(\mathbb{R}^N\right)$ with respect to the Lebesgue measure. The proofs combine observability and logarithmic convexity results that include a non-analytic semigroup case. This completes the picture of the recent results obtained for the analytic Ornstein-Uhlenbeck semigroup on $L^2$-space with invariant measure.
S. E. Chorfi, L. Maniar
2023-01-30T14:10:57Z
http://arxiv.org/abs/2301.12907v2
# Logarithmic stability estimates for initial data in Ornstein-Uhlenbeck equation on \(L^{2}\)-space ###### Abstract. In this paper, we continue the investigation on the connection between observability and inverse problems for a class of parabolic equations with unbounded first order coefficients. We prove new logarithmic stability estimates for a class of initial data in the Ornstein-Uhlenbeck equation posed on \(L^{2}\left(\mathbb{R}^{N}\right)\) with respect to the Lebesgue measure. The proofs combine observability and logarithmic convexity results that include a non-analytic semigroup case. This completes the picture of the recent results obtained for the analytic Ornstein-Uhlenbeck semigroup on \(L^{2}\)-space with invariant measure. Key words and phrases: Observability, controllability, inverse problem, logarithmic convexity, Ornstein-Uhlenbeck equation 2020 Mathematics Subject Classification: 93B07, 35R30, 93B05, 35K65 ## 1. Introduction Let \(N\geq 1\) be an integer and let \(\theta>0\) be a fixed time. We consider the Ornstein-Uhlenbeck equation given by \[\begin{cases}\partial_{t}u=\Delta u+Bx\cdot\nabla u,&0<t<\theta,\ \ x\in \mathbb{R}^{N},\\ u|_{t=0}=u_{0}(x),&x\in\mathbb{R}^{N},\end{cases} \tag{1}\] where \(B\) is a real constant \(N\times N\)-matrix, \(u_{0}\in L^{2}\left(\mathbb{R}^{N}\right)\) is an unknown initial datum, and the dot sign denotes the inner product in \(\mathbb{R}^{N}\). In the present paper, we are concerned with the following problem: **Inverse initial data problem.** Determine the unknown initial datum \(u_{0}\) belonging to an admissible set \(\mathcal{I}\) from the measurement \[u|_{(0,\theta)\times\omega},\] where \(\omega\subset\mathbb{R}^{N}\) is an observation region that will be given later. Two problems are to be answered: * Uniqueness: for two initial data \(u_{0}\) and \(u_{1}\), does the equality \(u(u_{0})=u(u_{1})\) in \((0,\theta)\times\omega\) imply \(u_{0}=u_{1}\) in \(L^{2}\left(\mathbb{R}^{N}\right)\)? * Stability: is it possible to estimate \(\|u_{0}-u_{1}\|_{L^{2}(\mathbb{R}^{N})}\) by a suitable norm of the quantity \((u(u_{0})-u(u_{1}))|_{(0,\theta)\times\omega}\)? In the parabolic framework, one expects a stability estimate of logarithmic type, i.e., \[\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}\leq\frac{c}{\left|\log\|u\|\right|^{\alpha}} \tag{2}\] for a constant \(c>0\), some \(\alpha\in(0,1]\), and a suitable norm \(\|u\|\) of \(u|_{\omega}\), typically \(\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}\) or \(\|u\|_{H^{1}(0,\theta;L^{2}(\omega))}\), valid for all \(u_{0}\in\mathcal{I}\). The estimate (2) is commonly referred to as conditional stability since it depends on the admissible set \(\mathcal{I}\). Note that the rate of conditional stability depends on the choice of the admissible set. Also, conditional stability is very useful when dealing with the numerical reconstruction of initial data, see for instance [17, 32]. The Ornstein-Uhlenbeck equation (1) appears in several fields of research and applications including stochastic processes, Malliavin calculus, theory of quantum fields as well as control theory. We refer to [6, 7, 19, 20, 21, 22, 23, 16] and the references therein for more details. The associated Ornstein-Uhlenbeck semigroup has been extensively studied in two spaces: the space \(L^{2}\left(\mathbb{R}^{N},\mathrm{d}x\right)\) (with the Lebesgue measure, shortly denoted by \(L^{2}\left(\mathbb{R}^{N}\right)\)) and \(L^{2}\left(\mathbb{R}^{N},\mathrm{d}\mu\right)\) with the so-called invariant measure \(\mu\).
The existence of \(\mu\) is equivalent to the fact that the spectrum of the matrix \(B\) lies in the left half-plane, i.e., \[\sigma(B)\subset\mathbb{C}_{-}:=\{\lambda\in\mathbb{C}\colon\operatorname{Re} \lambda<0\}.\] The semigroup enjoys different properties with respect to the previous spaces. It is a \(C_{0}\)-semigroup on \(L^{2}\left(\mathbb{R}^{N},\mathrm{d}x\right)\) which is not analytic (it is not an eventually norm continuous semigroup), unless \(B=0\), whereas its realization in \(L^{2}\left(\mathbb{R}^{N},\mathrm{d}\mu\right)\) is an analytic \(C_{0}\)-semigroup with angle depending on the matrix \(B\). As for inverse problems, parabolic equations with bounded coefficients have been extensively studied, see for instance [13, 14, 17, 31]. Surprisingly, there are only a few researches for parabolic equations with unbounded coefficients, especially for the Ornstein-Uhlenbeck equation. We are only aware of the paper by Lorenzi [18], where the author studied the identification (existence and uniqueness) of a constant \(\alpha\in\mathbb{R}\) such that \(B=\alpha B_{0}\) in (1), where the matrix \(B_{0}\) and some prescribed data are known. This inverse problem was motivated by the determination of the matrix \(B\) since its properties play a crucial role in the study of the Ornstein-Uhlenbeck equation. Inverse problems are closely related to controllability and observability questions. The link between the two theories was initially observed in the case of a hyperbolic system in [27]. Then many stability estimates have been proven for inverse problems by using observability inequalities. Such inverse problems for parabolic systems are highly ill-posed in the sense of Hadamard, so one cannot expect strong stability in general. In this context, the logarithmic convexity method has been successfully applied to prove logarithmic stability for initial data. Many works have been done for second-order parabolic equations with bounded coefficients [13, 14, 30, 31], and more generally for some abstract systems governed by analytic semigroups of angle \(\frac{\pi}{2}\) as in the self-adjoint case; see [11]. We emphasize that the logarithmic stability for initial data in a parabolic equation with _unbounded coefficients_ has not been studied before, as far as we know. The prototype of such equations is given by the Ornstein-Uhlenbeck equation on \(L^{2}\)-spaces, which contains an unbounded gradient term. Recently, the authors of [3] have investigated the case of analytic semigroups and established an abstract stability result. Consequently, the stability problem for the Ornstein-Uhlenbeck equation has been recently resolved on \(L^{2}\left(\mathbb{R}^{N},\mathrm{d}\mu\right)\) with the invariant measure when \(\sigma(B)\subset\mathbb{C}_{-}\). Therein, the unbounded gradient coefficient is handled thanks to the exponential decay of the measure density. In that case, the semigroup is analytic and the angle of analyticity is generally smaller than \(\frac{\pi}{2}\). The developed approach does not work on \(L^{2}\left(\mathbb{R}^{N}\right)\) due to the lack of analyticity of the semigroup. The logarithmic convexity estimate has been initially known for self-adjoint operators since the work of Agmon and Nirenberg [1], and has been generalized to analytic semigroups in [15] (see also [24]). 
In our non-analytic framework, we exploit the explicit representation formula of the semigroup to prove a logarithmic convexity estimate, and then deduce a logarithmic stability for initial data using final state observability with respect to the \(L^{2}\left(\mathbb{R}^{N}\right)\)-norm. We mainly aim to clarify the relations between the final state observability, logarithmic convexity and stability estimates for initial data in the Ornstein-Uhlenbeck equation. Such a relation is by now fully understood in the analytic framework of parabolic equations with bounded coefficients. We start this article by gathering some useful facts on the Ornstein-Uhlenbeck equation (Section 2). In Section 3, we first prove a logarithmic convexity estimate for the Ornstein-Uhlenbeck equation (1) exploiting the explicit form of the associated semigroup. Then we recall some recent results concerning the final state observability of (1). Afterward in Section 4, we show that the logarithmic convexity along with the observability imply some logarithmic stability estimates for certain classes of initial data. These results allow us in particular to extend some known results for the heat equation on \(\mathbb{R}^{N}\). Finally, Section 5 is devoted to some final comments and open problems related to the stability estimates for initial data in parabolic equations with unbounded coefficients. ## 2. Miscellaneous facts on the Ornstein-Uhlenbeck semigroup We summarize some useful facts about the Ornstein-Uhlenbeck semigroup on \(L^{2}\left(\mathbb{R}^{N}\right)\) for future use. We refer to [21, 23] and the references therein for detailed proofs. Denote by \(I\) the identity operator and by \(\|\cdot\|\) the standard norm in \(L^{2}\left(\mathbb{R}^{N}\right)\). The Ornstein-Uhlenbeck operator is the sum of the diffusion term and the drift term. ### The diffusion part Let us first consider the diffusion term. It is well known that the Laplace operator \(\Delta\) on \(L^{2}\left(\mathbb{R}^{N}\right)\) with maximal domain \[D(\Delta) :=\left\{u\in L^{2}\left(\mathbb{R}^{N}\right):\Delta u\in L^{2} \left(\mathbb{R}^{N}\right)\right\}\] \[=H^{2}\left(\mathbb{R}^{N}\right),\] is self-adjoint and generates the heat \(C_{0}\)-semigroup which is explicitly defined by \(U(t)\colon L^{2}\left(\mathbb{R}^{N}\right)\to L^{2}\left(\mathbb{R}^{N}\right)\), \[U(0) =I,\] \[(U(t)f)(x) =\frac{1}{(4\pi t)^{N/2}}\int_{\mathbb{R}^{N}}\mathrm{e}^{- \frac{|x-y|^{2}}{4t}}f(y)\,\mathrm{d}y,\qquad t>0,\quad x\in\mathbb{R}^{N}\] for every \(f\in L^{2}\left(\mathbb{R}^{N}\right)\), where \(|\cdot|\) denotes the Euclidean norm on \(\mathbb{R}^{N}\). We clearly see that \[U(t)f=h_{t}\ast f,\] where \[h_{t}(y)=\frac{1}{(4\pi t)^{N/2}}\mathrm{e}^{-\frac{|y|^{2}}{4t}},\quad t>0, \quad y\in\mathbb{R}^{N}.\] Note that the heat semigroup \((U(t))_{t\geq 0}\) is analytic and contractive on \(L^{2}\left(\mathbb{R}^{N}\right)\). ### The drift part Let \(B\) be the drift matrix in (1). We consider the drift operator \[\mathcal{L}=Bx\cdot\nabla=\sum_{i,j=1}^{N}b_{ij}x_{j}\partial_{x_{i}},\] with its maximal domain \[D(\mathcal{L})=\left\{u\in L^{2}\left(\mathbb{R}^{N}\right):\mathcal{L}u\in L ^{2}\left(\mathbb{R}^{N}\right)\right\},\] where \(\mathcal{L}u\) is understood in the sense of distributions. The operator \(\left(\mathcal{L},D(\mathcal{L})\right)\) is closed in \(L^{2}\left(\mathbb{R}^{N}\right)\) and the space \(C_{c}^{\infty}(\mathbb{R}^{N})\) is a core of \(\mathcal{L}\). 
The operator \(\mathcal{L}\) generates a \(C_{0}\)-group \(\left(S(t)\right)_{t\in\mathbb{R}}\) given by \[\left(S(t)f\right)\left(x\right)=f\left(\mathrm{e}^{tB}x\right),\quad t\in\mathbb{R},\quad x\in\mathbb{R}^{N} \tag{3}\] for \(f\in L^{2}\left(\mathbb{R}^{N}\right)\). Moreover, \[\|S(t)f\|=\mathrm{e}^{-\frac{t}{2}\operatorname{tr}(B)}\|f\|,\qquad t\in\mathbb{R} \tag{4}\] for every \(f\in L^{2}\left(\mathbb{R}^{N}\right)\). ### The Ornstein-Uhlenbeck semigroup Let \(B^{*}\) denote the transpose matrix of \(B\). We introduce the matrices \[Q_{t}=\int_{0}^{t}\mathrm{e}^{sB}\,\mathrm{e}^{sB^{*}}\,\mathrm{d}s,\qquad t>0,\] which are symmetric and positive definite. The Ornstein-Uhlenbeck semigroup \(T(t)\colon L^{2}\left(\mathbb{R}^{N}\right)\to L^{2}\left(\mathbb{R}^{N}\right)\) is given by Kolmogorov's formula \[T(0) =I,\] \[(T(t)f)(x) =\frac{1}{\sqrt{(4\pi)^{N}\det Q_{t}}}\int_{\mathbb{R}^{N}}\mathrm{e}^{-\frac{1}{4}\left\langle Q_{t}^{-1}y,y\right\rangle}f\left(\mathrm{e}^{tB}x-y\right)\mathrm{d}y,\quad t>0,\quad x\in\mathbb{R}^{N}\] for every \(f\in L^{2}\left(\mathbb{R}^{N}\right)\). This can be written as \[T(t)f=S(t)\left(g_{t}*f\right), \tag{5}\] where \[g_{t}(y)=\frac{1}{\sqrt{(4\pi)^{N}\det Q_{t}}}\mathrm{e}^{-\frac{1}{4}\left\langle Q_{t}^{-1}y,y\right\rangle},\quad t>0,\quad y\in\mathbb{R}^{N}. \tag{6}\] We now consider the Ornstein-Uhlenbeck operator \(\mathcal{A}=\Delta+Bx\cdot\nabla=\Delta+\mathcal{L}\). Let us define the maximal domain of \(\mathcal{A}\) by \[D\left(\mathcal{A}\right)=\left\{u\in L^{2}\left(\mathbb{R}^{N}\right)\cap H_{\mathrm{loc}}^{2}\left(\mathbb{R}^{N}\right):\mathcal{A}u\in L^{2}\left(\mathbb{R}^{N}\right)\right\}.\] The space \(C_{c}^{\infty}\left(\mathbb{R}^{N}\right)\) is a core for \(\mathcal{A}\). More precisely, \(D\left(\mathcal{A}\right)\) is characterized by \[D\left(\mathcal{A}\right)=\left\{u\in H^{2}\left(\mathbb{R}^{N}\right):Bx\cdot\nabla u\in L^{2}\left(\mathbb{R}^{N}\right)\right\}=D(\Delta)\cap D(\mathcal{L}).\] It is known that \(\left(\mathcal{A},D\left(\mathcal{A}\right)\right)\) generates a \(C_{0}\)-semigroup on \(L^{2}\left(\mathbb{R}^{N}\right)\) given by \(\left(T(t)\right)_{t\geq 0}\) which satisfies the estimate \[\|T(t)\|\leq\mathrm{e}^{-\frac{t}{2}\operatorname{tr}(B)},\qquad t\geq 0,\] where \(\|\cdot\|\) also denotes the operator norm. The \(C_{0}\)-semigroup \(\left(T(t)\right)_{t\geq 0}\) is not analytic in \(L^{2}\left(\mathbb{R}^{N}\right)\) (indeed, it is not an eventually norm continuous semigroup), except for \(B=0\). ## 3. Logarithmic convexity and observability ### Logarithmic convexity estimates The logarithmic convexity method is one of the well-known approaches that have been widely used to prove conditional stability for improperly posed problems such as backward parabolic equations as well as inverse initial data problems. The interested reader is referred to [1, 2, 12, 15, 26] and the references therein. Next we give an abstract definition of a logarithmic convexity estimate. Let \(\left(H,\langle\cdot,\cdot\rangle\right)\) be a Hilbert space with corresponding norm \(\|\cdot\|\). Let \(\left(\mathrm{e}^{tA}\right)_{t\geq 0}\) be a \(C_{0}\)-semigroup on \(H\) associated with its generator \(A\colon D(A)\subset H\to H\). Inspired by [1, 15], we introduce the following definition.
**Definition 3.1**.: We say that the \(C_{0}\)-semigroup \(\left(\mathrm{e}^{tA}\right)_{t\geq 0}\) satisfies a logarithmic convexity estimate for \(\theta>0\) if there exists a constant \(\kappa_{\theta}\geq 1\) and a function \(w\colon(0,\theta)\to(0,1)\), \(w(0)=0\) and \(w(\theta)=1\), so that the following estimate holds \[\left\|\mathrm{e}^{tA}u\right\|\leq\kappa_{\theta}\|u\|^{1-w(t)}\left\| \mathrm{e}^{\theta A}u\right\|^{w(t)} \tag{7}\] for all \(t\in[0,\theta]\) and all \(u\in H\). It is known that the logarithmic convexity holds for self-adjoint bounded above operators with \(\kappa_{\theta}=1\) and \(w(t)=\dfrac{t}{\theta}\), see e.g. [11, Section 2]. If the operator is subordinated to its symmetric part, there is a logarithmic convexity result in [12, Theorem 3.1.3], where \(w(t)=\dfrac{1-\mathrm{e}^{-ct}}{1-\mathrm{e}^{-c\theta}}\) for some constant \(c>0\). More generally, it holds for analytic semigroups with a more general function \(w(t)\). We refer to [3] for more details. **Remark 1**.: Note that the estimate (7) implies the backward uniqueness for the semigroup \(\left(\mathrm{e}^{tA}\right)_{t\geq 0}\), namely the following property: if \(\mathrm{e}^{\theta A}u=0\) for some \(u\in H\), then \(u=0\). Furthermore, the logarithmic convexity can be seen as a stability estimate for the backward uniqueness. More precisely, if one assumes a priori bound on the initial data \(\|u\|\leq R\) for some positive constant \(R\), the norm of the corresponding solution \(\left\|\mathrm{e}^{tA}u\right\|\), \(t\in(0,\theta)\) is small whenever \(\left\|\mathrm{e}^{\theta A}u\right\|\) is small. Since the Laplace operator with domain \(H^{2}\left(\mathbb{R}^{N}\right)\) is self-adjoint and negative on \(L^{2}\left(\mathbb{R}^{N}\right)\), the heat semigroup \(\left(U(t)\right)_{t\geq 0}\) satisfies the logarithmic convexity \[\|U(t)f\|\leq\|f\|^{1-\frac{t}{\theta}}\|U(\theta)f\|^{\frac{t}{ \theta}},\qquad f\in L^{2}\left(\mathbb{R}^{N}\right),\qquad t\in[0,\theta].\] In fact, the function \(t\mapsto\|U(t)f\|^{2}\) is even log-convex in this case. The same holds for the drift \(C_{0}\)-group \(\left(S(t)\right)_{t\in\mathbb{R}}\). Indeed, using (4), we see that \[\|S(t)f\|=\|f\|^{1-\frac{t}{\theta}}\|S(\theta)f\|^{\frac{t}{ \theta}},\qquad f\in L^{2}\left(\mathbb{R}^{N}\right),\qquad t\in[0,\theta].\] Next we prove a logarithmic convexity estimate for the Ornstein-Uhlenbeck semigroup \(\left(T(t)\right)_{t\geq 0}\). **Proposition 1**.: _There exists a constant \(\kappa_{\theta}\geq 1\) such that the following estimate holds_ \[\|T(t)f\|\leq\kappa_{\theta}\|f\|^{1-\frac{t}{\theta}}\|T(\theta)f\|^{\frac{t }{\theta}},\qquad f\in L^{2}\left(\mathbb{R}^{N}\right),\qquad t\in[0,\theta]. \tag{8}\] Proof.: Using (4) and (5), the estimate (8) is equivalent to the following inequality \[\|g_{t}\ast f\|\leq\kappa_{\theta}\|f\|^{1-\frac{t}{\theta}}\|g_{\theta}\ast f\| ^{\frac{t}{\theta}},\qquad f\in L^{2}\left(\mathbb{R}^{N}\right),\qquad t\in(0, \theta]. \tag{9}\] Invoking the Fourier transform denoted by \(\widehat{f}\) for any \(f\in L^{2}\left(\mathbb{R}^{N}\right)\), we obtain the identities \(\widehat{g}_{t}(\xi)=\mathrm{e}^{-\langle Q_{t}\xi,\xi\rangle}\) and \(\widehat{g_{t}\ast f}(\xi)=\mathrm{e}^{-\langle Q_{t}\xi,\xi\rangle}\widehat{f }(\xi)\), \(\xi\in\mathbb{R}^{N}\). 
We first claim that there exists a constant \(c_{\theta}>0\) such that \[\langle Q_{t}\xi,\xi\rangle\geq c_{\theta}\frac{t}{\theta}\,\langle Q_{\theta} \xi,\xi\rangle\qquad\forall t\in[0,\theta],\quad\forall\xi\in\mathbb{R}^{N}. \tag{10}\] Indeed, by continuity of \(t\mapsto\frac{Q_{t}}{t}\) and the fact that \(\frac{Q_{t}}{t}\to I_{N}\) (the identity matrix) as \(t\to 0\), there exist positive constants \(c_{1}\) and \(c_{2}\) such that \[c_{1}t|\xi|^{2}\leq\langle Q_{t}\xi,\xi\rangle\leq c_{2}t|\xi|^{2}\qquad \forall t\in[0,\theta],\quad\forall\xi\in\mathbb{R}^{N}.\] This inequality implies (10) with \(c_{\theta}=\frac{c_{1}}{c2}\). Setting \(\kappa_{\theta}=\mathrm{e}^{c_{\theta}}\), the inequality (10) entails that \[\mathrm{e}^{-2\langle Q_{t}\xi,\xi\rangle}|\widehat{f}(\xi)|^{2}\leq\kappa_{ \theta}^{2}|\widehat{f}(\xi)|^{2\left(1-\frac{t}{\theta}\right)}\left(\mathrm{ e}^{-2\langle Q_{\theta}\xi,\xi\rangle}|\widehat{f}(\xi)|^{2}\right)^{\frac{t}{ \theta}}.\] Let \(t\in(0,\theta)\) be fixed. Applying the Holder inequality for \(p=\frac{\theta}{\theta-t}\), \(q=\frac{\theta}{t}\) and the functions \(F(\xi)=|\widehat{f}(\xi)|^{2\left(1-\frac{t}{\theta}\right)}\), \(G(\xi)=\left(\mathrm{e}^{-2\langle Q_{\theta}\xi,\xi\rangle}|\widehat{f}(\xi )|^{2}\right)^{\frac{t}{\theta}}\), we obtain (9). This completes the proof of (8). **Remark 2**.: Proposition 1, aside from being of independent interest, implies the backward uniqueness property for the Ornstein-Uhlenbeck semigroup. This property plays an important role in some control problems. ### Observability estimate In this subsection, we discuss some recent results on the final state observability for the system (1). We refer to [28, 33] for the general theory. In [6], the authors have shown the final state observability of the system (1) at any positive time \(\theta\) whenever the observation region \(\omega\subset\mathbb{R}^{N}\) is a nonempty open set satisfying the geometric condition \[\exists\delta,r>0,\forall y\in\mathbb{R}^{N},\exists y^{\prime}\in\omega,\quad B \left(y^{\prime},r\right)\subset\omega\text{ and }\left|y-y^{\prime}\right|<\delta. \tag{11}\] This condition was known as a sufficient condition ensuring the final state observability of the heat equation on \(\mathbb{R}^{N}\) at any positive time [25]. The later property has been recently characterized by the thickness of the set \(\omega\)[9, 29]: **Definition 3.2**.: Let \(\gamma\in(0,1]\) and \(a=(a_{1},\ldots,a_{N})\in\left(\mathbb{R}_{+}^{*}\right)^{N}\). Let us denote by \(\mathcal{C}=\left[0,a_{1}\right]\times\ldots\times\left[0,a_{N}\right]\). * A measurable set \(\omega\subset\mathbb{R}^{N}\) is said to be \((\gamma,a)\)-thick if \[\left|\omega\cap(x+\mathcal{C})\right|\geq\gamma\prod_{j=1}^{N}a_{j}\qquad \forall x\in\mathbb{R}^{N},\] where \(\left|E\right|\) denotes the Lebesgue measure of a measurable set \(E\subset\mathbb{R}^{N}\). * A measurable set \(\omega\subset\mathbb{R}^{N}\) is thick if there exist \(\gamma\in(0,1]\) and \(a\in\left(\mathbb{R}_{+}^{*}\right)^{N}\) such that \(\omega\) is \((\gamma,a)\)-thick. * A measurable set \(\omega\subset\mathbb{R}^{N}\) is \(\gamma\)-thick at scale \(L>0\) if \(\omega\) is \((\gamma,a)\)-thick and \(a=(L,\ldots,L)\in\left(\mathbb{R}_{+}^{*}\right)^{N}\). We emphasize that the notion of thickness is weaker than the condition (11). 
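The thickness condition is completely explicit and can be tested on examples. The following Python sketch is a one-dimensional toy computation of ours (grid, window range and example sets are ad hoc); it estimates the smallest relative measure of \(\omega\cap(x+[0,L])\) over a range of positions \(x\), which is the quantity that must stay bounded below by \(\gamma\) for a thick set.

```python
import numpy as np

def worst_window_fraction(indicator, L, x_range=(-50.0, 50.0), dx=1e-3):
    # smallest value of (measure of omega inside [x, x+L]) / L over window positions x in x_range
    xs = np.arange(x_range[0], x_range[1], dx)
    chi = indicator(xs).astype(float)
    window = int(round(L / dx))
    cum = np.concatenate(([0.0], np.cumsum(chi))) * dx
    fractions = (cum[window:] - cum[:-window]) / L
    return fractions.min()

# omega = union over integers k of [k, k + 1/4]: a (1/4, 1)-thick subset of R
periodic_quarters = lambda x: (x - np.floor(x)) < 0.25
print("worst fraction, periodic quarters:", worst_window_fraction(periodic_quarters, 1.0))

# a half-line is not thick: windows far to the right contain none of it
half_line = lambda x: x < 0.0
print("worst fraction, half-line        :", worst_window_fraction(half_line, 1.0))
```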
Furthermore, the thickness of the observation set \(\omega\) turns out to be a sufficient condition that ensures the final state observability of several parabolic equations at any positive time, including the system (1). Next we state the observability inequality of system (1) from thick sets.

**Proposition 2**.: _Let \(\theta>0\) be fixed and let \(\omega\subset\mathbb{R}^{N}\) be a thick set. Let \(u\) be the mild solution of (1). Then there exists a positive constant \(\kappa_{\theta}\) such that for all \(u_{0}\in L^{2}\left(\mathbb{R}^{N}\right)\), we have_ \[\|u(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}^{2}\leq\kappa_{\theta}\int_{0}^{\theta}\|u(t,\cdot)\|_{L^{2}(\omega)}^{2}\,\mathrm{d}t. \tag{12}\]

The above result has been recently proven in [4] for the possibly degenerate fractional Ornstein-Uhlenbeck equation under a Kalman rank condition. It improves its counterpart in [6] regarding the Ornstein-Uhlenbeck equation for observation sets \(\omega\) satisfying (11), which has also been considered in [3].

## 4. Logarithmic stability for a class of initial data

We introduce the set of admissible initial data: \[\mathcal{I}_{R}=\left\{u_{0}\in D(\mathcal{A})\colon\|u_{0}\|_{D(\mathcal{A})}\leq R\right\}\] for a fixed constant \(R>0\).

**Theorem 4.1**.: _Let \(\theta>0\) be fixed and let \(\omega\subset\mathbb{R}^{N}\) be a thick set. There exist positive constants \(C\) and \(C_{1}\) depending on \((N,\theta,\omega,R)\) such that, for all \(u_{0}\in\mathcal{I}_{R}\),_ \[\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}\leq\frac{-C}{\log\left(C_{1}\|u\|_{H^{1}(0,\theta;L^{2}(\omega))}\right)} \tag{13}\] _for \(\|u\|_{H^{1}(0,\theta;L^{2}(\omega))}\) sufficiently small, where \(u\) is the solution of system (1)._

Proof.: Let \(z=u_{t}\). Applying (12) to \(z\), we obtain \[\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}\leq C\|z\|_{L^{2}(0,\theta;L^{2}(\omega))}. \tag{14}\] Applying the logarithmic convexity estimate to \(z\) (Proposition 1), we have \[\|z(t,\cdot)\|_{L^{2}(\mathbb{R}^{N})}\leq C^{1-\frac{t}{\theta}}\,\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}^{\frac{t}{\theta}},\quad 0\leq t\leq\theta. \tag{15}\] Since \[u(0,\cdot)=-\int_{0}^{\theta}z(\tau,\cdot)\,\mathrm{d}\tau+u(\theta,\cdot),\] by (14) and (15) we have \[\|u(0,\cdot)\|_{L^{2}(\mathbb{R}^{N})} \leq\int_{0}^{\theta}\|z(\tau,\cdot)\|_{L^{2}(\mathbb{R}^{N})}\,\mathrm{d}\tau+\|u(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}\] \[\leq C\int_{0}^{\theta}\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}^{\frac{\tau}{\theta}}\,\mathrm{d}\tau+C\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}\] \[\leq C\theta\frac{\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}-1}{\log\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}}+C\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}\] \[\leq C\left(\frac{E-1}{\log E}+E\right),\] where we denoted \(E:=\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}+C\|z\|_{L^{2}(0,\theta;L^{2}(\omega))}\); in the third inequality we used the elementary identity \(\int_{0}^{\theta}a^{\tau/\theta}\,\mathrm{d}\tau=\theta\,\frac{a-1}{\log a}\) with \(a=\|z(\theta,\cdot)\|_{L^{2}(\mathbb{R}^{N})}\). By (14), when the norm \(\|z\|_{L^{2}(0,\theta;L^{2}(\omega))}\) is sufficiently small, we obtain \[0<E\leq C_{1}\|z\|_{L^{2}(0,\theta;L^{2}(\omega))}<1 \tag{16}\] for some constant \(C_{1}>0\).
Using the inequality \(\dfrac{\tau-1}{\log\tau}+\tau\leq-\dfrac{1+\mathrm{e}^{-2}}{\log\tau}\) for \(0<\tau<1\), with (14) we obtain \[\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}\leq\dfrac{-C}{\log\big(C_{1}\|u\|_{H^{1}(0,\theta;L^{2}(\omega))}\big)}.\] At this point, some comments are in order concerning the particular case of the heat equation (when \(B=0\)): Theorem 4.1 enables us to derive a logarithmic stability result for a class of initial data of the heat equation posed in \(\mathbb{R}^{N}\). A similar result was shown in [30] for the heat equation on a bounded domain \(\Omega\subset\mathbb{R}^{N}\) with homogeneous Dirichlet boundary conditions on \(\partial\Omega\). Since the heat semigroup is analytic on \(L^{2}\left(\mathbb{R}^{N}\right)\), we can show an improved logarithmic stability result for the large class of initial data \[\mathcal{I}_{\varepsilon,R}:=\left\{u_{0}\in H^{2\varepsilon}\left(\mathbb{R}^{N}\right):\|u_{0}\|_{H^{2\varepsilon}(\mathbb{R}^{N})}\leq R\right\}\] for fixed \(\varepsilon\in(0,1)\) and \(R>0\). Although the techniques used are quite classical, we give a full proof for the reader's convenience.

**Theorem 4.2**.: _Let \(\theta>0\) be fixed and let \(\omega\subset\mathbb{R}^{N}\) be a thick set. Assume also that \(p\in\left(1,\dfrac{1}{1-\varepsilon}\right)\) and \(s\in\left(0,1-\dfrac{1}{p}\right)\). Then there exists a positive constant \(K(\varepsilon,R,\theta,\kappa_{\theta},p,s)\) such that, for all \(u_{0}\in\mathcal{I}_{\varepsilon,R}\), we have_ \[\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}\leq K\left(\dfrac{\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}^{p}-1}{\log\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}}\right)^{\frac{\varepsilon}{p}}, \tag{17}\] _where \(u\) is the solution of system (1) with \(B=0\). Moreover, if \(\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}<1\), then_ \[\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}\leq K\left(-\log\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}\right)^{-\frac{\varepsilon}{p}}. \tag{18}\]

Proof.: Since \(u_{0}\in\mathcal{I}_{\varepsilon,R}\), we deduce by the logarithmic convexity that \[\|u(t)\|_{L^{2}(\mathbb{R}^{N})}\leq R^{\frac{\theta-t}{\theta}}\left\|u\left(\theta\right)\right\|_{L^{2}(\mathbb{R}^{N})}^{\frac{t}{\theta}}\quad\left(0\leq t\leq\theta\right).\] By integrating the above relation between \(0\) and \(\theta\), we obtain \[\int_{0}^{\theta}\|u(t)\|_{L^{2}(\mathbb{R}^{N})}^{p}\,\mathrm{d}t \leq R^{p}\int_{0}^{\theta}\mathrm{e}^{p\frac{t}{\theta}\log\left(R^{-1}\|u(\theta)\|_{L^{2}(\mathbb{R}^{N})}\right)}\mathrm{d}t\] \[\leq\theta\dfrac{\|u\left(\theta\right)\|_{L^{2}(\mathbb{R}^{N})}^{p}-R^{p}}{\log\|u\left(\theta\right)\|_{L^{2}(\mathbb{R}^{N})}^{p}-\log R^{p}},\] and thus \[\|u\|_{L^{p}(0,\theta;L^{2}(\mathbb{R}^{N}))}\leq\theta^{\frac{1}{p}}\left(\dfrac{\|u\left(\theta\right)\|_{L^{2}(\mathbb{R}^{N})}^{p}-R^{p}}{\log\|u\left(\theta\right)\|_{L^{2}(\mathbb{R}^{N})}^{p}-\log R^{p}}\right)^{\frac{1}{p}}.
\tag{19}\] On the other hand, by the semigroup representation of the solution \(u\), we have \[u_{t}=-(-\Delta)^{1-\varepsilon}\mathrm{e}^{t\Delta}(-\Delta)^{\varepsilon}u_ {0}.\] Moreover, since \(\Delta\) is the generator of an analytic semigroup on \(L^{2}\left(\mathbb{R}^{N}\right)\), we have \[\left\|u_{t}(t)\right\|_{L^{2}(\mathbb{R}^{N})}\leq\frac{1}{t^{1-\varepsilon}} \left\|u_{0}\right\|_{H^{2\varepsilon}(\mathbb{R}^{N})}.\] We deduce from the above inequality that for \(p\in\left(1,\frac{1}{1-\varepsilon}\right)\), \[\left\|u_{t}\right\|_{L^{p}(0,\theta;L^{2}(\mathbb{R}^{N}))}\leq R\frac{\theta ^{\frac{1}{p}-(1-\varepsilon)}}{(1-p(1-\varepsilon))^{\frac{1}{p}}}.\] Using \(\left\|u(t)\right\|_{L^{2}(\mathbb{R}^{N})}\leq\left\|u(0)\right\|_{L^{2}( \mathbb{R}^{N})}\leq R\) for all \(t\), we also have \[\left\|u\right\|_{L^{p}(0,\theta;L^{2}(\mathbb{R}^{N}))}\leq R\theta^{\frac{1 }{p}},\] and therefore \[\left\|u\right\|_{W^{1,p}(0,\theta;L^{2}(\mathbb{R}^{N}))}\leq R\theta^{\frac {1}{p}}\left(1+\frac{1}{\theta^{(1-\varepsilon)}(1-p(1-\varepsilon))^{\frac{ 1}{p}}}\right). \tag{20}\] Combining (19) and (20), and using a Sobolev interpolation, we obtain for all \(0<s<1\) that \[\left\|u\right\|_{W^{1-s,p}(0,\theta;L^{2}(\mathbb{R}^{N}))}\leq CR ^{1-s}\theta^{\frac{1}{p}}\left(1+\frac{1}{\theta^{(1-\varepsilon)}(1-p(1- \varepsilon))^{\frac{1}{p}}}\right)^{1-s}\] \[\times\left(\frac{\left\|u\left(\theta\right)\right\|_{L^{2}( \mathbb{R}^{N})}^{p}-R^{p}}{\log\left\|u\left(\theta\right)\right\|_{L^{2}( \mathbb{R}^{N})}^{p}-\log R^{p}}\right)^{\frac{s}{p}}.\] Using the Sobolev embedding \[W^{1-s,p}\left(0,\theta;L^{2}\left(\mathbb{R}^{N}\right)\right)\subset C\left( \left[0,\theta\right];L^{2}\left(\mathbb{R}^{N}\right)\right)\] for \((1-s)p>1\), there exists \(K=K\left(R,\theta,p,\varepsilon,s\right)>0\) such that \[\left\|u(0)\right\|_{L^{2}(\mathbb{R}^{N})}\leq K\left(\frac{\left\|u\left( \theta\right)\right\|_{L^{2}(\mathbb{R}^{N})}^{p}-R^{p}}{\log\left\|u\left( \theta\right)\right\|_{L^{2}(\mathbb{R}^{N})}^{p}-\log R^{p}}\right)^{\frac{s }{p}}.\] Using the observability inequality (12), \[\left\|u(\theta)\right\|_{L^{2}(\mathbb{R}^{N})}\leq\kappa_{\theta}\|u\|_{L^{ 2}(0,\theta;L^{2}(\omega))},\] we deduce that \[\|u(0)\|_{L^{2}(\mathbb{R}^{N})}\leq K\left(\frac{\kappa_{\theta}^{p}\|u\|_{L^ {2}(0,\theta;L^{2}(\omega))}^{p}-R^{p}}{\log\kappa_{\theta}^{p}\|u\|_{L^{2}( 0,\theta;L^{2}(\omega))}^{p}-\log R^{p}}\right)^{\frac{s}{p}}.\] Then considering both cases \(\kappa_{\theta}\geq R\), \(\kappa_{\theta}<R\), and using again the concavity of the logarithm function, we deduce that \[\|u(0)\|_{L^{2}(\mathbb{R}^{N})}\leq K\left(\frac{\|u\|_{L^{2}(0,\theta;L^{2} (\omega))}^{p}-1}{\log\|u\|_{L^{2}(0,\theta;L^{2}(\omega))}}\right)^{\frac{s}{ p}}.\] This ends the proof. **Remark 3**.: We emphasize that in Theorem 4.2 we have considered observation sets \(\omega\subset\mathbb{R}^{N}\) that are thick. Such a result improves [5, Theorem 1.1] in terms of observation where the authors consider open sets \(\omega\) such that \(\mathbb{R}^{N}\setminus\omega\) is bounded, which is far to be optimal. ## 5. Comments and open problems We have investigated the interplay between observability and inverse problems for a class of equations which represents a prototype of parabolic equations with unbounded coefficients (of gradient type) via the logarithmic convexity estimate. 
More precisely, we have proven that the observability inequality, combined with the logarithmic convexity, implies conditional logarithmic stability of the initial data. In this paper, we have considered a case including a non-analytic semigroup, which has not been studied in the literature within this context. Also, to our knowledge, stability estimates for initial data of parabolic equations with unbounded coefficients have not been considered before, except in [3], where analyticity has played a crucial role. In the particular (but important) case of the heat equation, i.e., the case without the drift term, we can improve some interesting results obtained in the recent paper [5] in terms of the observability regions for the reconstruction of the initial temperatures. More precisely, we can consider sharp observation regions given by thick sets instead of sets whose unobserved region is bounded. Furthermore, for the general case, the results can be improved as far as the observation region is sharpened. In the present paper, we have proven a logarithmic convexity estimate for the Ornstein-Uhlenbeck equation by leveraging the explicit representation formula of the corresponding semigroup. This raises the following problems:

1. Can one prove a logarithmic convexity estimate in the absence of an explicit formula for the associated semigroup, as in the analytic case? For instance, can one consider more general Ornstein-Uhlenbeck operators \(\mathcal{A}u=\operatorname{div}(Q\nabla u)+\langle Bx,\nabla u\rangle\), where \(Q\) and \(B\) are real constant \(N\times N\) matrices and \(Q\) is symmetric positive semidefinite? Under suitable assumptions on the matrices \(Q\) and \(B\), the observability inequality holds true, see [8, Subsection 6.4]. Note that this case also covers some situations where the observation region is not thick.
2. Given the perturbation argument in Section 2, it is natural to ask whether the logarithmic convexity is preserved when an abstract operator \(A\) is perturbed by another operator. This is true, for instance, in the analytic case, when \(A\) is perturbed by a bounded operator or, more generally, a relatively \(A\)-bounded operator, since analyticity is preserved in this case, see e.g. [10, Theorem 2.10, p. 176].

## Acknowledgment

The first named author would like to thank Giorgio Metafune for a fruitful discussion and invaluable comments.
2306.08720
A Client-server Deep Federated Learning for Cross-domain Surgical Image Segmentation
This paper presents a solution to the cross-domain adaptation problem for 2D surgical image segmentation, explicitly considering the privacy protection of distributed datasets belonging to different centers. Deep learning architectures in medical image analysis necessitate extensive training data for better generalization. However, obtaining sufficient diagnostic and surgical data is still challenging, mainly due to the inherent cost of data curation and the need of experts for data annotation. Moreover, increased privacy and legal compliance concerns can make data sharing across clinical sites or regions difficult. Another ubiquitous challenge the medical datasets face is inevitable domain shifts among the collected data at the different centers. To this end, we propose a Client-server deep federated architecture for cross-domain adaptation. A server hosts a set of immutable parameters common to both the source and target domains. The clients consist of the respective domain-specific parameters and make requests to the server while learning their parameters and inferencing. We evaluate our framework in two benchmark datasets, demonstrating applicability in computer-assisted interventions for endoscopic polyp segmentation and diagnostic skin lesion detection and analysis. Our extensive quantitative and qualitative experiments demonstrate the superiority of the proposed method compared to competitive baseline and state-of-the-art methods. Codes are available at: https://github.com/thetna/distributed-da
Ronast Subedi, Rebati Raman Gaire, Sharib Ali, Anh Nguyen, Danail Stoyanov, Binod Bhattarai
2023-06-14T19:49:47Z
http://arxiv.org/abs/2306.08720v1
# A Client-server Deep Federated Learning for Cross-domain Surgical Image Segmentation ###### Abstract This paper presents a solution to the cross-domain adaptation problem for 2D surgical image segmentation, explicitly considering the privacy protection of distributed datasets belonging to different centers. Deep learning architectures in medical image analysis necessitate extensive training data for better generalization. However, obtaining sufficient diagnostic and surgical data is still challenging, mainly due to the inherent cost of data curation and the need of experts for data annotation. Moreover, increased privacy and legal compliance concerns can make data sharing across clinical sites or regions difficult. Another ubiquitous challenge the medical datasets face is inevitable domain shifts among the collected data at the different centers. To this end, we propose a Client-server deep federated architecture for cross-domain adaptation. A server hosts a set of immutable parameters common to both the source and target domains. The clients consist of the respective domain-specific parameters and make requests to the server while learning their parameters and inferencing. We evaluate our framework in two benchmark datasets, demonstrating applicability in computer-assisted interventions for endoscopic polyp segmentation and diagnostic skin lesion detection and analysis. Our extensive quantitative and qualitative experiments demonstrate the superiority of the proposed method compared to competitive baseline and state-of-the-art methods. Codes are available at: [https://github.com/thetna/distributed-da](https://github.com/thetna/distributed-da). Keywords:Domain Adaptation Federated Learning Decentralised Storage Privacy ## 1 Introduction The deployment of artificial intelligence (AI) technology in medical image analysis is rapidly growing, and training robust deep network architectures demands millions of annotated examples. Despite significant progress in establishing large-scale medical datasets, these are still limited in some clinical indications, especially in surgical data science and computer-assisted interventions [21]. Scaling training data needs multi-site collaboration and data sharing [1], which can be complex due to regulatory requirements (e.g. the EU General Data Protection Regulation [31], and China's cyber power [13]), privacy, and legal concerns. Additionally, even after training, practical AI model deployment in the clinic will require fine-tuning or optimization to local conditions and updates [11]. Therefore, architectures trained in federated and distributed ways to tackle cross-domain adaptation problems are critical. Yet, developing such architectures has challenges [25]. Several works [29, 7, 8] have been proposed to tackle the problem of cross-domain adaptation in medical imaging. However, these methods require raw source and target domain data and cannot address the ever-increasing privacy concerns in sharing medical data. To circumvent the problem of privacy protection, there is a lot of research interest growing in Federated Learning (FL) in the medical domain [25, 18, 27, 30, 28, 15, 24].Some methods even rely on synthetic data [10] to avoid sharing real data. For more details, we refer readers to a survey [23] on federated learning for smart health care. The common drawback of most existing methods [25, 18, 15, 24] is that these methods are not designed for the domain shift problem. 
The most common topology in the FL workflow is averaging the local gradients (FedAvg) at the center and peer-to-peer gradient (FedP2P) sharing. These architectures are effective when data are independent and identically distributed (IID) in every client. In reality, domain shift is quite prevalent as data collected at different centers tend to be center specific. In Fig. 1, we can see the training examples for polyp segmentation collected at different centres. These examples show the discrepancy in lighting, camera pose and modalities in different centers. Some recent works, such as by Guo et al. [9] and FedDG by Liu et al. [19], address cross-domain problems in FL. However, [9] limits to a source-target pair at a time. Also, they employed adversarial loss to align the parameters, which is difficult to optimize. Similarly, FedDG [19] shares the information between the sources in the form of amplitudes of images. Their evaluation is limited to fundus images and MRI. Figure 1: Sample training examples collected from various centres for polyp segmentation (left); Sizes of training and test set at different centres for polyp segmentation (middle) and skin lesion segmentation (right). To tackle the problems of cross-domain adaptation and privacy protection in surgical image segmentation, we propose a simple yet effective Client-server FL architecture consisting of a server and multiple clients, as shown in Fig. 2. A server hosts a set of _immutable_ task-specific parameters common to all the clients. Whereas every client requests the server to learn their domain-specific parameters locally and make the inference. In particular, every client learns an encoder's parameters to obtain an image's latent representation. These latent representations and ground truth masks are sent to the server. The decoder deployed on the server makes the predictions and computes the loss. The gradients are computed and updated only on the encoder to align the client's features with task-specific parameters hosted on the server. Aligning domain-specific parameters to common parameters helps diminish the gap between the source and target domains. We can draw an analogy between our framework with public-key cryptography. A client's network parameter is equivalent to a private key, and the decoder's parameters shared on the server are equivalent to the public key. Thus a client only with access to its private key can transfer its latent vector to the server containing the public key to obtain the semantic mask. Distributed storage of the parameters diminishes the risk of model parameter theft and adversarial attacks [20]. Moreover, each client communicates to the server only via a latent image representation, which prevents exposing the information of the raw data collected on the client side. It is possible to encrypt data transfer between the server and clients to make it more secure. Finally, the server receives only fixed latent dimension representations, making it agnostic to the client's architecture. This enables clients to communicate with the server concurrently, improving efficiency. Likewise, none of the centres can modify the parameters deployed on the server; this would prevent the memorisation of client-specific information and parameter poisoning on the server [17]. To sum up, we propose a Client-server Federated Learning method for cross-domain surgical image segmentation. We applied our method to two multi-centre datasets for endoscopic polyp and skin lesion segmentation. 
We compared with multiple baselines, including recent works on cross-domain FL [19, 9] and obtained a superior performance. ## 2 Method **Background:** We consider a scenario where we have \(C_{1},C_{2},\ldots C_{n}\) represent \(n\) number of different institution's centres located at various geographical regions. Each centre collects its data in the form of tuple \((\mathbf{x},\mathbf{y})\) where \(\mathbf{x}\in\mathbb{R}^{w\times h\times c},\mathbf{y}\in\mathbb{R}^{w\times h}\), where, \(w,h,c\) represent the width, height, and number of channels of an image. The annotated examples collected at the different centers are not IID due to variations in the illumination, the instruments used to acquire data, the ethnicity of the patients, the expertise of the clinician who collects the data, etc. We denote the total number of annotated pairs in each centre by \(N_{n}\). In this paper, one of the major goals is to address the problem of domain adaptation, avoiding the need for the sharing of raw data to protect privacy. **Learning Source Domain Parameters:** First, we train a semantic segmentation network on the source data. In Figure 2, the Source domain block shows the training of source domain/centre parameters. For us, data collected on Centre 1, \(C_{1}\) is source data. We employ fully-convolutional encoder-decoder architecture. Such architectures are quite popular for semantic segmentation [2, 26]. With the randomly initialised parameters, we minimise the objective of the Equation 1. In Equation 1, \(\theta^{e}_{C_{1}}\) and \(\theta^{d}_{C_{1}}\) represent the learnable parameters of the encoder and decoder, respectively. \[\begin{split} L([\theta^{e}_{C_{1}};\theta^{d}_{C_{1}}])=-\frac{ 1}{N_{1}}\sum_{i=1}^{N_{1}}\sum_{i=j}^{W}\sum_{k=1}^{H}\mathbf{y}_{ijk}\log \hat{\mathbf{y}}_{ijk}\\ +(1-\mathbf{y}_{ijk})\log(1-\hat{\mathbf{y}}_{ijk})\end{split} \tag{1}\] **Setting-up Server Infra-structure:** Figure 2 Server infra-structure block shows the setting up of the server infrastructure. Once we learn the parameters of the network from the source (\(C_{1}\)) data set, we upload the decoder (\(\theta^{d}_{C_{1}}\)) parameters on the server to share with every target client. The decoder module specializes to segment anatomies, given the encoder module's latent vector representation of the input image. As this segmentation task is common to all the centres, we propose to use a single decoder for all the centres. The previous works on cross-modal [4] and cross-feature [3] representations learning for cross Figure 2: The schematic diagram of the proposed framework. There are three major components: Source domain, Target domains, and Server infrastructure to share common parameters. domain classification in computer vision employed the idea of sharing Convolutional Neural Networks' top layers parameters. However, none of these methods were employed in Federated Learning. The idea of sharing top layers parameters is in contrast to the conventional transfer learning [12] where the parameters are initialised with a model pre-trained on Imagenet and fine-tuned only the task-specific fully connected layers. We freeze the shared decoder parameters of the source. This arrangement brings advantages for privacy protection by preventing weight poisoning attacks [17]. Weight poisoning attacks alter the pre-trained model's parameters to create a backdoor. The parameters of the encoders can be shipped to the target client as per demand. 
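As a rough illustration of the workflow just described, the sketch below first trains a toy encoder-decoder on the source centre with the pixel-wise binary cross-entropy of Eq. 1, then freezes the decoder (playing the role of the parameters uploaded to the server) and adapts a target centre's encoder against it, as formalized by Eq. 2 in the next paragraph. The tiny networks, tensor shapes and names are placeholders rather than the UNet configuration used in the paper, and the actual client-server transport and encryption are omitted.

```python
# Illustrative sketch (placeholder networks, not the paper's UNet): source-domain
# training (Eq. 1), then frozen-decoder adaptation of a target encoder (Eq. 2).
import torch
import torch.nn as nn

def make_encoder():                      # image -> latent feature map
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
    )

def make_decoder():                      # latent -> per-pixel mask logits
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 4, 2, 1),
    )

pixel_bce = nn.BCEWithLogitsLoss()       # pixel-wise binary cross-entropy

# --- stage 1: source centre C_1 trains encoder and decoder end-to-end (Eq. 1) ---
src_encoder, decoder = make_encoder(), make_decoder()
src_opt = torch.optim.Adam(
    list(src_encoder.parameters()) + list(decoder.parameters()), lr=2e-4)

def source_step(images, masks):
    src_opt.zero_grad()
    loss = pixel_bce(decoder(src_encoder(images)), masks)
    loss.backward()                      # both modules are updated on the source centre
    src_opt.step()
    return loss.item()

# --- stage 2: the decoder is frozen on the "server"; a target centre C_i adapts
# --- its own encoder against it, exchanging only latents and masks (Eq. 2) ------
for p in decoder.parameters():
    p.requires_grad_(False)              # immutable shared task-specific parameters

tgt_encoder = make_encoder()             # optionally initialised from src_encoder
tgt_opt = torch.optim.Adam(tgt_encoder.parameters(), lr=2e-4)

def target_adaptation_step(images, masks):
    tgt_opt.zero_grad()
    latent = tgt_encoder(images)         # computed locally; raw images never leave C_i
    loss = pixel_bce(decoder(latent), masks)   # prediction and loss on the server side
    loss.backward()                      # decoder frozen -> gradients reach only the encoder
    tgt_opt.step()
    return loss.item()

# toy usage with dummy data
imgs = torch.rand(2, 3, 64, 64)
msks = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(source_step(imgs, msks), target_adaptation_step(imgs, msks))
```

In a real deployment the two step functions would run in separate processes, with only the latent tensor and the ground-truth mask crossing the network boundary.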
Sometimes, the clients may demand these parameters to initialize their local networks when the training data is very small. **Federated Cross-domain Adaptation:** Target centres other than the source centre deploy only the encoder network of their choice. In Figure 2, the target domain block depicts it. Every centre feeds its images to its encoder network during training, generating the respective latent representations. The latent representation and the ground truth (\(\mathbf{y}_{i}\)) mask from each target centre are pushed to the server where the pre-trained source decoder, \(\theta_{C1}^{d}\), is placed. The decoder feeds the latent representation, which predicts the output segmentation labels (\(\hat{\mathbf{y}}_{i}\)). We learn the parameters of the target encoders (\(\theta_{Ci}^{e}\)) to minimize the objective given in Equation 2. Since the decoder parameters are frozen and shared with every client, only the target encoder's parameters are updated on the client side. This helps to align the latent representations to that of the source decoder's parameters and maximises the benefit from the task-specific discriminative representations learned from the large volume of source data. \[\begin{split} L([\theta_{C_{i}}^{e};\theta_{C_{1}}^{d}])=-\frac{1} {N_{i}}\sum_{i=2}^{N_{i}}\sum_{i=j}^{W}\sum_{k=1}^{H}\mathbf{y}_{ijk}\log\hat {\mathbf{y}}_{ijk}\\ +(1-\mathbf{y}_{ijk})\log(1-\hat{\mathbf{y}}_{ijk})\\ \forall i\in 2,\ldots n\end{split} \tag{2}\] The only thing that matters for target centres to communicate to the server is the fixed dimension of latent representations of an image. Thus, our architecture gives the flexibility of deploying the various sizes of networks on the client side based on available computing resources. And it is also entirely up to the target centres whether they want to initialize the parameters of the encoder using the parameters of the source domain. If the number of training examples is extremely few, then initialization using the pre-trained model's weight can prevent overfitting. ## 3 Experiments **Data sets and Evaluation Protocol:** We applied our method in two benchmark datasets: endoscopic polyp segmentation and skin lesion segmentation. The **polyp segmentation dataset** contains images collected at four different centres. Kvasir-SEG [14] data set makes the source centre (\(C_{1}\)) in our experiment. It has 800 images in the train set and 200 in the test set. These high-resolution images acquired by electromagnetic imaging systems are made available to the public by Simula Lab in Norway. Similarly, the EndoUDA-target data set makes the first target domain (\(C_{2}\)) in our experiment, consisting of 21 images in both the training and testing sets [5]. Our experiment's second target domain centre (\(C_{3}\)) consists of images from the CVC-ClinicDB dataset made available to the research community by a research team in Spain. There are 520 images in the train set and 92 in the test set. Finally, the ETIS-Larib dataset released by a laboratory in France makes our third target domain data set (\(C_{4}\)). This data set consists of 166 in the train set and 30 images in the test set. These data sets were curated at different time frames in different geographical locations. For **skin lesion segmentation**, we took data set collected at two different centres: ISIC(International Skin Imaging Collaboration) [6] and PH2 [22]. In ISIC, there are 2596 training examples and 102 test examples. 
The PH2 database is curated through a joint research collaboration between the Universidade do Porto, Tecnico Lisboa, and the Dermatology Service of Hospital Pedro Hispano in Matosinhos, Portugal. In this data set, there are only 180 training examples and 20 testing examples. We consider ISIC and PH2 source and target domain, respectively. We use the mean Intersection of Union (mIoU) and dice score for both datasets for quantitative evaluations. Qualitative comparisons also validate our idea. **Baselines:** We have compared the performance of our method with several competitive baselines, including both non-federated and federated frameworks. One of the naive baselines is to train a model for each target centre independently (_INDP_). The models of the centres with less training data overfit. Another configuration is creating a data pool by combining the training data (_COMB_) from all the centres and training a single model. However, this method does not address any of the issues regarding privacy and compliance. Another viable option is to adapt a pre-trained model to a new domain by fine-tuning the parameters of the latter layers (_FtDe_). We also compared our method with competitive feder \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \multirow{2}{*}{Data} & \multirow{2}{*}{Centres} & \multicolumn{8}{c}{mIOU} \\ \cline{3-10} & & INDP & COMB & [16] & [19] & FtDe & [9] & RandEn & FtEn \\ \hline \multirow{4}{*}{Endo.} & Kvasir-SEG (\(C_{1}\), source) & 80.3 & 81.0 & 82.3 & 73.5 & N/A & 80.5 & 80.3 & 80.3 \\ & EndoUDA (\(C_{2}\)) & 52.0 & 57.5 & 53.1 & 29.7 & 59.9 & 61.7 & 50.6 & 62.0 \\ & CVC-ClinicDB (\(C_{3}\)) & 88.3 & 87.8 & 86.8 & 74.5 & 85.8 & 83.0 & 89.1 & 88.4 \\ & ETIS-Larib (\(C_{4}\)) & 62.1 & 66.9 & 61.4 & 70.8 & 65.1 & 71.7 & 64.3 & 69.9 \\ \hline \multirow{2}{*}{Skin} & ISIC (\(C_{1}\), source) & 81.3 & 75.7 & 84.9 & N/A & N/A & NA & 81.3 & N/A \\ & PH2 (\(C_{2}\)) & 88.4 & 88.3 & 88.4 & N/A & 88.0 & NA & 89.6 & 89.4 \\ \hline \end{tabular} \end{table} Table 1: mIoU scores on Endoscopic Polyp Segmentation Data sets (upper block) and Skin Lesion Segmentation (lower block). ated learning algorithms. FedAvg [16] averages the gradients computed in every center and shares the average gradients with the clients. This method ignores the non-IID nature of data from different centres. FedDG [19] is another Federated Learning method for domain adaption published at CVPR 2021. Finally, we also compared with another recent work by Guo et al. [9] for federated learning for multi-institutional data published at CVPR 2021. Our methods have two variants: initialising clients' parameters randomly(**RandEn**) and with the source's parameters (**FtEn**) **Implementation Details and Learning Behaviour:** We implement our algorithms on PyTorch framework. All the images were resized to the dimension of \(418\times 418\). For optimization, we employ Adam optimizer with values of \(\beta 1\) and \(\beta 2\) set to \(0.9\) and \(0.999\) respectively. We initialize learning rate to 2e-4 and set the decaying of the learning rate every 25k iterations. Taking UNet as a base architecture, we train the networks for 100k iterations and save the best-performing checkpoints on the validation set and report the performance on the test set. Figure 3 summarises the learning behaviour of the different methods for the first 100k iterations on PH2 data set, a target domain for skin lesion segmentation. The solid lines are our methods, and the dashed lines are the compared methods. 
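For reference, the per-image metrics reported in Table 1 (mIoU) and in the supplementary dice-score comparison can be computed as below; this is a generic re-implementation with an assumed 0.5 threshold and smoothing constant, not the authors' evaluation script.

```python
# Simple binary-segmentation metrics (illustrative re-implementation; the
# threshold and smoothing constant are assumptions, not taken from the paper).
import numpy as np

def iou_and_dice(pred, target, threshold=0.5, eps=1e-7):
    """pred: predicted probabilities, target: {0,1} mask, same shape."""
    p = (np.asarray(pred) >= threshold)
    t = np.asarray(target).astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = (inter + eps) / (union + eps)
    dice = (2 * inter + eps) / (p.sum() + t.sum() + eps)
    return iou, dice

# the mean IoU / dice over a test set is the average of the per-image scores
pred = np.random.rand(64, 64)
mask = (np.random.rand(64, 64) > 0.5).astype(int)
print(iou_and_dice(pred, mask))
```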
The smooth curves demonstrate that our methods are easy to optimise the parameters. **Quantitative Evaluations:** Table 3 shows the quantitative performance comparison. In the table, the last two grey-shaded columns show the performance of our methods. Our method outperforms _INDP_ in every target centre. This signifies the importance of domain adaptation by our method. Compared to the other Federated Learning methods, our methods obtain the highest performance on Figure 3: Curves show the learning behaviour of different methods on PH2 data set. \(2/3\) of target centres and are competitive on the third one for endoscopic polyp segmentation. On skin lesion segmentation, our method surpassed all the compared baselines and the recent competitive Federated Learning methods. We have compared the performance on the dice score, too and obtained a similar performance (See Supplementary). We have also evaluated the performance with varying sizes of the target domain/centre encoders. Please see the appendix for the details. **Qualitative Evaluations:** Figure 4 shows the qualitative performance comparisons between the baselines and the proposed methods on the target domains. Rows 2-4 (inclusive) are from endoscopy benchmarks, and the last row is from skin benchmarks. FedAvg, our closest work, fails to generalise well on target domains (see ETIS-Larib and CVC-ClinicDB). Whereas our method is consistent in every target domain. These results further validate that our method is superior to the others. You can find more examples in the appendix section. **Computational Complexity:** For INDP and FedAvg, the parameters of both the encoders and decoders grow in O(n) with the number of centres. Similarly, for FtDe and our method, the parameters of the encoder grow in O(n), while the parameters of the decoder are constant, i.e., O(1). Although the growth of the parameters for both the encoder and decoder for COMB is O(1), it does not address any privacy concerns. From these, our method is computationally less expensive and has high privacy protection. ## 4 Conclusions In this paper, we presented a client-server Federated Learning architecture for cross-domain surgical image segmentation. Our architecture addresses the cross-domain adaptation problem without sharing the raw images. Moreover, sharing Figure 4: Qualitative Comparisons only a part of the parameters from the source domain enhances privacy protection. Extensive experiments on two benchmarks from various data centres demonstrated improved cross-domain generalisation and privacy protection over the baselines and the competitive contemporary method. ## 5 Acknowledgements This work is partly supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) [203145Z/16/Z]; Engineering and Physical Sciences Research Council (EPSRC) [EP/P027938/1, EP/R004080/1, EP/P012841/1]; The Royal Academy of Engineering Chair in Emerging Technologies scheme; and the EndoMapper project by Horizon 2020 FET (GA 863146).
2306.07413
Search for Dark Photons in Rare $Z$ Boson Decays with the ATLAS Detector
A search for events with a dark photon produced in association with a dark Higgs boson via rare decays of the Standard Model $Z$ boson is presented, using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider. The dark Higgs boson decays into a pair of dark photons, and at least two of the three dark photons must each decay into a pair of electrons or muons, resulting in at least two same-flavor opposite-charge lepton pairs in the final state. The data are found to be consistent with the background prediction, and upper limits are set on the dark photon's coupling to the dark Higgs boson times the kinetic mixing between the Standard Model photon and the dark photon, $\alpha_{D}\varepsilon^2$, in the dark photon mass range of $[5, 40]$ GeV except for the $\Upsilon$ mass window $[8.8, 11.1]$ GeV. This search explores new parameter space not previously excluded by other experiments.
ATLAS Collaboration
2023-06-12T20:41:35Z
http://arxiv.org/abs/2306.07413v2
# Search for dark photons in rare \(Z\) boson decays with the ATLAS detector ###### Abstract A search for events with a dark photon produced in association with a dark Higgs boson via rare decays of the Standard Model \(Z\) boson is presented, using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider. The dark Higgs boson decays into a pair of dark photons, and at least two of the three dark photons must each decay into a pair of electrons or muons, resulting in at least two same-flavor opposite-charge lepton pairs in the final state. The data are found to be consistent with the background prediction, and upper limits are set on the dark photon's coupling to the dark Higgs boson times the kinetic mixing between the Standard Model photon and the dark photon, \(\alpha_{\rm D}\varepsilon^{2}\), in the dark photon mass range of \([5,40]\) GeV except for the \(\Upsilon\) mass window \([8.8,11.1]\) GeV. This search explores new parameter space not previously excluded by other experiments. + Footnote †: preprint: Search for dark photons in rare \(Z\) boson decays with the ATLAS detector The ATLAS Collaboration A search for events with a dark photon produced in association with a dark Higgs boson via rare decays of the Standard Model \(Z\) boson is presented, using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider. The dark Higgs boson decays into a pair of dark photons, and at least two of the three dark photons must each decay into a pair of electrons or muons, resulting in at least two same-flavor opposite-charge lepton pairs in the final state. The data are found to be consistent with the background prediction, and upper limits are set on the dark photon's coupling to the dark Higgs boson times the kinetic mixing between the Standard Model photon and the dark photon, \(\alpha_{\rm D}\varepsilon^{2}\), in the dark photon mass range of \([5,40]\) GeV except for the \(\Upsilon\) mass window \([8.8,11.1]\) GeV. This search explores new parameter space not previously excluded by other experiments. + Footnote †: preprint: Overwhelming astrophysical evidence [1; 2; 3; 4] supports the existence of dark matter, and understanding its nature is one of the most important goals in particle physics. Dark matter is generally expected to interact very weakly with Standard Model (SM) particles. This motivates the extension of the SM with hidden or dark sectors (DS). One of the simplest examples is an additional \(U(1)_{\rm D}\) gauge symmetry associated with a gauge boson, the dark photon \(A^{\prime}\), that mediates DS-SM interactions [5; 6]. In the dark Abelian Higgs scenario, the \(U(1)_{\rm D}\) symmetry group could be spontaneously broken by a Higgs mechanism through which the dark photon acquires a mass, adding a dark Higgs boson \(h_{\rm D}\) to such models [7; 8]. The minimal \(A^{\prime}\) model has three unknown parameters: the mass of the dark photon, \(m_{A^{\prime}}\); the effective coupling of the dark photon to SM particles, \(\varepsilon\), induced via kinematic mixing with the SM photon; and the hidden-sector gauge coupling, \(\alpha_{\rm D}\), which is the coupling of the \(A^{\prime}\) to DS particles [7]. Dark photons will decay into visible SM particles, either lepton pairs or hadrons, or invisible particles of the DS. 
Constraints were placed on visible \(A^{\prime}\) decays, in the parameter space of \(m_{A^{\prime}}\) and \(\varepsilon\), by previous beam-dump, fixed-target and collider experiments [7; 9; 10; 11; 12; 13]. The dark Abelian Higgs model introduces two additional unknown parameters: the mass of the dark Higgs boson, \(m_{h_{\rm D}}\), and the mixing between \(h_{\rm D}\) and the SM Higgs boson. The Higgs-strahlung channel, where a dark photon is produced in association with a dark Higgs boson, was also explored at low-energy electron-positron colliders via \(e^{+}e^{-}\to A^{\prime}h_{\rm D}\)[14; 15]. The Higgs-strahlung channel is sensitive to \(\alpha_{\rm D}\), which is also the coupling of the \(A^{\prime}\) to the \(h_{\rm D}\). Hence, experimental evidence of a signal in this process would provide information complementary to that from direct searches for \(A^{\prime}\). This Letter presents a search for the dark photon in rare decays of the \(Z\) boson, \(Z\to A^{\prime}h_{\rm D}\), with a mass hierarchy of \(m_{A^{\prime}}+m_{h_{\rm D}}<m_{Z}\) and requiring at least two same-flavor opposite-charge lepton pairs in the final state. For the model considered [8], no mixing between the SM and dark Higgs bosons is assumed, the \(A^{\prime}\) is the lightest particle in the DS and invisible DS decays are kinematically forbidden. When kinematically allowed, the dark Higgs boson can decay into one or two on-shell \(A^{\prime}\) via \(h_{\rm D}\to A^{\prime}A^{\prime(*)}\), as illustrated in Figure 1, and the \(A^{\prime}\) in turn decays into SM fermions. The parameter space \(m_{h_{\rm D}}>m_{A^{\prime}}\) is explored in this search, giving the process \(pp\to Z\to A^{\prime}h_{\rm D}\to A^{\prime}A^{\prime}A^{\prime(*)}\). Final states with at least two on-shell \(A^{\prime}\) decaying fully leptonically, \(A^{\prime}\to\ell^{+}\ell^{-}(\ell=e,\mu)\), are used to search for the \(A^{\prime}\). In this scenario, the kinematic mixing \(\varepsilon\) is small and thus the dark photon has a total decay width narrower than \(10^{-3}\) GeV, but \(\varepsilon\) is large enough (\(\varepsilon>10^{-6}\)) to ensure that the dark photon decays promptly [16]. The \(\sqrt{s}=13\) TeV proton-proton (\(pp\)) collision data used for this analysis were recorded by the ATLAS experiment at the Large Hadron Collider (LHC) during 2015-2018. The corresponding integrated luminosity is 139 fb\({}^{-1}\)[17] after applying data quality requirements [18]. A combination of single-lepton and multi-lepton triggers [19; 20] is used. The ATLAS experiment at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a near \(4\pi\) coverage in solid angle [21, 22, 23].1 An extensive software suite [24] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment. Footnote 1: ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the center of the detector and the \(z\)-axis along the beam pipe.The \(x\)-axis points from the IP to the center of the LHC ring, and the \(y\)-axis points upward. Cylindrical coordinates (\(r\), \(\phi\)) are used in the transverse plane, \(\phi\) being the azimuthal angle around the \(z\)-axis. The pseudorapidity is defined in terms of the polar angle \(\theta\) as \(\eta=-\ln\tan\theta/2\). 
Angular distance is measured in units of \(\Delta R\equiv\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}\). Monte Carlo (MC) simulated signal samples were generated using MadGraph5_aMC@NLO[25], with matrix elements (ME) calculated at leading order (LO) in perturbative QCD and with the NNPDF3.0nlo[26] parton distribution function (PDF) set. The events were interfaced to Pythia 8.230 [27] to model the parton shower, hadronization, and underlying event, with parameter values set according to the A14 parton-shower tune [28] and using the NNPDF2.3lo[29] set of PDFs. Benchmark signal samples were generated with \(\alpha_{\mathrm{D}}=0.1\) and \(\varepsilon=10^{-3}\), in the mass ranges \(5\leavevmode\nobreak\ \mathrm{GeV}<m_{A^{\prime}}<40\leavevmode\nobreak\ \mathrm{GeV}\) and \(20\leavevmode\nobreak\ \mathrm{GeV}<m_{h_{\mathrm{D}}}<70\leavevmode\nobreak\ \mathrm{GeV}\). The contribution from \(A^{\prime}\to\tau^{+}\tau^{-}\) is found to be negligible and thus not included in MC signal samples. The dominant SM background process, \(q\bar{q}\to 4\ell\), was simulated with the Sherpa 2.2.2 event generator [30]. Matrix elements were calculated at next-to-leading-order (NLO) accuracy in QCD for up to one additional parton and at LO accuracy for two and three additional parton emissions. The matrix element calculations were matched and merged with the Sherpa parton shower based on Catani-Seymoour dipole factorization [31, 32], using the MEPS@NLO prescription [33, 34, 35, 36]. An alternative \(q\bar{q}\to 4\ell\) sample for estimating the theory modeling uncertainty was generated at NLO accuracy in QCD using Powheg Box v2[37, 38, 39], interfaced to Pythia 8.186 [40] for the modeling of the parton shower, hadronization, and underlying event, with parameter values set according to the AZNLO tune [41]. The CT10 PDF set [42] was used for the hard-scattering processes, whereas the CTEQ6L1 PDF set [43] was used for the parton shower. The real higher-order electroweak contribution to \(4\ell\) production in association with two jets (which includes vector-boson scattering, but excludes processes involving the Higgs boson) was not included in the sample discussed above but was simulated separately with the Sherpa 2.2.2 generator. Sherpa 2.2.2 was also used for the \(gg\to ZZ\) process, with LO precision for zero- and one-jet final states, where a constant \(K\)-factor of 1.7 [44] is applied to account for NLO effects on the cross-section. The resonant \(H\to ZZ^{*}\to 4\ell\) process was generated independently to provide the highest possible precision. The dominant gluon-gluon fusion [45] and vector-boson fusion (VBF) [46] processes were modeled with Powheg Box v2. The gluon-gluon fusion sample used Powheg-NNLOPS [47, 48, 49, 45] to achieve inclusive NNLO QCD precision. Four or more prompt leptons can also be produced by a number of triboson processes (\(ZWW\), \(ZZW\) and \(ZZZ\)) and by \(Z\) bosons produced in association with a \(t\bar{t}\) pair (\(t\bar{t}Z\)). Samples for these triboson and \(t\bar{t}Z\) processes were simulated with Sherpa 2.2.2 and Sherpa 2.2.0, respectively. Except for the signal, all samples were produced with a detailed simulation of the ATLAS detector [50] based on Geant4[51], to produce predictions that can be compared with the data. The signal samples were produced through a simplified simulation of the ATLAS detector [50]. 
Furthermore, simulated inelastic minimum-bias events were overlaid to model additional \(pp\) collisions in the same and neighboring bunch crossings (pileup) [52]. Simulated events were reweighted to match the pileup conditions in the data. All simulated events were processed using the same reconstruction algorithms as used for data. Events are required to have a collision vertex associated with at least two charged-particle tracks, each with a transverse momentum \(p_{\mathrm{T}}\) > 0.5 \(\mathrm{GeV}\). The vertex with the highest sum of the squared transverse momenta of the associated tracks is referred to as the primary vertex. Muon candidates within the range \(|\eta|<2.5\) are reconstructed by combining the inner detector (ID) and muon spectrometer information [53]. In the region \(2.5<|\eta|<2.7\), muons can also be identified by tracks of the muon spectrometer alone. In the region \(|\eta|<0.1\), muons are identified by an ID track with \(p_{\mathrm{T}}>15\)\(\mathrm{GeV}\) associated with a compatible calorimeter energy deposit. Muons are required to have \(p_{\mathrm{T}}>3\)\(\mathrm{GeV}\) and \(|\eta|<2.7\), and satisfy the "loose" identification criterion [53]. Electrons are reconstructed from energy deposits in the electromagnetic calorimeter matched to a track in the ID [54]. Candidate electrons must have \(p_{\mathrm{T}}>4.5\)\(\mathrm{GeV}\) and \(|\eta|<2.47\), and satisfy the "loose" identification criteria [54]. All electrons and muons must be isolated and satisfy the "FixedCutLoose" and "PflowLoose_VarRad" isolation criteria [54, 55], respectively. Furthermore, electrons (muons) are required to have associated tracks satisfying \(|d_{0}|/\sigma_{d_{0}}<5\) (3) and \(|z_{0}\sin(\theta)|<0.5\) mm, where \(d_{0}\) is the transverse impact parameter relative to the beam line, \(\sigma_{d_{0}}\) is its uncertainty, and \(z_{0}\) is the longitudinal impact parameter relative to the primary vertex. Jets are reconstructed with the anti-\(k_{t}\) algorithm [56, 57] with a radius parameter of \(R=0.4\). The jet-clustering input objects are based on particle flow [58] in the ID and the calorimeter. Jets are required to have \(p_{\mathrm{T}}>30\)\(\mathrm{GeV}\) and \(|\eta|<4.5\). A jet-vertex tagger [59] is applied to jets with \(p_{\mathrm{T}}<60\)\(\mathrm{GeV}\) and \(|\eta|<2.4\) to preferentially suppress jets that originated from pileup. An overlap-removal procedure detailed in Ref. [60] is applied to the selected leptons and jets, to avoid ambiguities in the event selection and in the energy measurement of the physics objects. Candidate events are selected by requiring at least two same-flavor and opposite-charge (SFOC) lepton pairs. The four-lepton invariant mass must satisfy \(m_{4\ell}<m_{Z}-5\)\(\mathrm{GeV}\) to suppress the SM \(pp\to 4\ell\) background. If more than one lepton quadruplet is selected in an event, the one with the smallest lepton-pair mass difference \(|m_{\ell^{+}\ell^{-}}-m_{\ell^{+}\ell^{-}}|\), where \(m_{\ell^{+}\ell^{-}}\) and \(m_{\ell^{+}\ell^{-}}\) are the invariant masses of the two SFOC lepton pairs in the quadruplet, is selected. The lepton pair with the higher (lower) invariant mass is denoted by \(m_{\ell_{1}\ell_{2}}\) (\(m_{\ell_{3}\ell_{4}}\)). To ensure that both SFOC lepton pairs from a signal event originate from an \(A^{\prime}\) decay and to reduce the mispairing effect, the dilepton masses must satisfy \(m_{\ell_{3}\ell_{4}}/m_{\ell_{1}\ell_{2}}>0.85\). 
All the same- (different-)flavored leptons are required to have an angular separation of \(\Delta R>0.1\) (0.2). The two SFOC lepton pairs (and the two pairs with the alternative opposite-charge pairing, in the case of \(4e\) and \(4\mu\) final states) within a quadruplet are required to have a dilepton mass \(m_{\ell^{+}\ell^{-}}>5\)\(\mathrm{GeV}\), and events with \((m_{\Upsilon(1S)}-0.70\mathrm{~{}GeV})<m_{\ell^{+}\ell^{-}}<(m_{\Upsilon(3S)}+ 0.75\mathrm{~{}GeV})\) are vetoed to suppress the quarkonia background. Events passing the above selections, referred to as the signal region (SR), are used to search for the dark photon. The dominant background contribution in the SR is from the \(qq\to 4\ell\) process. The kinematic distributions of the \(qq\to 4\ell\) background are modeled using simulation, while the background event yield is normalized to data with the help of a control region (CR) enriched in \(qq\to 4\ell\) events. The CR is defined similarly to the SR but with \(m_{Z}-5\mathrm{~{}GeV}<m_{4\ell}<m_{Z}+5\mathrm{~{}GeV}\), and the \(m_{\ell_{5}\ell_{4}}/m_{\ell_{1}\ell_{2}}\) and \(\Upsilon\) veto requirements are not applied. The modeling of the kinematic properties of the \(qq\to 4\ell\) background is studied in a validation region (VR), which is disjoint to both the SR and the CR. The VR is defined using the same selections as for the SR except for requiring \(m_{\ell_{3}\ell_{4}}/m_{\ell_{1}\ell_{2}}<0.85\). Subleading background originates from processes involving the production of \(Z+\mathrm{jets}\), top-quark and \(WZjj\) events, with nonprompt leptons from hadron decays or misidentification of jets. A fake-factor method described in Ref. [61] is used to estimate the contributions from non-prompt leptons. The fake factor is defined as the ratio of numbers of nonprompt leptons \(N_{\mathrm{fake}}^{\mathrm{tight}}/N_{\mathrm{fake}}^{\mathrm{loose}}\), where "tight" or "loose" indicate whether those leptons pass all the requirements on the impact parameters, isolation and identification, or fail at least one of the requirements. The fake factor is measured in \(Z+\mathrm{jets}\) events, using additional leptons and not the lepton pair arising from the \(Z\) boson decay. The nonprompt-lepton background is then estimated by applying the fake factor in a region defined with the same event selection as the SR, but with at least one loose-not-tight lepton required when forming the quadruplet. Minor background contributions from \(pp\to H\to 4\ell\), the \(gg\to ZZ\to 4\ell\) continuum, and triboson and \(ttZ\) processes are estimated from simulation, and their event yield contribution is found to be about 5% in the SR. The search sensitivity is limited by statistical uncertainties. Systematic uncertainties associated with the prediction of signal and background processes are also considered. These uncertainties are either experimental or theoretical in nature, due to imperfect modeling of the detector in the simulation or the underlying physics of each process. Experimental uncertainties originate mainly from measurements of lepton energies, and lepton reconstruction and identification efficiencies. Uncertainties due to the trigger selection efficiency, pileup correction, and luminosity measurement are also considered. Overall, the total experimental uncertainty in the predicted yields is about 7% (6%) for the signal (background with prompt leptons). 
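Putting the event-level requirements described above together, a schematic and deliberately simplified re-implementation of the quadruplet selection is sketched below; it operates on toy lepton lists, is not the ATLAS analysis code, and omits the trigger, isolation, impact-parameter, angular-separation and alternative-pairing requirements.

```python
# Schematic re-implementation of the four-lepton quadruplet selection described
# above (toy inputs only; several requirements of the real analysis are omitted).
import itertools
import numpy as np

M_Z = 91.1876                                       # GeV
UPSILON_VETO = (9.4603 - 0.70, 10.3552 + 0.75)      # m_Y(1S)-0.70 .. m_Y(3S)+0.75 GeV

def inv_mass(p4s):
    e, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def select_quadruplet(leptons):
    """leptons: list of dicts with 'p4' = (E, px, py, pz) in GeV,
    'charge' (+1/-1) and 'flavor' ('e' or 'mu')."""
    # all same-flavor opposite-charge (SFOC) pairs
    pairs = [(i, j) for i, j in itertools.combinations(range(len(leptons)), 2)
             if leptons[i]['flavor'] == leptons[j]['flavor']
             and leptons[i]['charge'] * leptons[j]['charge'] < 0]
    best = None
    for (a, b), (c, d) in itertools.combinations(pairs, 2):
        if len({a, b, c, d}) < 4:
            continue                                 # the two pairs must not share leptons
        m12 = inv_mass([leptons[a]['p4'], leptons[b]['p4']])
        m34 = inv_mass([leptons[c]['p4'], leptons[d]['p4']])
        m4l = inv_mass([leptons[k]['p4'] for k in (a, b, c, d)])
        if m4l >= M_Z - 5.0:
            continue                                 # m_4l < m_Z - 5 GeV
        hi, lo = max(m12, m34), min(m12, m34)
        if lo < 5.0 or any(UPSILON_VETO[0] < m < UPSILON_VETO[1] for m in (m12, m34)):
            continue                                 # m_ll > 5 GeV and quarkonia veto
        if lo / hi <= 0.85:
            continue                                 # m_l3l4 / m_l1l2 > 0.85
        if best is None or abs(m12 - m34) < best[0]:
            best = (abs(m12 - m34), (hi, lo))        # keep the smallest mass difference
    return best                                      # None if no quadruplet passes
```

The simplification relative to the paper is that the cuts are applied before choosing the smallest-mass-difference quadruplet, whereas the analysis first selects the quadruplet and then applies the remaining requirements.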
The theoretical uncertainties of the signal, as well as the major background due to the \(qq\to 4\ell\) process, include the uncertainties from PDFs, QCD scales, and \(\alpha_{\mathrm{s}}\). The PDF uncertainty is estimated following the PDF4LHC [62] procedure. The \(\alpha_{\mathrm{s}}\) uncertainty's effect is estimated by varying the \(\alpha_{\mathrm{s}}\) value by \(\pm 0.001\). The QCD scale uncertainty's effect is estimated by varying the renormalization and factorization scales, following the procedure described in Ref. [63]. The parton showering and hadronization uncertainty is estimated for the signal by comparing the nominal Pythia 8 parton showering with the alternative Herwig 7[64, 65] algorithm. For the \(qq\to 4\ell\) background, the modeling uncertainty due to the matrix element, showering, and hadronization is obtained by comparing predictions from the nominal Sherpa sample and an alternative sample generated by Powheg Box v2 interfaced with Pythia 8. Modeling uncertainties in the \(p_{\mathrm{T}}^{Z}\) distribution for the signal process, which is simulated at LO, are also considered. The total theoretical uncertainties in the reconstructed event yields for the signal and the \(qq\to 4\ell\) background processes are estimated to be about 14% and 13%, respectively. Systematic uncertainties assigned to the fake-lepton background mainly account for differences in the composition of the events with fake leptons between \(Z+\mathrm{jets}\) events and the events in the SR, and data statistical uncertainties in the dedicated region where fake factors are applied. They are estimated to be about 51% and 41%, respectively. A simultaneous profiled binned maximum-likelihood fit [66, 67, 68] to the average invariant mass \(\bar{m}_{\ell\ell}\), \(\bar{m}_{\ell\ell}=(m_{\ell_{1}\ell_{2}}+m_{\ell_{3}\ell_{4}})/2\), of events in the SR and CR is performed to constrain uncertainties and obtain information on a possible signal. A bin width of 1 GeV is used for \(\bar{m}_{\ell\ell}\) distributions to take into account the resolution of the signal samples and data statistical uncertainties. The normalizations of both the signal and the \(qq\to 4\ell\) background are allowed to float in the fit. Systematic uncertainties described above are modeled as constrained nuisance parameters. A background-only fit is also performed and the obtained background prediction is compared with data in the VR to assess the quality of the background modeling. Table 1 shows the expected background and observed event yields in the SR, CR and VR after the background-only fit. The normalization factor of the \(qq\to 4\ell\) background is determined to be \(0.95\pm 0.08\). The \(\bar{m}_{\ell\ell}\) distributions in the SR, CR and VR are presented in Figure 2. The data are found to be consistent with the background expectation in all three regions. No significant deviation from the SM background hypothesis is observed and the largest excess of events is found around \(\bar{m}_{\ell\ell}=25\) GeV, with a significance of about \(1.6\sigma\). Exclusion limits are set using the \(\mathrm{CL_{s}}\) prescription [69]. Upper limits at 95% confidence level (CL) on the cross-section times branching fraction of the process \(pp\to Z\to A^{\prime}h_{\mathrm{D}}\to 4\ell+X\) are shown in Figure 3 as a function of \(m_{A^{\prime}}\) for different \(h_{\mathrm{D}}\) masses. The lower sensitivity in the mass range \(m_{A^{\prime}}>m_{h_{\mathrm{D}}}/2\) is due to the smaller signal acceptance for the off-shell \(A^{\prime}\). 
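For intuition about the statistical procedure described above, the following toy example performs a binned maximum-likelihood fit with a floating signal strength and background normalization; the templates, bin contents and the absence of profiled nuisance parameters are simplifications, not the analysis configuration.

```python
# Minimal illustration of a binned maximum-likelihood fit with a floating
# signal strength and background normalization (toy templates only).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
bkg_template = np.array([12., 10., 8., 6., 5., 4.])    # qq->4l shape (toy)
sig_template = np.array([0., 1., 4., 1., 0., 0.])       # narrow resonance (toy)
observed = rng.poisson(bkg_template + 2.0 * sig_template)

def nll(params):
    mu, nb = params                                     # signal strength, bkg norm
    expected = np.clip(mu * sig_template + nb * bkg_template, 1e-9, None)
    # negative log Poisson likelihood summed over bins
    return np.sum(expected - observed * np.log(expected) + gammaln(observed + 1))

fit = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, nb_hat = fit.x
print(f"fitted signal strength: {mu_hat:.2f}, background normalization: {nb_hat:.2f}")
```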
Since it is assumed that the SM and dark Higgs bosons do not mix, and that \(A^{\prime}\) is the lightest particle in the DS, the branching fraction for \(h_{\mathrm{D}}\) decay into a \(A^{\prime}\) pair, and for \(A^{\prime}\) decay into a SM fermion pair, is set to 100%. The branching fraction for \(A^{\prime}\) decay into a specific fermion pair is dependent on \(m_{A^{\prime}}\)[16, 7]. In this dark Abelian Higgs model, upper limits at 90% CL are also set on the parameter combination \(\alpha_{\mathrm{D}}e^{2}\), which scales the signal yield linearly, as shown in Figure 4. The search is sensitive to a set of \(m_{A^{\prime}}\) and \(m_{h_{\rm D}}\) masses complementary to, and higher than, those in a similar search reported by the Belle Collaboration [15]. Figure 5 shows the upper limits at 90% CL on \(\varepsilon^{2}\) as a function of \(m_{A^{\prime}}\) with different dark Higgs boson masses, with a benchmark value of \(\alpha_{\rm D}=0.1\) as used elsewhere [70, 71, 72, 73]. These are compared with recent results from the LHCb [12] and CMS [74] collaborations, using the process \(pp\to A^{\prime}\to\mu^{+}\mu^{-}\), which does not depend on \(\alpha_{\rm D}\). For \(m_{h_{\rm D}}\lesssim 60\) GeV and \(\alpha_{\rm D}\gtrsim 0.1\), the exclusion sensitivity of this search is comparable to, or better than, that of the LHCb and CMS searches. In conclusion, this Letter reports the first search for a dark photon and dark Higgs boson produced via the dark Higgs-strahlung process in rare \(Z\) boson decays at the LHC, with a final state of at least four charged leptons and using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV \(pp\) collision data recorded by the ATLAS detector. The data are found to be consistent with the background prediction. Upper limits are set on the production \begin{table} \begin{tabular}{c|c|c|c} \hline \hline SM backgrounds & SR & CR & VR \\ \hline \(qq\to 4\ell\) & 26.0 \(\pm\) 2.4 & 1555 \(\pm\) 48 & 239 \(\pm\) 15 \\ \hline Fake & 13.2 \(\pm\) 5.6 & 43 \(\pm\) 25 & 47 \(\pm\) 26 \\ \hline Others & 2.2 \(\pm\) 0.7 & 5.8 \(\pm\) 1.9 & 6.8 \(\pm\) 2.0 \\ \hline Total background & 41.3 \(\pm\) 5.3 & 1604 \(\pm\) 40 & 293 \(\pm\) 28 \\ \hline \hline Data & 44 & 1602 & 286 \\ \hline \hline Signal (\(m_{A^{\prime}}\), \(m_{h_{\rm D}}\)) = (12, 30) GeV & 5.9 \(\pm\) 0.9 & - & - \\ \hline Signal (\(m_{A^{\prime}}\), \(m_{h_{\rm D}}\)) = (25, 60) GeV & 3.5 \(\pm\) 0.6 & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Post-fit expected background and observed number of data events in the SR, CR and VR. The “Fake” background represents the contribution from non-prompt leptons, and the “Others” category combines \(gg\to ZZ\), Higgs, \(VVV\) and \(ttZ\) background contributions. The expected signal yields for two benchmark points are also shown, with cross-sections calculated with \(\alpha_{\rm D}=0.1\) and \(\varepsilon=10^{-3}\), and they are negligible in the CR and VR. Figure 2: The \(\bar{m}_{\ell\ell}\) distribution in the (a) CR, (b) VR and (c) SR for the data and post-fit background contributions. The error bands include experimental and theoretical systematic uncertainties as constrained by a background-only fit. The contributions from the production of \(qq\to 4\ell\) events are scaled by a normalization factor 0.95, from the simultaneous fit in the SR and CR. The “Others” category combines \(gg\to ZZ\), Higgs, \(VVV\) and \(ttZ\) background contributions. The “Fake” background represents the contribution from non-prompt leptons. 
cross-section times branching fraction, \(\sigma(pp\to Z\to A^{\prime}h_{\rm D}\to 4\ell+X)\), and on the dark photon coupling to the dark Higgs boson times the kinetic mixing between the Standard Model photon and the dark photon, \(\alpha_{\rm D}\varepsilon^{2}\), in the mass ranges of \(5~{}{\rm GeV}<m_{A^{\prime}}<40~{}{\rm GeV}\) and \(20~{}{\rm GeV}<m_{h_{\rm D}}<70~{}{\rm GeV}\). This search explores new regions of parameter space not previously excluded by other experiments. Figure 2 (continued): Three representative signal distributions are overlaid in the SR, assuming \(m_{h_{\rm D}}\) = 40 GeV and different values of \(m_{A^{\prime}}\). The cross-sections for these benchmark points are calculated with \(\alpha_{\rm D}=0.1\) and \(\varepsilon=10^{-3}\). Figure 3: Observed and expected upper limits at 95% CL on the production cross-section times branching fraction as a function of \(m_{A^{\prime}}\), from top left to bottom right, corresponding to the dark Higgs boson mass of 20 GeV, 30 GeV, 40 GeV, 50 GeV, 60 GeV, and 70 GeV, respectively. The green (inner) and yellow (outer) bands represent the \(\pm 1\sigma\) and \(\pm 2\sigma\) uncertainty in the expected limits. Figure 4: Observed 90% CL upper limits on \(\alpha_{\rm D}\varepsilon^{2}\), as a function of \(m_{A^{\prime}}\) with different dark Higgs boson masses, from this search (solid curves) compared with the results from Belle [15] (dashed curves). Figure 5: Observed 90% CL upper limits on \(\varepsilon^{2}\), assuming \(\alpha_{\rm D}=0.1\), as a function of \(m_{A^{\prime}}\) with different dark Higgs boson masses ranging from 20 GeV to 70 GeV. The parameter space excluded by LHCb [12] (CMS [74]) is covered by the green (gray) shaded regions. ## Acknowledgements We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; ANID, Chile; CAS, MOST and NSFC, China; Minciencias, Colombia; MEYS CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS and CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF and MPG, Germany; GSRI, Greece; RGC and Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MEiN, Poland; FCT, Portugal; MNE/IFA, Romania; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZS, Slovenia; DSI/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TENMAK, Türkiye; STFC, United Kingdom; DOE and NSF, United States of America. 
In addition, individual groups and members have received support from BCKDF, CANARIE, Compute Canada and CRC, Canada; PRIMUS 21/SCI/017 and UNCE SCI/013, Czech Republic; COST, ERC, ERDF, Horizon 2020 and Marie Sklodowska-Curie Actions, European Union; Investissements d'Avenir Labex, Investissements d'Avenir Idex and ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and MINERVA, Israel; Norwegian Financial Mechanism 2014-2021, Norway; NCN and NAWA, Poland; La Caixa Banking Foundation, CERCA Programme Generalitat de Catalunya and PROMETEO and GenT Programmes Generalitat Valenciana, Spain; Goran Gustafssons Stiftelse, Sweden; The Royal Society and Leverhulme Trust, United Kingdom. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [75]. ## Appendix Upper limits at 95% CL are also set on the branching fraction of the rare \(Z\) boson decay \(Z\to A^{\prime}h_{\rm D}\), \[\mathcal{B}(Z\to A^{\prime}h_{\rm D})=\frac{\sigma(pp\to Z\to A^{\prime}h_{\rm D}\to 4\ell+X)}{\sigma(pp\to Z)\cdot\mathcal{B}(A^{\prime}h_{\rm D}\to 4\ell+X)},\] where \(\sigma(pp\to Z\to A^{\prime}h_{\rm D}\to 4\ell+X)\) is the 95% CL upper limit taken from Figure 3, \(\mathcal{B}(A^{\prime}h_{\rm D}\to 4\ell+X)\) is the branching fraction of \(A^{\prime}\) and \(h_{\rm D}\) decaying into at least two lepton pairs, and \(\sigma(pp\to Z)\) is the measured \(Z\) boson production cross-section as described in Ref. [76] in the phase space of \(66\,\mathrm{GeV}\,<m_{\ell\ell}<116\,\mathrm{GeV}\). The branching fraction limits are shown in Figure 6. Figure 6: Observed and expected upper limits at 95% CL on the branching fraction \(\mathcal{B}(Z\to A^{\prime}h_{\rm D})\) as a function of \(m_{A^{\prime}}\), from top left to bottom right corresponding to the dark Higgs boson mass of \(20\,\mathrm{GeV}\), \(30\,\mathrm{GeV}\), \(40\,\mathrm{GeV}\), \(50\,\mathrm{GeV}\), \(60\,\mathrm{GeV}\), and \(70\,\mathrm{GeV}\), respectively. The green (inner) and yellow (outer) bands represent the \(\pm 1\sigma\) and \(\pm 2\sigma\) uncertainty in the expected limits.
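As a purely illustrative note on the appendix formula, converting a cross-section upper limit into a branching-fraction upper limit is a single division. In the sketch below every numerical input is a placeholder, not a value taken from this analysis.

```python
# Illustrative only: applying the appendix formula with placeholder inputs.
xsec_limit_fb    = 1.0      # hypothetical 95% CL limit on sigma(pp -> Z -> A'hD -> 4l+X) [fb]
sigma_pp_to_Z_fb = 2.0e6    # hypothetical measured Z production cross-section [fb]
br_4l            = 0.5      # hypothetical B(A'hD -> 4l + X) for a given (mA', mhD) point

br_limit = xsec_limit_fb / (sigma_pp_to_Z_fb * br_4l)
print(f"95% CL limit on B(Z -> A'hD) ~ {br_limit:.2e}")
```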
2304.09247
SigSegment: A Signal-Based Segmentation Algorithm for Identifying Anomalous Driving Behaviours in Naturalistic Driving Videos
In recent years, distracted driving has garnered considerable attention as it continues to pose a significant threat to public safety on the roads. This has increased the need for innovative solutions that can identify and eliminate distracted driving behavior before it results in fatal accidents. In this paper, we propose a signal-based anomaly detection algorithm that segments videos into anomalies and non-anomalies using a deep CNN-LSTM classifier to precisely estimate the start and end times of an anomalous driving event. In the anomaly detection and analysis phase, driver pose background estimation, mask extraction, and signal activity spikes are utilized. A deep CNN-LSTM classifier is then applied to candidate anomalies to detect and classify the final anomalies. In experimental validation, the proposed method achieved an overlap score of 0.5424 and ranked 9th on the public leaderboard of the AI City Challenge 2023.
Kelvin Kwakye, Younho Seong, Armstrong Aboah, Sun Yi
2023-04-13T22:38:18Z
http://arxiv.org/abs/2304.09247v1
SigSegment: A Signal-Based Segmentation Algorithm for Identifying Anomalous Driving Behaviours in Naturalistic Driving Videos ###### Abstract In recent years, distracted driving has garnered considerable attention as it continues to pose a significant threat to public safety on the roads. This has increased the need for innovative solutions that can identify and eliminate distracted driving behavior before it results in fatal accidents. In this paper, we propose a signal-based anomaly detection algorithm that segments videos into anomalies and non-anomalies using a deep CNN-LSTM classifier to precisely estimate the start and end times of an anomalous driving event. In the anomaly detection and analysis phase, driver pose background estimation, mask extraction, and signal activity spikes are utilized. A deep CNN-LSTM classifier is then applied to candidate anomalies to detect and classify the final anomalies. In experimental validation, the proposed method achieved an overlap score of **0.5424** and ranked **9th** on the public leaderboard of the AI City Challenge 2023. ## 1 Introduction In the United States, distracted driving is a serious threat to public safety, resulting in numerous fatalities each year [1]. Researchers have turned to naturalistic driving studies and computer vision techniques in order to identify and eliminate distracting driving behaviors [2, 3]. However, these methods face obstacles such as inadequate data labeling, poor data quality, and low resolution [4, 5]. Naturalistic driving studies and videos provide data on all aspects of the driver's actions, including drowsiness and distracted driving, making them a valuable tool for understanding driver behavior in real-world scenarios [6]. Traditional machine learning approaches have been used in previous studies to present various driver distraction activity analyses and feature extraction methods [7, 8]. For example, Braunagel et al. proposed using a Support Vector Machine (SVM) model to recognize driver activity using five contextual features: saccades, fixations, blinks, head position, and rotation [9, 10]. Similarly, Liang et al. used an SVM model and a logistic regression model to identify driver cognitive distraction in real time using driver eye movement and vehicle dynamics data [11]. Deep learning approaches have also been used in recent years for activity analysis and feature extraction in driver distraction detection. Vijayan, for example, proposed a deep convolutional neural network (CNN) architecture for detecting driver drowsiness based on features extracted from the driver's face, such as eye, mouth, and brow movements [12, 13, 14]. Similarly, Omerustaoglu et al. [15] used a deep neural network (DNN) to distinguish distracted from non-distracted states using features extracted from the movements of the driver's face, arms, and hands. Although these driver distraction detection systems have shown promising results in identifying anomalous driving events using traditional machine learning and deep learning approaches for activity analysis and feature extraction, more research is required to investigate the generalization and scalability of these approaches across a wide range of driving conditions and environments. In particular, the complexity and dynamic nature of driving behavior makes identifying anomalous driving events in naturalistic driving videos difficult. 
In response to this difficulty, we present SigSegment, a signal-based segmentation algorithm that employs pose estimation, signal generation, activity classification using deep learning techniques to identify anomalous driving events in natural driving videos. SigSegment uses a deep convolutional neural network (CNN) and long short-term memory (LSTM) architecture to classify anomalous driving events precisely by estimating the start and end times. The remainder of the paper is structured as follows. The second section is a review of relevant literature. Section three contains the data and proposed methodology used for this study. The fourth section discusses the model development's results. Section five concludes with a summary of the findings, and recommendations for future research. ### Objective The objective of this study is to develop a classification model that accurately identifies distracted behavior activities executed by drivers within a given time frame. ## 2 Related Works Driver distraction is a major concern in transportation safety. In recent years, researchers have used machine learning and deep learning techniques to classify driver distraction activities. In the literature, two main methods have been proposed: the single-frame-based method and the consecutive-sequence-based method [16, 17]. The single-frame method analyzes a single frame of a video feed or image and applies machine learning algorithms, such as convolutional neural networks (CNNs), to classify the driver's behavior. While this method has shown high accuracy rates for certain types of distractions, such as cell phone use, it has limitations when it comes to capturing the temporal aspect of driver behavior. Several studies have been conducted to investigate the efficacy of the single-frame-based method in detecting different types of driver distraction using only a single image or video frame. Lee et al. [18], for example, proposed a method for detecting aggressive driving using CNN. Near-infrared (NIR) light and thermal camera sensors were used to capture the driver's face, and the CNN achieved a high accuracy rate of 99.95%. Similarly, Guo et al. [19] proposed a single-frame hybrid of convolutional neural network (CNN) and long short-term memory (LSTM)-based driver drowsiness detection system. CNN was trained to recognize drowsy facial features like drooping eyelids and yawning and achieved a high accuracy rate in classifying drowsy and alert states using only a single frame of the driver's face. Liang et al. [20] proposed a real-time system for detecting driver cognitive distraction based on a single-frame Bayesian network in another study. The network was trained to model the relationship between the driver's eye movement and driving performance, and it detected cognitive distraction with an overall accuracy rate of 80.1%. Qu et al. (2020) proposed a system for detecting driver phone use that was based on a single-frame CNN that analyzed the driver's hand and arm positions and achieved a 93.3% accuracy rate. Furthermore, Zhang et al. (2019) proposed a system for detecting driver smoking based on a single-frame CNN that recognized the distinctive motion of a smoking hand and achieved 92.3% accuracy [21]. Despite the single-frame-based method's demonstrated effectiveness in detecting various types of driver distraction. 
It is critical, however, to recognize the single-frame-based method's limitations in capturing the temporal aspect of driver behavior, which the consecutive-sequence-based method may address more effectively by extracting temporal high-level features from sequential frames. The consecutive-sequence-based method entails analyzing a continuous sequence of frames or a video feed to identify patterns of behavior over time. This method, which typically employs recurrent neural networks (RNNs) and other time-series analysis techniques, has been demonstrated to be effective in identifying more complex types of driver distraction activities, such as touching the radio, or eating or drinking while driving [22]. Sun et al. [23] proposed a method for detecting driver cognitive distraction using an RNN-based approach. To detect cognitive distraction, the authors used the RNN's attention mechanism to focus on important regions of the driver's face, such as the eyes and mouth, and the proposed method detected cognitive distraction with an accuracy rate of 90.64%. In another study, Ed-Doughmi et al. [24] proposed a system for detecting driver phone use based on a consecutive-sequence-based approach. To detect phone use, the system used a two-stream RNN to analyze both the driver's hand and face movements, and the proposed method detected phone use with an accuracy rate of 92.1%. Similarly, Bai et al. [25] proposed a method for detecting driver drowsiness based on consecutive frames of the driver's face. The authors achieved an accuracy rate of 93.4% by using a two-stream Graph CNN to analyze the temporal changes in facial features associated with drowsiness, such as eye closure and head nodding [26]. Because of the rapid increase in the number of researchers and techniques, the accuracy of human driver distraction activity recognition has significantly improved. However, due to several issues such as inadequate labeling, poor data quality, low resolution, large intra-class variability, indistinct boundaries between classes, varying viewpoints, occlusions, appearance variations, the influence of environmental factors, and recording settings, this task remains difficult [27, 28]. These difficulties are exacerbated when dealing with naturalistic driving videos. In this study, we propose a signal-based anomaly detection algorithm that segments videos into anomalies and non-anomalies using a deep CNN-LSTM classifier to precisely estimate the start and end times of an anomalous driving event. ## 3 Method This section describes the dataset and the algorithm framework used in this study. Section 3.1 introduces the dataset used, and Section 3.2 describes the video pre-processing and segmentation algorithm, the CNN framework, and the LSTM framework. ### Dataset The dataset was provided by the AI City Challenge 2023. It includes 210 video clips captured from 35 drivers, for a total of approximately 34 hours. The drivers performed 16 random tasks, such as talking on the phone, eating, and reaching back, which were recorded simultaneously by three cameras mounted in the car at various angles. Each driver performed the data collection tasks twice: once without any appearance block such as a hat or sunglasses, and once with such a block. This resulted in 6 videos per driver, with 3 synchronized videos without an appearance block and 3 with an appearance block. The 34 hours of video footage were divided into three datasets, labeled A1, A2, and B, containing 25, 5, and 5 drivers, respectively. 
The A1 dataset contains manually annotated ground truth labels for the start time, end time, and type of distracted behavior. The dataset A2 contains no labels and Dataset B is for testing. ### The Proposed Framework The proposed framework for the Naturalistic Driving Action Recognition consists of several steps, beginning with video data input, pose estimation, signal generation, activity classification and postprocessing as shown in Figure 1. **Posture Estimation**. Posture estimation is the process of determining the position and orientation of the driver's body. This information is important for identifying unusual driving behaviors, as changes in posture may indicate that the driver is becoming distracted or fatigued. To estimate the driver's posture from the video, we find the mean \(\bar{X}\) of all frames in the video, where \(X_{i}\) denotes the posture of the driver in the \(i\)-th frame (see Equation 1). The assumption is that the normal driving posture \(\bar{X}\) is the most frequent and probable driver activity in any driving section. \[\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_{i} \tag{1}\] where \(n\) is the total number of frames in the video. **Signal Generation**. The estimated posture is subtracted from each frame in the video, yielding an image that is flattened and its pixel intensities summed as shown in Algorithm 1. The resulting summation values serve as a quantitative measure of how different each frame is from the expected normal posture. Specifically, frames exhibiting close adherence to the expected posture will have small summation values, while those deviating from the expected posture will have large values. This usually results in a spike when plotted. Frames corresponding to the spike are considered to potentially indicate an anomaly. ``` 1:procedureDetectAnomalies(\(X_{1},X_{2},...,X_{n},\hat{X},k\)) 2:\(A\leftarrow\varnothing\) 3:for\(i\gets 1\) to \(n\)do 4:\(Y_{i}\gets X_{i}-\hat{X}\) 5:\(V_{i}\leftarrow\sum_{j=1}^{p}Y_{i,j}\)\(\triangleright\) Flatten and sum pixel intensities 6:endfor 7:\(m\leftarrow\operatorname{median}(V_{1},V_{2},...,V_{n})\)\(\triangleright\) Compute median 8:\(S\leftarrow\operatorname{std}(V_{1},V_{2},...,V_{n})\)\(\triangleright\) Compute standard deviation 9:\(\textit{thresh}\gets m+k\cdot S\)\(\triangleright\) Set anomaly detection threshold 10:for\(i\gets 1\) to \(n\)do 11:if\(V_{i}>\textit{thresh}\)then 12:\(A\gets A\cup X_{i}\)\(\triangleright\) Add potential anomaly to set \(A\) 13:endif 14:endfor 15:return\(A\) 16:endprocedure ``` **Algorithm 1** Anomaly Detection based on Posture Estimation **Classification**. The classification model consists of two components: a convolutional neural network (CNN) to extract information from the image sequence, and a long short-term memory (LSTM) component to learn the temporal relationship between the frames in the input sequence. The results from the LSTM is passed through a fully connected neural network for the classification. **Postprocessing**. The postprocessing step involves preparing the classified anomalies into the format required by the evaluation system. ## 4 Results and Discussions The performance of our proposed model was assessed based on its ability to detect anomalies in driver distraction videos. Track 3 was evaluated based on the performance of model activity identification, as measured by the average activity overlap score, which is defined as follows. 
Given a ground-truth activity \(g\) with start time \(gs\) and end time \(ge\), the closest predicted activity match is the predicted activity \(p\) of the same class as \(g\) with the highest overlap score \(os\), with the additional condition that its start time \(ps\) and end time \(pe\) are in the ranges \([gs-10\text{s},\,gs+10\text{s}]\) and \([ge-10\text{s},\,ge+10\text{s}]\), respectively. The overlap between \(g\) and \(p\) is defined as the proportion of the time intersection to the time union of the two activities: \[os(p,g)=\frac{\max(\min(ge,pe)-\max(gs,ps),0)}{\max(ge,pe)-\min(gs,ps)} \tag{2}\] With an overlap score of **0.5424**, our proposed model was ranked **ninth** in the AI City Challenge 2023. The top-performing teams from the challenge's public leaderboard are shown in Table 1. Our findings indicate that our proposed model can effectively detect anomalies in driver distraction videos, as evidenced by its overlap score. Further enhancements can be made by experimenting with different evaluation metrics and optimizing the model's architecture. ## 5 Conclusion and Future Work In conclusion, this paper proposes a signal-based anomaly detection algorithm designed to detect distracted driving behavior, which poses a significant threat to road safety. In the anomaly detection and analysis phase, the proposed method employs driver pose background estimation, mask extraction, and signal activity spikes, which increases the accuracy of identifying anomalous driving events. Moreover, a deep CNN-LSTM classifier was applied to candidate anomalies for accurate detection and analysis of the final anomalies. The experimental validation of the proposed method in the AI City Challenge 2023 demonstrated its efficacy, with an overlap score of 0.5424 and a ranking of ninth among the competitors. These results demonstrate the potential of applying deep learning techniques, in particular CNN-LSTM, to effectively detect and analyze anomalous driving behavior. Future research could investigate the use of more sophisticated CNN-LSTM architectures or other deep learning models to enhance the detection and analysis of driving anomalies. In addition, the proposed method could be expanded to detect other types of driver distraction or incorporated into existing advanced driver assistance systems to further improve road safety.
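To make the posture-estimation and signal-generation steps of Algorithm 1 (Section 3.2) concrete, a minimal NumPy sketch is given below. It is an illustration rather than the authors' implementation: the function name, the threshold multiplier \(k\), and the synthetic clip are assumptions made purely for demonstration.

```python
import numpy as np

def detect_candidate_anomalies(frames: np.ndarray, k: float = 2.0):
    """Flag frames whose deviation from the estimated normal posture spikes.

    frames: array of shape (n_frames, height, width) with grayscale intensities.
    Returns the per-frame deviation signal and the indices of candidate anomalies.
    """
    # Posture estimation (Equation 1): the mean frame approximates the normal
    # driving posture, assuming normal driving dominates the recording.
    mean_frame = frames.mean(axis=0)

    # Signal generation (Algorithm 1, lines 4-5): subtract the estimated posture
    # from every frame, flatten, and sum the pixel intensities of the difference.
    deviations = frames - mean_frame
    signal = deviations.reshape(len(frames), -1).sum(axis=1)

    # Threshold = median + k * standard deviation (Algorithm 1, lines 7-9).
    threshold = np.median(signal) + k * signal.std()

    # Frames above the threshold become candidate anomalies, which the full
    # pipeline then passes to the CNN-LSTM classifier.
    candidate_idx = np.where(signal > threshold)[0]
    return signal, candidate_idx

# Usage on a synthetic clip: 300 frames of 64x64 noise with an injected posture change.
rng = np.random.default_rng(0)
clip = rng.normal(0.5, 0.05, size=(300, 64, 64))
clip[120:140] += 0.5
_, candidates = detect_candidate_anomalies(clip)
print(len(candidates), "candidate frames flagged")
```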
2307.10246
Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey)
Can we obtain insights about the brain using AI models? How is the information in deep learning models related to brain recordings? Can we improve AI models with the help of brain recordings? Such questions can be tackled by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures, and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic cognitive science and neuroscience research. Encoding models aim at generating fMRI brain representations given a stimulus automatically. They have several practical applications in evaluating and diagnosing neurological conditions and thus may also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a summary and discussion about future trends. Given the large amount of recently published work in the computational cognitive neuroscience (CCN) community, we believe that this survey enables an entry point for DNN researchers to diversify into CCN research.
Subba Reddy Oota, Zijiao Chen, Manish Gupta, Raju S. Bapi, Gael Jobard, Frederic Alexandre, Xavier Hinaut
2023-07-17T06:54:36Z
http://arxiv.org/abs/2307.10246v2
# Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) ###### Abstract How does the brain represent different modes of information? Can we design a system that automatically understands what the user is thinking? Such questions can be answered by studying brain recordings like functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets related to passive reading/listening/viewing of concept words, narratives, pictures and movies. Encoding and decoding models using these datasets have also been proposed in the past two decades. These models serve as additional tools for basic research in cognitive science and neuroscience. Encoding models aim at generating fMRI brain representations given a stimulus automatically. They have several practical applications in evaluating and diagnosing neurological conditions and thus also help design therapies for brain damage. Decoding models solve the inverse problem of reconstructing the stimuli given the fMRI. They are useful for designing brain-machine or brain-computer interfaces. Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, recently several neural encoding and decoding models have been proposed. In this survey, we will first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we will review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we will conclude with a brief summary and discussion about future trends. Given the large amount of recently published work in the 'computational cognitive neuroscience' community, we believe that this survey nicely organizes the plethora of work and presents it as a coherent story. ## 1 Introduction Neuroscience is the field of science that studies the structure and function of the nervous system of different species. It involves answering interesting questions like the following1. (1) How learning occurs during adolescence, and how it differs from the way adults learn and form memories. (2) Which specific cells in the brain (and what connections they form with other cells), have a role in how memories are formed? (3) How animals cancel out irrelevant information arriving from the senses and focus only on information that matters. (4) How do humans make decisions? (5) How humans develop speech and learn languages. Neuroscientists study diverse topics that help us understand how the brain and nervous system work. Footnote 1: [https://zuckermaninstitute.columbia.edu/file/5184/download?](https://zuckermaninstitute.columbia.edu/file/5184/download?) token=qzId8yR **Motivation:** The central aim of neuroscience is to unravel how the brain represents information and processes it to carry out various tasks (visual, linguistic, auditory, etc.). Deep neural networks (DNN) offer a computational medium to capture the unprecedented complexity and richness of brain activity. _Encoding_ and _decoding_ stated as computational problems succinctly encapsulate this puzzle. As the previous surveys systematically explore the brain encoding and decoding studies with respect to only language [1, 1], this survey summarizes the latest efforts in how DNNs begin to solve these problems and thereby illuminate the computations that the unreachable brain accomplishes effortlessly. 
**Brain encoding and decoding**: Two main tasks studied in cognitive neuroscience are brain encoding and brain decoding, as shown in Figure 1. Encoding is the process of learning the mapping \(e\) from the stimuli \(S\) to the neural activation \(F\). The mapping can be learned using features engineering or deep learning. On the other hand, decoding constitutes learning mapping \(d\), which predicts stimuli \(S\) back from the brain activation \(F\). However, in most cases, brain decoding aims at predicting a stimulus representation \(R\) rather than actually reconstructing \(S\). In both cases, the first step is to learn a semantic representation \(R\) of the stimuli \(S\) at the train time. Next, for encoding, a regression function \(e:R\to F\) is trained. For decoding, a function \(d:F\to R\) is trained. These functions \(e\) and \(d\) can then be used at test time to process new stimuli and brain activations, respectively. **Techniques for recording brain activations**: Popular techniques for recording brain activations include single Micro Electrode (ME), Micro-Electrode array (MEA), Electro-Cortico Graphy (ECoG), Positron emission tomography (PET), functional MRI (fMRI), Magneto-encephalography (MEG), Electro-encephalography (EEG) and Near-Infrared Spectroscopy (NIRS). These techniques differ in their spatial resolution of neural recording and temporal resolution. fMRI enable high spatial but low time resolution. Hence, they are good for examining which parts of the brain handle critical functions. fMRI takes 1-4 seconds to complete a scan. This is far lower than the speed at which humans can process language. On the other hand, both MEG and EEG have high time but low spatial resolution. They can preserve rich syntactic information [1] but cannot be used for source analysis. fNIRS are a compromise option. Their time resolution is better than fMRI, and spatial resolution is better than EEG. However, this spatial and temporal resolution balance may not compensate for the loss in both. **Stimulus Representations**: Neuroscience datasets contain stimuli across various modalities: text, visual, audio, video and other multimodal forms. Representations differ based on modality. Older methods for _text-based stimulus representation_ include text corpus co-occurrence counts, topic models, syntactic, and discourse features. In recent times, both semantic and experiential attribute models have been explored for text-based stimuli. Semantic representation models include distributed word embeddings, sentence representation models, recurrent neural networks (RNNs), and Transformer-based language models. Experiential attribute models represent words in terms of human ratings of their degree of association with different attributes of experience, typically on a scale of 0-6 or binary. Older methods for _visual stimulus representation_ used visual field filter bank and Gabor wavelet pyramid for visual stimuli, but recent methods use models like ImageNet-pretrained convolutional neural networks (CNNs) and concept recognition methods. For _audio stimuli_, phoneme rate and the presence of phonemes have been leveraged, besides deep learning models like SoundNet. Finally, for multimodal stimulus representations, researchers have used both early fusion and late fusion deep learning methods. In the early fusion methods, information across modalities is combined in the early steps of processing. While in late fusion, the combination is performed only at the end. 
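Both the encoding map \(e:R\to F\) and the decoding map \(d:F\to R\) introduced above are most often instantiated in the surveyed work as regularized (ridge) linear regressions. The sketch below illustrates this on synthetic data; the dimensions, the use of scikit-learn's Ridge, and all variable names are illustrative assumptions rather than the setup of any specific cited study.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 200, 768, 1000   # e.g. LM features vs fMRI voxels

R = rng.normal(size=(n_samples, n_features))        # stimulus representations R
W = 0.1 * rng.normal(size=(n_features, n_voxels))   # toy "true" mapping
F = R @ W + rng.normal(scale=0.5, size=(n_samples, n_voxels))  # brain activations F

train, test = slice(0, 150), slice(150, 200)

# Encoding e : R -> F, one regularized linear model predicting all voxels.
encoder = Ridge(alpha=10.0).fit(R[train], F[train])
F_pred = encoder.predict(R[test])

# Decoding d : F -> R, predicting the stimulus representation from activations.
decoder = Ridge(alpha=10.0).fit(F[train], R[train])
R_pred = decoder.predict(F[test])

# Voxel-wise correlation on held-out samples (subsampled voxels for brevity).
corrs = [np.corrcoef(F[test][:, v], F_pred[:, v])[0, 1] for v in range(0, n_voxels, 50)]
print("mean held-out encoding correlation:", float(np.mean(corrs)))
```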
We discuss stimulus representation methods in detail in Sec. 2. **Naturalistic Neuroscience Datasets**: Several neuroscience datasets have been proposed across modalities (see Figure 2). These datasets differ in terms of the following criteria: (1) Method for recording activations: fMRI, EEG, MEG, etc. (2) Repetition time (TR), i.e. the sampling rate. (3) Characteristics of fixation points: location, color, shape. (4) Form of stimuli presentation: text, video, audio, images, or other multimodality. (5) Task that participant performs during recording sessions: question answering, property generation, rating quality, etc. (6) Time given to participants for the task, e.g., 1 minute to list properties. (7) Demography of participants: males/females, sighted/blind, etc. (8) Number of times the response to stimuli was recorded. (9) Natural language associated with the stimuli. We discuss details of proposed datasets in Sec. 3. **Brain Encoding**: Other than using the standard stimuli representation architectures, brain encoding literature has focused on studying a few important aspects: (1) Which models lead Figure 1: Computational Cognitive Neuroscience of Brain Encoding and Decoding: Datasets & Stimulus Representations to better predictive accuracy across modalities? (2) How can we disentangle the contributions of syntax and semantics from language model representations to the alignment between brain recordings and language models? (3) Why do some representations lead to better brain predictions? How are deep learning models and brains aligned in terms of their information processing pipelines? (4) Does joint encoding of task and stimulus representations help? We discuss these details of encoding methods in Sec. 5. **Brain Decoding**: Ridge regression is the most popular brain decoder. Recently, a fully connected layer [1] or multi-layered perceptrons (MLPs) [23] have also been used. While older methods attempted to decode to a vector representation using stimuli of a single mode, newer methods focus on multimodal stimuli decoding [13, 12]. Decoding using Transformers [11, 12, 13], and decoding to actual stimuli (word, passage, image, dialogues) have also been explored. We discuss details of these decoding methods in Sec. 6. **Computational Cognitive Science (CCS) Research goals**: CCS researchers have primarily focused on two main areas [14] (also, see Figure 3). (1) Improving predictive Accuracy. In this area, the work is around the following questions. (a) Compare feature sets: Which feature set provides the most faithful reflection of the neural representational space? (b) Test feature decodability: "Does neu Figure 3: Alignment between deep learning systems and human brains [12]. Figure 2: Representative Samples of Naturalistic Brain Dataset: (LEFT) Brain activity recorded when subjects are reading and listening to the same narrative [1], and (RIGHT) example naturalistic image stimuli from various public repositories: BOLD5000 [13], SSFMRI [1], and VIM-1 [14]. ral data Y contain information about features X?" (c) Build accurate models of brain data: The aim is to enable simulation of neuroscience experiments. (2) Interpretability. In this area, the work is around the following questions. (a) Examine individual features: Which features contribute most to neural activity? (b) Test correspondences between representational spaces: "CNNs vs ventral visual stream" or "Two text representations". 
(c) Interpret feature sets: Do features X, generated by a known process, accurately describe the space of neural responses Y? Do voxels respond to a single feature or exhibit mixed selectivity? (d) How does the mapping relate to other models or theories of brain function? We discuss some of these questions in Sections 5 and 6. ## 2 Stimulus Representations In this section, we discuss types of stimulus representations that have been proposed in the literature across different modalities: text, visual, audio, video and other multimodal stimuli. **Text Stimulus Representations**: Older methods for text-based stimuli representation include text corpus co-occurrence counts [16, 15, 17], topic models [15], syntactic features and discourse features [23]. In recent times, for text-based stimuli, both semantic models as well as experiential attribute models have been explored. Semantic representation models include word embedding methods [15, 16, 17, 18, 19, 20], sentence representations models (see Figure 4) [23, 24, 25], RNNs [10, 26] and Transformer methods [16, 17, 18, 19, 20, 21, 22]. Popular word embedding methods include textual (i.e., Word2Vec, fastText, and GloVe), linguistic (i.e., dependency), conceptual (i.e., RWSGwn and ConceptNet), contextual (i.e., ELMo). Popular sentence embedding models include average, max, content of avg and max, SIF, fairseq, skip, GenSen, InferSent, ELMo, BERT, RoBERTa, USE, QuickThoughts and GPT-2. Transformer-based methods include pretrained BERT with various NLU tasks, finetuned BERT, Transformer-XL, GPT-2, BART, BigBird, LED, and LongT5. Experiential attribute models represent words in terms of human ratings of their degree of association with different attributes of experience, typically on a scale of 0-6 [1, 1, 19, 17] or binary [1, 20]. **Visual Stimulus Representations**: For visual stimuli, older methods used visual field filter bank [14] and Gabor wavelet pyramid [15, 16]. Recent methods use models like CNNs [23, 24, 25, 26] and concept recognition models [1]. **Audio Stimuli Representations**: For audio stimuli, phoneme rate and presence of phonemes have been leveraged [17]. Recently, authors in [20] used features from an audio deep learning model called SoundNet for audio stimuli representation. **Multimodal Stimulus Representations**: To jointly model the information from multimodal stimuli, recently, various multimodal representations have been used. These include processing videos using audio-image representations like VGG+SoundNet [20] or using image+text combination models like GloVe+VGG and ELMo+VGG in [26]. Recently, the usage of multimodal text+vision models like CLIP, LXMERT, and VisualBERT was proposed in [1]. ## 3 Naturalistic Neuroscience Datasets We discuss the popular text, visual, audio, video and other multimodal neuroscience datasets that have been proposed in the literature. Table 1 shows a detailed overview of brain recording type, language, stimulus, number of subjects (\(|\)S\(|\)) and the task across datasets of different modalities. Figure 2 shows examples from a few datasets. **Text Datasets**: These datasets are created by presenting words, sentences, passages or chapters as stimuli. Some of the text datasets include Harry Potter Story [20], ZUCO EEG [12] and datasets proposed in [1, 19, 20, 21]. In [19, 20], participants were asked to verbally enumerate in one minute the properties (features) that describe the entities the words refer to. 
There were four groups of participants: 5 sighted individuals were presented with a pictorial form of the nouns, 5 sighted individuals with a verbal-visual (i.e., written Italian words) form, 5 sighted individuals with a verbal auditory (i.e., spoken Italian words) form, and 5 congenitally blind with a verbal auditory form. Data proposed by [1] contains 70 Italian words taken from seven taxonomic categories (abstract, attribute, communication, event/action, person/social role, location, object/tool) in the law and music domain. The word list contains concrete as well as abstract words. ZUCO dataset [12] contains sentences for which fMRIs were obtained for 3 tasks: normal reading of movie reviews, normal reading of Wikipedia sentences and task-specific reading of Wikipedia sentences. For this dataset curation, sentences were presented Figure 4: Language Model to the subjects in a naturalistic reading scenario. A complete sentence is presented on the screen. Subjects read each sentence at their own speed, i.e., the reader determines for how long each word is fixated and which word to fixate next. **Visual Datasets**: Older visual datasets were based on binary visual patterns [13]. Recent datasets contain natural images. Examples include Vim-1 [11], BOLD5000 [15], Algonauts [10], NSD [14], Things-data[1], and the dataset proposed in [1]. BOLD5000 includes \(\sim\)20 hours of MRI scans per each of the four participants. 4,916 unique images were used as stimuli from 3 image sources. Algonauts contains two sets of training data, each consisting of an image set and brain activity in RDM format (for fMRI and MEG). Training set 1 has 92 silhouette object images, and training set 2 has 118 object images with natural backgrounds. Testing data consists of 78 images of objects on natural backgrounds. Most of the visual datasets involve passive viewing, but the dataset in [1] involved the participant doing the one-back repetition detection task. **Audio Datasets**: Most of the proposed audio datasets are in English [11, 1, 12], while there is one [11] on Italian. The participants were involved in a variety of tasks while their brain activations were measured: Property generation [11, 12], passive listening [11, 12], question answering [1] and imagining themselves personally experiencing common scenarios [13]. In the last one, participants underwent fMRI as they reimagined the scenarios (e.g., resting, reading, writing, bathing, etc.) when prompted by standardized cues. Narratives [14] used 17 different stories as stimuli. Across subjects, it is 6.4 days worth of recordings. **Video Datasets**: Recently, video neuroscience datasets have also been proposed. 
These include BBC's Doctor Who [15], Japanese Ads [15], Pippi \begin{table} \begin{tabular}{|p{11.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multicolumn{2}{|c|}{**Dataset**} & **Authors** & **Type** & **Lang.** & **Stimulus** & **S** & **Task** \\ \hline \multirow{10}{*}{} & Harry Potter & [14] & MRU & English & Reading Chapter 9 of Harry Potter and the Soreerer’s Stone & 9 & Story understanding \\ \cline{2-7} & & MRU & MEG & & & & \\ \cline{2-7} & & [11] & fMRI & Italian & Verbal, pictorial or auditory presentation of 40 concrete nouns, four times & 20 & Property Generation \\ \cline{2-7} & & [11] & fMRI & Italian & Reading 70 concrete and abstract nouns from two music, five times & 7 & Imaging a situation with noun \\ \cline{2-7} & ZuCo & [11] & fMRI & English & Reading 1107 sentences with 21,629 words from movie reviews & 12 & Rate movie quality \\ \cline{2-7} & 240 Sentences with Content Words & [1] & fMRI & English & Reading 240 active voice sentences describing everyday situations & 14 & Passive reading \\ \cline{2-7} & BCCW-EEG & [14] & EEG & Japanese & Reading 20 newspaper articles for \(\sim\)30-40 minutes & 40 & Passive reading \\ \cline{2-7} & Subset Moh Radio Hour & [15] & fMRI & English & Reading 11 stories & 9 & Passive reading and Listening \\ \cline{2-7} & & [11] & fMRI & - & Viewing routing edges (8 times), expanding/contracting rings (8 times), rotating 36 Gabor filters (4 times), grad (36 times) & 9 & Passive viewing \\ \cline{2-7} & Vim-1 & [11] & MRU & - & Viewing sequences of 1870 natural photos & 2 & Passive viewing \\ \cline{2-7} & Generic Object Decoder & [11] & fMRI & - & Viewing 1,200 images from 150 object categories; 50 images from 50 & 5 & Repetition detection \\ \cline{2-7} & BOLD5000 & [15] & fMRI & - & Viewing 5254 images depicting real-world scenes & 4 & Passive viewing \\ \cline{2-7} & Algonauts & [14] & fMRI & - & Viewing 92 silhouette object images and 118 images of objects on natural & 15 & Passive viewing \\ \cline{2-7} & NSD & [14] & fMRI & - & Viewing 7300 natural images & 8 & Passive viewing \\ \cline{2-7} & THINGS & [11] & fMRI & - & Viewing 31188 natural images & 8 & Passive viewing \\ \cline{2-7} & THINGS & [11] & MRU & Italian & Verbal, pictorial or auditory presentation of 40 concrete nouns, 4 times & 20 & Property Generation \\ \cline{2-7} & The Moth Radio Hour & [11] & fMRI & English & Listening eleven 10-minute stories & 7 & Passive Listening \\ \cline{2-7} & The Moth Radio Hour & [11] & EEG & English & Listening Chapter of dance’s Adventures in Wonderland (2,129 & 33 & Question answering words in 84 sentences) as read by Kristen McQuillan & 26 & Inagine personal experiences \\ \cline{2-7} & & [11] & fMRI & English & Listening one of 20 scenarios names, 5 times & 345 & Passive Listening \\ \cline{2-7} & Narratives & [14] & fMRI & English & Listening 27 diverse naturalistic spoken stories. 891 functional scans & 345 & Passive Listening \\ \cline{2-7} & Natural Stories & [14] & fMRI & English & Listening Mook-Radio-Hour naturalistic spoken stories. & 19 & Passive Listening \\ \cline{2-7} & The Little Pinace & [14] & fMRI & English & Listening audiobook for about 100 minutes. & 112 & Passive Listening \\ \cline{2-7} & MEG-MASC & [11] & MEG & English & Listening two hours of naturalistic stories. 208 MED sensors & 1 & Passive listening \\ \cline{2-7} & BBC's Doctor Who & [15] & fMRI & English & Viewing spatiotemporal visual and auditory videos (30 episodes). 
120.8 whole-brain volumes (\(\sim\)23) of using single-spreading and data, and 12 values (11 min) of repeated narrative short episodes. 22 repetitions & 1 & Passive viewing \\ \cline{2-7} & Japanese Ads & [15] & fMRI & Japanese Video 88 and 2427 NYPT Japanese ad movies (15-308). 7200 train and 1200 test m/MRIs for web; MRIs & MRIs & MRIs & MRIs & MRIs \\ \cline{2-7} & Pippi Langkous & [14] & ECG & Swedish Viewing 30 \(\sim\) 35 concepts of a feature film (in total, 6.5 min long), edited & 37 & Passive viewing \\ \cline{2-7} & Algonauts & [14] & fMRI & English & Viewing 1000 short video clips (3 sec each) & 10 & Passive viewing \\ \cline{2-7} & Natural Stories & [11] & fMRI & English & Weaching natural short movie clips & 5 & Passive viewing \\ \cline{2-7} & Natural Stot Clips & [11] & fMRI & English & Watching 170 natural short video clips & 10 & Passive viewing \\ \cline{2-7} & 60 Concrete Nouns & [14] & fMRI & English & Viewing 60 different word-picture pairs from 12 categories, 6 times each & 9 & Passive viewing \\ \cline{2-7} & & [14] & MEG & English & Reading 0 concrete nouns along with line drawings. 20 questions per & 9 & Question answering \\ \cline{2-7} & & [14] & MEG & English & Reading 0 concrete nouns (audisovisual word and picture stimuli: bunny, bear, hark, dog, mouth, food, hand, and nose; 12 times repeated) & 24 & Passive viewing and listening \\ \cline{2-7} & & [14] & fMRI & English & Viewing 180 Words with Picture, Sentences, word clouds; reading 96 & 16 & Passive viewing and reading text passages; 72 passages. 3 times repeated. & 24 & Passive viewing and listening \\ \cline{2-7} & & [14] & fMRI & Chinese & Viewing and listening 50 concrete nouns from 10 semantic categories. & 7 & Passive viewing and listening \\ \cline{2-7} & Neuromond & [1] & fMRI & English & Watching TV series (Friends, MovieID) & 6 & Passive viewing and listening \\ \hline \end{tabular} \end{table} Table 1: Naturalistic Neuroscience Datasets Langkous [1] and Algonauts [1]. Japanese Ads data contains data for two sets of movies were provided by NTT DATA Corp: web and TV ads. There are also four types of cognitive labels associated with the movie datasets: scene descriptions, impression ratings, ad effectiveness indices, and ad preference votes. Algonauts 2021 contains fMRIs from 10 human subjects that watched over 1,000 short (3 sec) video clips. **Other Multimodal Datasets**: Finally, beyond the video datasets, datasets have also been proposed with other kinds of multimodality. These datasets are audiovisual [1, 2], words associated with line drawings [15, 16], pictures along with sentences and word clouds [12]. These datasets have been collected using a variety of methods like fMRIs [15, 12], MEG [10] and fNIRS [14, 15]. Specifically, in [16], subjects were asked to perform a QA task, while their brain activity was recorded using MEG. Subjects were first presented with a question (e.g., "Is it manmade?"), followed by 60 concrete nouns, along with their line drawings, in a random order. For all other datasets, subjects performed passive viewing and/or listening. ## 4 Evaluation Metrics Two metrics are popularly used to evaluate brain encoding models: 2V2 accuracy [13, 14] and Pearson Correlation [17], as shown in Figure 5. They are defined as follows. Given a subject and a brain region, let \(N\) be the number of samples. Let \(\{Y_{i}\}_{i=1}^{N}\) and \(\{\hat{Y}_{i}\}_{i=1}^{N}\) denote the actual and predicted voxel value vectors for the \(i^{th}\) sample. 
Thus, \(Y\in R^{N\times V}\) and \(\hat{Y}\in R^{N\times V}\) where \(V\) is the number of voxels in that region. **2V2 Accuracy** is computed as \(\frac{1}{\binom{N}{2}}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}I[\{\text{cosD}(Y_{i},\hat{Y}_{i})+\text{cosD}(Y_{j},\hat{Y}_{j})\}<\{\text{cosD}(Y_{i},\hat{Y}_{j})+\text{cosD}(Y_{j},\hat{Y}_{i})\}]\), where cosD is the cosine distance function and \(I[c]\) is an indicator function such that \(I[c]=1\) if \(c\) is true, else it is 0. The higher the 2V2 accuracy, the better. **Pearson Correlation** is computed as \(\text{PC}=\frac{1}{N}\sum_{i=1}^{N}\text{corr}[Y_{i},\hat{Y}_{i}]\), where corr is the correlation function. Brain decoding methods are evaluated using popular metrics like pairwise and rank accuracy [12, 16]. Other metrics used for brain decoding evaluation include the R\({}^{2}\) score, mean squared error, and the Representational Similarity Matrix [1, 15]. **Pairwise Accuracy**: To measure the pairwise accuracy, the first step is to predict all the test stimulus vector representations using a trained decoder model. Let S = [S\({}_{0}\), S\({}_{1}\),\(\cdots\),S\({}_{n}\)] and \(\hat{S}\) = [\(\hat{S}_{0}\), \(\hat{S}_{1}\),\(\cdots\),\(\hat{S}_{n}\)] denote the "true" (stimuli-derived) and predicted stimulus representations for the test instances, respectively. Given a pair \((i,j)\) with \(i<j\), the score is 1 if \(corr\)(S\({}_{i}\),\(\hat{S}_{i}\)) + \(corr\)(S\({}_{j}\),\(\hat{S}_{j}\)) > \(corr\)(S\({}_{i}\),\(\hat{S}_{j}\)) + \(corr\)(S\({}_{j}\),\(\hat{S}_{i}\)), else 0, where \(corr\) denotes the Pearson correlation. The final pairwise matching accuracy per participant is the average of scores across all pairs of test instances. For computing rank accuracy, we first compare each decoded vector to all the "true" stimuli-derived semantic vectors and rank them by their correlation. The classification performance reflects the rank \(r\) of the stimuli-derived vector for the correct word/picture/stimulus: \(1-\frac{r-1}{\#\text{instances}-1}\). The final accuracy value for each participant is the average rank accuracy across all instances. Figure 5: Evaluation Metrics for Brain Encoding and Decoding. (LEFT) Pearson Correlation, (MIDDLE) 2V2 Accuracy [13], and (RIGHT) Pairwise Accuracy. ## 5 Brain Encoding Encoding is the learning of the mapping from the stimulus domain to the neural activation. The quest in brain encoding is for "reverse engineering" the algorithms that the brain uses for sensation, perception, and higher-level cognition. Recent breakthroughs in applied NLP enable reverse engineering the language function of the brain. Similarly, pioneering results have been obtained for reverse engineering the function of the ventral visual stream in object recognition, founded on the advances and remarkable success of deep CNNs. The overall schema of building a brain encoder is shown in Figure 6. Initial studies on brain encoding focused on smaller datasets and a single modality of brain responses. Early models used word representations [13]. Rich contextual representations derived from RNNs such as LSTMs resulted in superior encoding models [17, 16] of narratives. The recent efforts are aimed at utilizing the internal representations extracted from transformer-based language models such as ELMo, BERT, GPT-2, etc., for learning encoding models of brain activation [11, 12]. High-grain details such as lexical, compositional, syntactic, and semantic representations of narratives are factorized from transformer-based models and utilized for training encoding models. 
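The 2V2 accuracy and pairwise accuracy metrics defined in Section 4, which are used to evaluate such encoding and decoding models, can be written compactly in code. The following is a minimal NumPy sketch; the function names and the toy data are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import cosine

def two_vs_two_accuracy(Y_true, Y_pred):
    """Fraction of test pairs (i, j) for which the matched true/predicted voxel
    vectors are closer (in cosine distance) than the mismatched pairing."""
    n, hits = len(Y_true), 0
    for i, j in combinations(range(n), 2):
        matched    = cosine(Y_true[i], Y_pred[i]) + cosine(Y_true[j], Y_pred[j])
        mismatched = cosine(Y_true[i], Y_pred[j]) + cosine(Y_true[j], Y_pred[i])
        hits += matched < mismatched
    return hits / (n * (n - 1) / 2)

def pairwise_accuracy(S_true, S_pred):
    """Same idea for decoding, but scored with Pearson correlation (higher is better)."""
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    n, hits = len(S_true), 0
    for i, j in combinations(range(n), 2):
        matched    = corr(S_true[i], S_pred[i]) + corr(S_true[j], S_pred[j])
        mismatched = corr(S_true[i], S_pred[j]) + corr(S_true[j], S_pred[i])
        hits += matched > mismatched
    return hits / (n * (n - 1) / 2)

# Toy check: predictions equal to the targets plus small noise should score near 1.
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 100))
print(two_vs_two_accuracy(Y, Y + 0.1 * rng.normal(size=Y.shape)))
print(pairwise_accuracy(Y, Y + 0.1 * rng.normal(size=Y.shape)))
```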
The resulting models are better able to disentangle the corresponding brain responses in fMRI [12]. Finally, is has been found that the models that integrate task and stimulus representations have significantly higher prediction performance than models that do not account for the task semantics [13, 14]. Similarly, in vision, early models focused on independent models of visual processing (object classification) using CNNs [23]. Recent efforts in visual encoding models focus on using richer visual representations derived from a variety of computer vision tasks [24]. Instead of feed-forward deep CNN models, using shallow recurrence enabled better capture of temporal dynamics in the visual encoding models [15, 16]. Table 2 summarizes various encoding models proposed in the literature related to textual, audio, visual, and multimodal stimuli. Figure 7 classifies the encoding literature along various stimulus domains such as vision, auditory, multimodal, and language and the corresponding tasks in each domain. **Linguistic Encoding**: A number of previous works have investigated the alignment between pretrained language models and brain recordings of people comprehending language. Huth et al. [10] have been able to identify brain ROIs (Regions of Interest) that respond to words that have a similar meaning and have thus built a "semantic atlas" of how the human brain organizes language. Many studies have shown accurate results in mapping the brain activity using neural distributed word embeddings for linguistic stimuli [11, 12, 13, 14]. Unlike earlier models where each word is represented as an independent vector in an embedding space, [13] built encoding models using rich contextual representations derived from an LSTM language model in a story listening task. With these contextual representations, they demonstrated dissociation in brain activation - auditory cortex (AC) and Broca's area in shorter context whereas left Temporal-Parietal junction (TPJ) in longer context. [10] presents the first multimodal framework for evaluating six types of word embedding (Word2Vec, WordNet2Vec, GloVe, FastText, ELMo, and BERT) on 15 datasets, including eye-tracking, EEG and fMRI signals recorded during language processing. With the recent advances in contextual representations in NLP, few studies incorporated them in relating sentence embeddings with brain activity patterns [24, 11, 12]. More recently, researchers have begun to study the alignment of language regions of the brain with the layers of language models and found that the best alignment was achieved in the middle layers of these models [13, 14, 15]. Schrimpf et al. [16] examined the relationship between 43 diverse state-of-the-art language models. They also studied the behavioral signatures of human language processing in the form of self-paced reading times, and a range of linguistic functions assessed via standard engineering tasks from NLP. They found that Transformer-based models perform better than RNNs or word-level embedding models. Larger-capacity models perform better than smaller models. Models initialized with random weights (prior to training) perform surprisingly similarly in neural predictivity as compared to final trained models, suggesting that network architecture contributes as much or more than experience dependent learning to a model's match to the brain. Antonello et al. 
[16] proposed a "language representation embedding space" and demonstrated the effectiveness of the features from this embedding in predicting fMRI responses to linguistic stimuli. **Disentangling the Syntax and Semantics**: The representations of transformer models like BERT, GPT-2 have been Figure 6: Schema for Brain Encoding shown to linearly map onto brain activity during language comprehension. Several studies have attempted to disentangle the contributions of different types of information from word representations to the alignment between brain recordings and language models. Wang et al. (2020) proposed a two-channel variational autoencoder model to dissociate sentences into semantic and syntactic representations and separately associate them with brain imaging data to find feature-correlated brain regions. To separate each syntactic feature, Zhang et al. (2022) proposed a feature elimination method, called Mean Vector Null space Projection. Compared with word representations, word syntactic features (parts-of-speech, named entities, semantic roles, dependencies) seem to be distributed across brain networks instead of a local brain region. In the previous two studies, we do not know whether all or any of these representations effectively drive the linear mapping between language models (LMs) and the brain. Toneva et al. (2022) presented an approach to disentangle supra-word meaning from lexical meaning in language models and showed that supra-word meaning is predictive of fMRI recordings in two language regions (anterior and posterior temporal lobes). Caucheteux et al. (2021) proposed a taxonomy to factorize the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. They found that (1) Compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices. (2) Contrary to previous claims, syntax and semantics are not associated with separated modules, but, instead, appear to share a common and distributed neural substrate. While previous works studied syntactic processing as cap \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Stimuli** & **Authors** & **Dataset** & **Lang.** & **Stimulus Representations** & \(|\)\(\mathbf{S}|\) & **Dataset** & **Model** \\ \hline \multirow{11}{*}{[1]} & [Main and Huh, 2018] & fMRI & English & LSTM & 6 & Subset Moth Radio Hour & Ridge \\ \cline{2-8} & [Toneva and Webe, 2019] & fMRI / MEG & English & ELMo, BERT, Transformer-XL & 9 & Story understanding & Ridge \\ \cline{2-8} & [Toneva _et al._, 2020] & MEG & English & BERT & 9 & Question-Answering & Ridge \\ \cline{2-8} & [Schrimpf, 2021b] & fMRI/ECG & English & 43 language models (e.g. 
GloVe, ELMo, BERT, GPT-2, XLNET) & 20 & Neural architecture of language & Ridge \\ \cline{2-8} & [Gautier and Levy, 2019] & fMRI & English & BERT, line-tuned NLP tasks (Sentiment, Natural language inference), Scrambling language model & 7 & Imagine a situation with the noun & Ridge \\ \cline{2-8} & [Deniz _et al._, 2019] & fMRI & English & GloVe & 9 & Subset Moth Radio Hour & Ridge \\ \cline{2-8} & [Jain _et al._, 2020] & fMRI & English & LSTM & 6 & Subset Moth Radio Hour & Ridge \\ \cline{2-8} & [Caucheteux _et al._, 2021] & fMRI & English & GPT-2, Basic syntax features & 345 & Narratives & Ridge \\ \cline{2-8} & [Antonello _et al._, 2021] & fMRI & English & GloVe, BERT, GPT-2, Machine Translation, POS tasks & 6 & Moth Radio Hour & Ridge \\ \cline{2-8} & [Reddy and Wehee, 2021] & fMRI & English & Constitutive, Basic syntax features and BERT & 8 & Harry Potter & Ridge \\ \cline{2-8} & [Goldstein _et al._, 2022] & fMRI & English & GloVe, GPT-2 next word, pre-onset, post-onset & 8 & FC6G & \\ \cline{2-8} & [Goldstein _et al._, 2022] & fMRI & English & BERT and GLUE tasks & 82 & Pereira \& Narratives & Ridge \\ \cline{2-8} & [Mona _et al._, 2022] & fMRI & English & ENSTM, ELMo, Longformer & 82 & Narratives & Ridge \\ \cline{2-8} & [Merlin and Toneva, 2022] & fMRI & English & BERT, Next word prediction, multi-word semantics, & 8 & Harry Potter & Ridge \\ \cline{2-8} & [Toneva _et al._, 2022] & fMRI/ MEG & English & ELMo, BERT, Context Residuals & 8 & Harry Potter & Ridge \\ \cline{2-8} & [Aw and Toneva, 2022] & fMRI & English & BART, Longformer, Long-T5, BigBird, and corresponding Booksum models as well & 8 & Passive reading & Ridge \\ \cline{2-8} & [Zhang _et al._, 2022b] & fMRI & English & Noise Count & 19, 12 & Zhang & Ridge \\ \cline{2-8} & [Mona _et al._, 2023a] & fMRI & English & Constitutive, Dependency trees, Basic syntax features and BERT & 82 & Narratives & Ridge \\ \cline{2-8} & [Oota _et al._, 2023b] & MEG & English & Basic syntax features, GloVe and BERT & 8 & MEG-MASC & Ridge \\ \cline{2-8} & [Tickute _et al._, 2023] & fMRI & English & BERT-Large, GPT-2 XL & 12 & Reading Sentences & Ridge \\ \cline{2-8} & [Kauf _et al._, 2023] & fMRI & English & BERT-Large, GPT-2 XL & 12 & Pereira & Ridge \\ \cline{2-8} & [Singh _et al._, 2023] & fMRI & English & BERT-Large, GPT-2 XL, Text Perturbations & 5 & Pereira & Ridge \\ \cline{2-8} & [Wang _et al._, 2019] & fMRI & & 21 downstream vision tasks & 4 & BOLD 5000 & Ridge \\ \cline{2-8} & [Kublius _et al._, 2019] & fMRI & CNN models AlexNet, ResNet, DenseNet & 7 & Algonauts & Ridge \\ \cline{2-8} & [Dwivedi _et al._, 2021] & fMRI & & 21 downstream vision tasks & 4 & BOLD 5000 & Ridge \\ \cline{2-8} & [Mona and Webbe, 2022] & fMRI & CNN models AlexNet & 4 & BOLD 5000 & Ridge \\ \cline{2-8} & [Conwell _et al._, 2023] & fMRI & & CNN models AlexNet & 4 & BOLD 5000 & Ridge \\ \cline{2-8} & [Miller _et al._, 2022] & fMRI & English & WA2Vec2.0 & 345 & Narratives & Ridge \\ \cline{2-8} & [Vaidya _et al._, 2022] & fMRI & English & ARC, AST, Wav2Vec2.0, and HuBERT & 7 & Moth Radio Hour & Ridge \\ \cline{2-8} & [Tuckute _et al._, 2022] & fMRI & English & 19 Speech Models (e.g. 
DeepSpeech, WA2Vec2.0, and VQ-VAE) & 19 & Passive listening & Ridge \\ \cline{2-8} & [Oota _et al._, 2023c] & fMRI & English & 5 basic and 25 deep learning based speech models & 6 & Moth Radio Hour & Ridge \\ \cline{2-8} & [Tera, CPC, APC, WA2Vec2.0, HuBERT, DistilHung & & & & \\ \cline{2-8} & [Oota _et al._, 2023d] & fMRI & English & Wav2Vec2.0 and SUPERB tasks & 82 & Narratives & Ridge \\ \cline{2-8} & [Dong and Toneva, 2023] & fMRI & English & Merlo Reserve & 5 & Neuumod & Ridge \\ \cline{2-8} & [Popham _et al._, 2021] & fMRI & English & 985D Semantic Vector & 5 & Moth Radio Hour \& Short Movie & Ridge \\ \cline{2-8} & [Oota _et al._, 2022d] & fMRI & English & CLIP, VisualBERT, LXMERT, CNNs and BERT & 5, 82 & Periez \& Narratives & Ridge \\ \cline{2-8} & [La _et al._, 2022] & fMRI & English & BiVL & 5 & Pereira \& Short Movie Clips & Ridge \\ \cline{2-8} & [Tang _et al._, 2023] & fMRI & English & BridgeTower & 5 & Moth Radio Hour \& Short Movie & Ridge \\ \cline{2-8} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 2: Summary of Representative Brain Encoding Studies tured through complexity measures (syntactic surprisal, node count, word length, and word frequency), very few have studied the syntactic representations themselves. Studying syntactic representations using fMRI is difficult because: (1) representing syntactic structure in an embedding space is a non-trivial computational problem, and (2) the fMRI signal Figure 7: Brain Encoding Survey Tree is noisy. To overcome these limitations, Reddy et al. [2021] proposed syntactic structure embeddings that encode the syntactic information inherent in natural text that subjects read in the scanner. The results reveal that syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture the processing load. Toneva et al. [2021] further examined whether the representations obtained from a language model align with different language processing regions in a similar or different way. **Linguistic properties in LMs and brains**: Understanding the reasons behind the observed similarities between language comprehension in LMs and brains can lead to more insights into both systems. Several works [Schwartz _et al._2019, Kumar _et al._2022, Aw and Toneva, 2022, Merlin and Toneva, 2022, Oota _et al._2022b] have found that using a fine-tuned BERT leads to improved brain predictions. However, it is not clear what type of information in the fine-tuned BERT model led to the improvement. It is unclear whether and how the two systems align in their information processing pipeline. Aw and Toneva [2022] used four pre-trained large language models (BART, Longformer Encoder Decoder, Big-Bird, and LongT5) and also trained them to improve their narrative understanding, using the method detailed in Figure 8. However, it is not understood whether prediction of the next word is necessary for the observed brain alignment or simply sufficient, and whether there are other shared mechanisms or information that is similarly important. Merlin and Toneva [2022] proposed two perturbations to pretrained language models that, when used together, can control for the effects of next word prediction and word-level semantics on the alignment with brain recordings. 
Specifically, they find that improvements in alignment with brain recordings in two language processing regions, the Inferior Frontal Gyrus (IFG) and the Angular Gyrus (AG), are due to next word prediction and word-level semantics. However, what linguistic information actually underlies the observed alignment between brains and language models is not clear. Recently, Oota et al. [2022e] tested the effect of a range of linguistic properties (surface, syntactic, and semantic) and found that the elimination of each linguistic property results in a significant decrease in brain alignment across all layers of BERT.

**Visual Encoding**: CNNs are currently the best class of models of the neural mechanisms of visual processing [Du _et al._2020, Beliy _et al._2019, Oota _et al._2019, Nishida _et al._2020]. How can we push these deep CNN models to capture brain processing even more stringently? Continued architectural optimization on ImageNet alone no longer seems like a viable option. Kubilius et al. [2019] proposed a shallow recurrent anatomical network, CORnet, that follows neuroanatomy more closely than standard CNNs and achieved state-of-the-art results on the Brain-Score benchmark. It has four computational areas, conceptualized as analogous to the ventral visual areas V1, V2, V4, and IT, and a linear category decoder that maps from the population of neurons in the model's last visual area to its behavioral choices. Despite the effectiveness of CNNs, it is difficult to draw specific inferences about neural information processing using CNN-derived representations from a generic object-classification CNN. Hence, Wang et al. [2019] built encoding models with individual feature spaces obtained from 21 computer vision tasks. One of the main findings is that features from 3D tasks, compared to those from 2D tasks, predict a distinct part of visual cortex.

**Auditory Encoding**: Speech stimuli have mostly been represented using encodings of text transcriptions [Huth _et al._2016] or using basic features like phoneme rate, the sum of squared FFT coefficients [Pandey _et al._2022], etc. Text transcription-based methods ignore the raw audio-sensory information completely, and basic speech feature engineering misses the benefits of transfer learning from rigorously pretrained speech DL models. Recently, several researchers have used popular deep learning models such as APC [Chung _et al._2020], Wav2Vec2.0 [Baevski _et al._2020], HuBERT [Hsu _et al._2021], and Data2Vec [Baevski _et al._2022] for encoding speech stimuli. Millet et al. [2022] used the self-supervised model Wav2Vec2.0 to learn latent representations of the speech waveform similar to those of the human brain. They find that the functional hierarchy of its transformer layers aligns with the cortical hierarchy of speech in the brain and reveals the whole-brain organisation of speech processing with unprecedented clarity: the first transformer layers map onto the low-level auditory cortices (A1 and A2), while the deeper layers map onto brain regions associated with higher-level processes (e.g. STS and IFG). Vaidya et al. [2022] present the first systematic study to bridge the gap between four recent self-supervised speech representation methods (APC, Wav2Vec, Wav2Vec2.0, and HuBERT) and computational models of the human auditory system. Similar to [Millet _et al._2022], they find that self-supervised speech models are the best models of auditory areas.
Lower layers best modeled low-level areas, and upper-middle layers were most predictive of phonetic and semantic areas, with the layer representations following the accepted hierarchy of speech processing. Tuckute et al. [2022] analyzed 19 different speech models and found that, while some audio models derived in engineering contexts (with applications ranging from speech recognition and speech enhancement to audio captioning and audio source separation) produce poor predictions of auditory cortical responses, many task-optimized audio deep learning models outpredict a standard spectrotemporal model of the auditory cortex and exhibit a hierarchical layer-region correspondence with it.

**Multimodal Brain Encoding**: Multimodal stimuli can be best encoded using recently proposed deep learning based multimodal models. Oota et al. [2022d] experimented with multimodal models like Contrastive Language-Image Pre-training (CLIP), Learning Cross-Modality Encoder Representations from Transformers (LXMERT), and VisualBERT, and found VisualBERT to be the best. Similarly, Wang et al. [2022] find that multimodal models like CLIP better predict neural responses in visual cortex, since image captions typically contain the most semantically relevant information in an image for humans. [Dong and Toneva, 2023] present a systematic approach to probe a multimodal video Transformer model by leveraging neuroscientific evidence of multimodal information processing in the brain. The authors find that intermediate layers of a multimodal video transformer are better at predicting multimodal brain activity than other layers, indicating that the intermediate layers encode the most brain-related properties of the video stimuli. Recently, [10] investigated a multimodal Transformer as the encoder architecture to extract aligned concept representations for narrative stories and movies and to model fMRI responses to naturalistic stories and movies, respectively. Since language and vision rely on similar concept representations, the authors perform a cross-modal experiment that tests how well language encoding models can predict movie-fMRI responses from narrative story features (story \(\rightarrow\) movie) and how well vision encoding models can predict narrative story-fMRI responses from movie features (movie \(\rightarrow\) story). Overall, the authors find that cross-modality performance was higher for features extracted from multimodal transformers than for linearly aligned features extracted from unimodal transformers.

## 6 Brain Decoding

Decoding is the learning of the mapping from neural activations back to the stimulus domain. Figure 9 depicts the typical workflow for building an image/language decoder.

**Decoder Architectures**: In most cases, the stimulus representation is decoded using typical ridge regression models trained on each voxel and its 26 neighbors in 3D to predict each dimension of the stimulus representation. Also, decoding is usually performed using the most informative voxels [17]. In some cases, a fully connected layer [1] or a multi-layered perceptron [20] has been used. In some studies, when decoding is modeled as multi-class classification, Gaussian Naive Bayes [21, 22] and SVMs [14] have also been used for decoding. Figure 10 summarizes the literature related to various decoding solutions proposed in vision, auditory, and language domains.
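Most of the encoding and decoding models surveyed here (the "Model" column of Tables 2 and 3 is almost uniformly ridge regression) are regularized linear maps fit per voxel or per embedding dimension. The following is a minimal sketch of such a ridge decoder, assuming small hypothetical dimensions and synthetic data rather than any of the cited datasets or toolkits:

```julia
using LinearAlgebra, Random, Statistics

# Hypothetical toy sizes: n time points (TRs), v voxels, d embedding dimensions.
n, v, d = 200, 500, 64
rng = MersenneTwister(0)
X = randn(rng, n, v)      # brain responses (predictors when decoding)
Y = randn(rng, n, d)      # stimulus embeddings (e.g., from a language model)
λ = 10.0                  # ridge penalty; in practice chosen by cross-validation

# Closed-form ridge solution: W minimizes ‖X*W - Y‖² + λ‖W‖²
W = (X' * X + λ * I) \ (X' * Y)      # v × d decoder weights
Ŷ = X * W                            # decoded embedding at each time point

# Decoding quality is often summarized as per-dimension Pearson correlation
corrs = [cor(Y[:, j], Ŷ[:, j]) for j in 1:d]
```

Swapping the roles of `X` and `Y`, i.e., predicting voxel responses from stimulus features, gives the corresponding encoding model discussed above.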
**Decoding task settings**: The most common setting is to perform decoding to a vector representation using a stimuli of a single mode (visual, text or audio). Initial brain decoding experiments studied the recovery of simple concrete nouns and verbs from fMRI brain activity [23] where the subject watches either a picture or a word. Sun et al. [24] used several sentence representation models to associate brain activities with sentence stimulus, and found InferSent to perform the best. More work has focused on decoding the text passages instead of individual words [20]. Some studies have focused on multimodal stimuli based decoding where the goal is still to decode the text representation vector. For example, Pereira et al. [20] trained the decoder on imaging data of individual concepts, and showed that it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. Further, Oota Figure 8: Comparison of brain recordings with language models trained on web corpora (LEFT) and language models trained on book stories (RIGHT) [20]. et al. [2022c] propose two novel brain decoding setups: (1) multi-view decoding (MVD) and (2) cross-view decoding (CVD). In MVD, the goal is to build an MV decoder that can take brain recordings for any view as input and predict the concept. In CVD, the goal is to train a model which takes brain recordings for one view as input and decodes a semantic vector representation of another view. Specifically, they study practically useful CVD tasks like image captioning, image tagging, keyword extraction, and sentence formation. To understand application of Transformer models for decoding better, Gauthier et al. [2019] fine-tuned a pre-trained BERT on a variety of NLU tasks, asking which lead to im \begin{table} \begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Stimuli** & **Authors** & **Dataset Type** & **Lang.** & **Stimulus Representations** & **S** & **Dataset** & **Model** \\ \hline \multirow{6}{*}{[30]} & Pereira _et al._, 2018 & fMRI & English & Word2Vec, GloVe, BERT & 17 & Pereira & Ridge \\ \cline{2-9} & Wang _et al._, 2020 & fMRI & English & BERT, RoBERTa & 6 & Pereira & Ridge \\ \cline{2-9} & Oota _et al._, 2022c & fMRI & English & GloVe, BERT, RoBERTa & 17 & Pereira & Ridge \\ \cline{2-9} & Tang _et al._, 2022 & fMRI & English & GPT, fine-tuned GPT on Reddit comments and autobiographical stories & 7 & Moth Radio Hour & Ridge \\ \hline \multirow{6}{*}{[30]} & Beliy _et al._, 2019 & fMRI & \multicolumn{2}{p{113.8pt}|}{End-to-End Encoder-Decoder, Decoder-Encoder, AlexNet} & \multicolumn{1}{p{113.8pt}|}{} & \multicolumn{1}{p{113.8pt}|}{} & \multicolumn{1}{p{113.8pt}|}{} & \multicolumn{1}{p{113.8pt}|}{} \\ \cline{2-9} & Takaei and Nishimoto, 2022 & fMRI & Latent Diffusion Model, CLIP & 4 & NSD & Ridge \\ \cline{2-9} & Ozcelik and VanRullen, 2023 & fMRI & \multicolumn{2}{p{113.8pt}|}{VDVAE, Latent Diffusion Model} & \multicolumn{1}{p{113.8pt}|}{7} & NSD & \multicolumn{1}{p{113.8pt}|}{} \\ \cline{2-9} & Chen _et al._, 2023b & fMRI & \multicolumn{2}{p{113.8pt}|}{Latent Diffusion Model, CLIP} & \multicolumn{1}{p{113.8pt}|}{3} & \multicolumn{1}{p{113.8pt}|}{HCP fMRI-Video-Dataset} & \multicolumn{1}{p{113.8pt}|}{} & \multicolumn{1}{p{113.8pt}|}{} \\ \cline{2-9} & Defossez _et al._, 2022 & MEG,EEG & English & \multicolumn{1}{p{113.8pt}|}{MEL Spectrogram, Wav2Vec2.0} & \multicolumn{1}{p{113.8pt}|}{169} & MEG-MASC & Ridge, 
CLIP \\ \cline{2-9} & (Gwilliams _et al._, 2022) & MEG & English & Phonemes & 7 & MEG-MASC & \\ \hline \end{tabular} \end{table} Table 3: Summary of Representative Brain Decoding Studies Figure 10: Brain Decoding Survey Tree Figure 9: Schema for Brain Decoding. LEFT: Image decoder _[Smith et al., 2011]_, RIGHT: Language Decoder _[Wang et al., 2019]_ provements in brain-decoding performance. They find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance. Toneva et al. (2019) study how representations of various Transformer models differ across layer depth, context length, and attention type. Some studies have attempted to reconstruct words (Affolter et al., 2020), continuous language (Tang et al., 2022), images (Du et al., 2020; Beliy et al., 2019; Fang et al., 2020; Lin et al., 2022), speech (Defossez et al., 2022) or question-answer speech dialogues (Moses et al., 2019) rather than just predicting a semantic vector representation. Lastly, some studies have focused on reconstructing personal imagined experiences (Berezutskaya et al., 2020) or application-based decoding like using brain activity scanned during a picture-based mechanical engineering task to predict individuals' physics/engineering exam results (Cetron et al., 2019) and reflecting whether current thoughts are detailed, correspond to the past or future, are verbal or in images (Smallwood and Schooler, 2015). Table 3 aggregates the brain decoding literature along different stimulus domains such as textual, visual, and audio. ## 7 Conclusion, Limitations, and Future Trends **Conclusion** In this paper, we surveyed important datasets, stimulus representations, brain encoding and brain decoding methods across different modalities. A glimpse of how deep learning solutions throw light on putative brain computations is given. **Limitations** Naturalistic datasets of passive reading/listening offer ecologically realistic settings for investigating brain function. However, the lack of a task (as in a controlled psycholinguistic experiment) that probes the participant's understanding of the narrative limits the inferences that can be made on what the participant's brain is actually engaged in while passively following the stimuli. This becomes even more important when multi-lingual, multispectral participants process stimuli in L2 language or script - it is unclear if the brain activity reflects the processing of L2 or active suppression L1 while focusing on L2 (Malik-Moraleda et al., 2022). **Future Trends** Some of the future areas of work in this field are as follows: (1) While there is work on the text, understanding the similarity in information processing between visual/speech/multimodal models versus natural brain systems remains an open area. (2) Decoding to actual multimodal stimuli seems feasible thanks to recent advances in generation using deep learning models. (3) Deeper understanding of the degree to which damage to different parts of the human brain could lead to the degradation of cognitive skills. (4) How can we train artificial neural networks in novel self-supervised ways such that they compose word meanings or comprehend images and speech like a human brain? (5) How can we leverage improved neuroscience understanding to suggest changes in proposed artificial neural network architectures to make them more robust and accurate? We hope that this survey motivates research along the above directions.
2305.08270
On the equivalence of geometric and descriptor representations of linear port-Hamiltonian systems
We prove a one-to-one correspondence between the geometric formulation of port-Hamiltonian (pH) systems defined by Dirac structures, Lagrange structures, maximal resistive structures, and external ports and a state-space formulation by means of port-Hamiltonian descriptor systems, i.e., differential algebraic equations (DAE) with inputs and outputs.
Hannes Gernandt, Friedrich Philipp, Till Preuster, Manuel Schaller
2023-05-14T22:29:57Z
http://arxiv.org/abs/2305.08270v1
# On the equivalence of geometric and descriptor representations of linear port-Hamiltonian systems+ ###### Abstract We prove a one-to-one correspondence between the geometric formulation of port-Hamiltonian (pH) systems defined by Dirac structures, Lagrange structures, maximal resistive structures, and external ports and a state-space formulation by means of port-Hamiltonian descriptor systems, i.e., differential algebraic equations (DAE) with inputs and outputs. ## 1 Introduction Development and operation of modern technologies requires the deep understanding and control of complex dynamical systems. The class of port-Hamiltonian (pH) systems represents such an elegant mathematical framework for modeling and analysis of multi-physics systems. Due to their inherent energy-based structure, these systems are very well-suited to describe the energy flows, energy conservation and interconnection of physical systems in a wide range of applications. From a modeling perspective, they offer the additional benefit of coupling capability. Port-Hamiltonian systems have found numerous applications in physical domains such as robotics, renewable energy systems, and mechatronics [14, 15]. From a mathematical point of view, there exist different approaches to pH systems fertilized by different areas of mathematics and mathematical physics. On the one hand one can describe this class of systems by geometrical structures [14], leading to the concept of Dirac structures. Moreover the total energy of the system is given by the Hamiltonian density, which can be generalized by so-called Lagrangian subspaces. In the language of system and control theory, pH systems can be characterized as descriptor systems with the physical structure of the system being inscribed in the algebraic properties of the coefficient matrices. This perspective allows the application of numerical methods as well as many results from simulation and solution theory, and interprets pH systems as open Hamiltonian systems interacting with their environment by means of inputs and outputs [1, 15]. For example, the pH structure implies certain restrictions on the Kronecker canonical form of the underlying matrix pencil [16] and also provides robustness of the eigenvalues under structured perturbations [16]. Eventually there is also a functional analytical approach to pH systems theory. This point of view allows the extension of the description of energy-based physical systems on infinite-dimensional state spaces in terms of partial-differential equations and boundary control systems, cf. [13] for one-dimensional state domains and [17, 20] for recent approaches to higher-dimensional state domains. In this paper, our aim is to reveal a connection between the geometric pH formulation by means of Lagrange structures, Dirac structures, and resistive structures and the system theoretic formulation in finite dimensions by means of input-state-output systems given as a differential-algebraic equation (DAE) of the form \[\begin{bmatrix}\frac{\mathrm{d}}{dt}Ez(t)\\ y(t)\end{bmatrix}=\begin{bmatrix}J-R&B-P\\ (B+P)^{*}&S+N\end{bmatrix}\begin{bmatrix}Qz(t)\\ u(t)\end{bmatrix},\quad t\geq 0 \tag{1}\] with \(\mathbb{K}^{m}\)-valued input \(u\) and output \(y\), \(\mathbb{K}^{n}\)-valued state \(z\), and matrices \(E,J,R,Q\in\mathbb{K}^{n\times n}\), \(B,P\in\mathbb{K}^{n\times m}\), \(S,N\in\mathbb{K}^{m\times m}\) having additional structural properties, cf. Definition 13. 
In [21] a first link between the geometric modeling of pH systems outlined in [21] and the state-space representations for DAE-systems from [1] was established, where the authors only considered Lagrange and Dirac structures without any dissipation or external ports. This results in a state space system of the form \[K\tfrac{d}{dt}Pz(t)=LSz(t),\quad t\geq 0, \tag{2}\] where the matrices \(K,L\in\mathbb{K}^{n\times n}\) are given by the kernel representation of the Dirac structure and \(P,S\in\mathbb{K}^{n\times n}\) are given by a range representation of the Lagrange structure. The case with dissipation was considered in [19] for pH descriptor systems. To this end, the Dirac structure was replaced by a dissipative subspace and for structural results on the underlying matrix pencils nonnegativity of the Lagrange structure was assumed. The geometric setting in [19] was further generalized in [21] where, contrary to the dissipative subspace in [19], in addition to the Dirac and Lagrange structures a resistive structure was used to model the dissipation. The relation to state space systems of the form (2) was studied and the index as well as the Kronecker canonical form of (2) was investigated. However no external port variables were considered. Recently, in [21] a geometric description of dissipative pH descriptor systems including port variables was given. It was shown that the previously used geometric definition of pH systems, either via a separate resistive structure or via a dissipative structure (called _monotone_ in [21]) are in fact equivalent. This was used to obtain a state space formulation (1) from the geometric description using a monotone structure. The main contribution of the present note is to also provide a converse result, i.e. for pH descriptor systems (1) satisfying \(\ker E\cap\ker Q=\{0\}\) we derive an equivalent geometric formulation. This extends previous results from [21], where no additional Lagrange structure was considered, leading to a one-to-one correspondence of the two formulations in the behavioral sense. This means that for each solution of the geometric pH descriptor system there is a corresponding solution of the state-space formulation and vice versa. Furthermore, in comparison to [21] we show that geometric pH systems have a state space formulation (1), where \(Q\) equals the identity, which is often assumed in pH literature [21]. Incorporating the converse direction, it follows that each pH descriptor system is equivalent to another one in a possibly larger state-space with \(Q=I\). The paper is organized as follows: In Section 2 we recall notations and well known facts from multi-valued linear algebra. After presenting both the geometric and the descriptor formulation of pH descriptor systems in Section 3, the one-to-one correspondence between these two formulations is shown in Section 4. We conclude the paper and discuss open problems in Section 5. ## 2 Preliminaries from multi-valued linear algebra **Notation:**\(\mathbb{K}\) denotes either \(\mathbb{C}\) or \(\mathbb{R}\)--consistently throughout this article. The graph \(\{(x,Ax):x\in\mathbb{K}^{n}\}\) of a linear map \(A:\mathbb{K}^{n}\to\mathbb{K}^{m}\) is denoted by \(\operatorname{gr}A\). Its inverse (as a linear relation) is given and denoted by \(\operatorname{gr}^{-1}A=\{(Ax,x):x\in\mathbb{K}^{n}\}\). For \(A\in\mathbb{K}^{n\times n}\) we write \(A^{*}:=\overline{A}^{\top}\) where \(\overline{A}\) is the entry-wise complex conjugate of \(A\), i.e. 
if \(A\in\mathbb{R}^{n\times n}\) we have \(A^{*}=A^{\top}\). The Euclidean inner product in \(\mathbb{K}^{n}\) will be denoted by \(\langle x,y\rangle:=y^{*}x\) for all \(x,y\in\mathbb{K}^{n}\) with the resulting Euclidean norm \(\|x\|^{2}:=\langle x,x\rangle\). Recall the notions of kernel, domain, multivalued part, and range of a linear subspace of a product space (also called _linear relation_). **Definition 1**.: The _kernel, domain, multivalued part,_ and _range_ of a linear subspace \(\mathcal{A}\subset\mathbb{K}^{n}\times\mathbb{K}^{m}\) are defined by \[\ker\mathcal{A} :=\{f\in\mathbb{K}^{n}:(f,0)\in\mathcal{A}\,\},\] \[\operatorname{dom}\mathcal{A} :=\{f\in\mathbb{K}^{n}:\exists\,e\in\mathbb{K}^{m}\text{ s.t. }(f,e)\in\mathcal{A}\,\},\] \[\operatorname{mul}\mathcal{A} :=\{e\in\mathbb{K}^{m}:(0,e)\in\mathcal{A}\,\},\] \[\operatorname{ran}\mathcal{A} :=\{e\in\mathbb{K}^{m}:\exists\,f\in\mathbb{K}^{n}\text{ s.t. }(f,e)\in\mathcal{A}\,\},\] respectively. The _inverse_\(\mathcal{A}^{-1}\), the _adjoint_\(\mathcal{A}^{*}\), and _scalar multiples_\(\alpha\mathcal{A}\) of \(\mathcal{A}\) are defined by \[\mathcal{A}^{-1} :=\{(e,f):(f,e)\in\mathcal{A}\},\] \[\mathcal{A}^{*} :=\{(e^{\prime},f^{\prime}):\langle f^{\prime},e\rangle=\langle e ^{\prime},f\rangle\ \forall(e,f)\in\mathcal{A}\}\,,\] \[\alpha\mathcal{A} :=\{(e,\alpha f):(e,f)\in\mathcal{A}\},\quad\alpha\in\mathbb{K}.\] In particular, non-invertible matrices \(A\) can be inverted in the sense of linear relations by considering \(\operatorname{gr}^{-1}A\) which might then be multi-valued if \(A\) is not injective or not-everywhere defined if \(A\) is not surjective. In the following, we collect some notions for subspaces which have additional structural properties. **Definition 2**.: Let \(\mathcal{D}\), \(\mathcal{L}\), \(\mathcal{M}\), and \(\mathcal{R}\) be subspaces of \(\mathbb{K}^{2n}\). 1. \(\mathcal{L}\) is called a _Lagrange structure_ if \(\mathcal{L}=\mathcal{L}^{*}\). 2. \(\mathcal{D}\) is called a _Dirac structure_ if \(\mathcal{D}=-\mathcal{D}^{*}\). 3. \(\mathcal{R}\) is called a _(maximal) resistive structure_ if \(\mathcal{R}\subset\mathcal{R}^{*}\), \[\langle e,f\rangle\leq 0\quad\text{for all }[\,\overset{\epsilon}{f}\,]\in \mathcal{R}\quad(\text{and dim}\,\mathcal{R}=n).\] 4. \(\mathcal{M}\) is called a _(maximal) monotone structure_ if \[\operatorname{Re}\langle e,f\rangle\geq 0\quad\text{for all }[\,\overset{ \epsilon}{f}\,]\in\mathcal{M}\quad(\text{and dim}\,\mathcal{M}=n);\] **Remark 3**.: **(a)** In the language of linear relations slightly different nomenclature is used. Dirac, Lagrange and (maximal) monotone structures are called skew-adjoint, self-adjoint and (maximal) accretive, respectively, cf. [1]. A resistive structure would be called a non-positive symmetric relation. **(b)**\(\mathcal{L}\) is a Lagrange structure if and only if \(\mathcal{L}\subset\mathcal{L}^{*}\) and \(\dim\mathcal{L}=n\). In particular, a maximal resistive structure is also a Lagrange structure. Similarly, \(\mathcal{D}\) is a Dirac structure if and only if \(\mathcal{D}\subset-\mathcal{D}^{*}\) and \(\dim\mathcal{D}=n\). **Remark 4**.: In the case \(\mathbb{K}=\mathbb{C}\), resistive structures always have maximal resistive extensions. This follows directly from [1, Theorem 5.3.1]. Similarly, monotone structures always have maximal monotone extensions. 
Indeed, if \(\mathcal{M}\) is a monotone structure in \(\mathbb{C}^{2n}\), then its Cayley transform is a linear contraction \(V:\operatorname{dom}V\to\mathbb{C}^{n}\) with \(\operatorname{dom}V\subset\mathbb{C}^{n}\), cf. [1, Proposition 1.6.6]. Let \(\tilde{V}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) be a contractive extension of \(V\). Then the inverse Cayley transform of \(\tilde{V}\) is an extension as desired. There are two common representations which will be referred to in this article as _kernel_ and _image representation_, see [1, Theorem 3.3] and also [1]. **Proposition 5**.: _Let \(\mathcal{M}\) be a subspace of \(\mathbb{K}^{2n}\) of dimension \(d\). Then there exists matrices \(K,L\in\mathbb{K}^{(2n-d)\times n}\) and \(F,G\in\mathbb{K}^{n\times d}\) such that the following holds_ \[\mathcal{M}=\ker[K,L]=\operatorname{ran}\begin{bmatrix}F\\ G\end{bmatrix}. \tag{3}\] If \(\mathcal{D}=\ker[K,L]\) is a Dirac structure for some \(K,L\in\mathbb{K}^{n\times n}\), then [20] used the notion _Dirac algebraic constraint_ if \(K\) is not invertible. This is equivalent to the existence of \((z,0)\in\mathcal{D}\) with \(z\neq 0\), or in the language of linear relations \(\ker\mathcal{D}\neq\{0\}\). A special case of such constraints are kinematic constraints (see Example 2.7 in [11]). Analogously, for a Lagrange structure \(\mathcal{L}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) for some \(P,S\in\mathbb{K}^{n\times n}\) there are said to be _Lagrange algebraic constraints_ if \(P\) is not invertible. These can be used to model algebraic state constraints. The matrices in the kernel and range representations (3) can be used to characterize the structural properties from Definition 2. In the next proposition, we restrict ourselves to the range representation. **Proposition 6**.: _Let \(\mathcal{M}\) be a subspace of \(\mathbb{K}^{2n}\) which is given by \(\mathcal{M}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) for some \(P,S\in\mathbb{K}^{n\times m}\). Then the following equivalences hold:_ 1. \(\mathcal{M}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) _is a Lagrange structure if and only if_ \(S^{*}P=P^{*}S\) _and_ \(\operatorname{rank}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]=n\)_;_ 2. \(\mathcal{M}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) _is a Dirac structure if and only if_ \(S^{*}P=-P^{*}S\) _and_ \(\operatorname{rank}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]=n\)_;_ 3. \(\mathcal{M}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) _is (maximal) monotone if and only if_ \(S^{*}P+P^{*}S\geq 0\) _(and_ \(\operatorname{rank}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]=n\)_);_ 4. \(\mathcal{M}=\operatorname{ran}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]\) _is (maximal) resistive if and only if_ \(S^{*}P=P^{*}S\leq 0\) _(and_ \(\operatorname{rank}\left[\begin{smallmatrix}P\\ S\end{smallmatrix}\right]=n\)_)._ **Remark 7**.: Similar characterizations as in Proposition 6 can also be derived for subspaces \(\mathcal{M}\) given in kernel representations \(\mathcal{M}=\ker[K,L]\) for some \(K,L\in\mathbb{K}^{(2n-d)\times n}\). Then the adjoint relation \(\mathcal{M}^{*}\) is given in range representation \[\mathcal{M}^{*}=\operatorname{ran}\left[\begin{smallmatrix}L^{*}\\ -K^{*}\end{smallmatrix}\right].\] Furthermore, \(\mathcal{M}\) is Lagrange (resp. 
Dirac, maximal monotone, maximal resistive) if and only if \(\mathcal{M}^{*}\) has this property. Indeed, Lagrange and maximal resistive structures satisfy \(\mathcal{M}=\mathcal{M}^{*}\), Dirac structures satisfy \(\mathcal{M}=-\mathcal{M}^{*}\), see e.g. [1] and it was shown in [1, Proposition 1.6.7] that \(\mathcal{M}\) is maximal monotone if and only if \(\mathcal{M}^{*}\) is maximal monotone. Therefore, can apply Proposition 6 to \(\mathcal{M}^{*}\) which implies that \(\mathcal{M}=\ker[K,L]\) is Lagrange (resp. Dirac, maximal monotone, maximal resistive) if and only if \((KL^{*}=LK^{*},\,KL^{*}=-LK^{*},\,KL^{*}+LK^{*}\leq 0,\,KL^{*}=LK^{*}\geq 0)\). The following result is the key to rewrite maximal subspaces as graphs of matrices in a larger subspace. **Proposition 8**.: _Let \(\mathcal{M}\subset\mathbb{K}^{N}\times\mathbb{K}^{N}\) be a Dirac (resp. Lagrange, maximal resistive, maximal monotone) structure and let \(l:=\dim\ker\mathcal{M}\). Then there exist matrices \(G\in\mathbb{K}^{N\times l}\) with \(\ker G=\{0\}\) and \(M\in\mathbb{K}^{N\times N}\) satisfying \(M=-M^{*}\) (resp. \(M=M^{*}\), \(M=M^{*}\leq 0\), \(M+M^{*}\geq 0\)) and \(M\ker G^{*}\subset\ker G^{*}\) such that_ \[\mathcal{M}=\left\{\begin{bmatrix}Me-G\lambda\\ e\end{bmatrix}:G^{*}e=0,\,\lambda\in\mathbb{K}^{l}\right\}. \tag{4}\] Proof.: The claim has been proven for Dirac structures in [11, Proposition 3.8] (see also [11, Theorem 3.1]), for Lagrange structures in [11, Proposition 5.3] and for maximal resistive and monotone subspaces in [1]. We conclude this section with a remark. **Remark 9**.: 1. If \(\left[\begin{smallmatrix}x\\ y\end{smallmatrix}\right]\in\mathcal{M}\), where \(\mathcal{M}\) is as in (4), then both \(e\) and \(\lambda\) are uniquely determined: \(e=y\) and \(\lambda=G^{\dagger}(My-x)\), where \(G^{\dagger}\) is any left-inverse of \(G\). 2. Since the inverse relation \(\mathcal{M}\) of Dirac, Lagrange, maximal resistive and maximal monotone structures inherits the particular property, we can apply Proposition 8 to \(\mathcal{M}^{-1}\) and obtain the existence of \(\hat{M}\) and an injective \(\hat{G}\) such that \[\mathcal{M}=\left\{\begin{bmatrix}e\\ \hat{M}e-\hat{G}\lambda\end{bmatrix}:\hat{G}^{*}e=0,\,\lambda\in\mathbb{K}^{l} \right\}.\] ## 3 Two formulations of pH systems In this part we introduce the two formulations of pH systems we will consider in the remainder of this work. In the upcoming Subsection 3.1 we introduce the geometric representation, whereas in Subsection 3.2 we recall the formulation by means of a differential algebraic descriptor system. ### Geometric representation of pH systems The following geometric description of pH systems was recently introduced in [20] and extends the geometric formulation from [20] by incorporating resistive variables, inputs and outputs. **Definition 10** ([20]).: A _geometric representation_ of a pH system (in short: a _geometric pH system_) with state space \(\mathbb{K}^{n}\) and external dimension \(m\) is given by a triple \((\mathcal{D},\mathcal{L},\mathcal{R})\) consisting of * a Dirac structure \(\mathcal{D}\subseteq\mathbb{K}^{n+r+m}\times\mathbb{K}^{n+r+m}\), * a Lagrange structure \(\mathcal{L}\subset\mathbb{K}^{n}\times\mathbb{K}^{n}\), and * a maximal resistive structure \(\mathcal{R}\subset\mathbb{K}^{r}\times\mathbb{K}^{r}\). 
By a _solution_ of the pH system \((\mathcal{D},\mathcal{L},\mathcal{R})\) we understand an input-state-output trajectory \((u,x,y)\in C([0,\infty);\mathbb{K}^{m})\times C^{1}([0,\infty);\mathbb{K}^{n} )\times C([0,\infty);\mathbb{K}^{m})\) for which there exist continuous functions \(f_{R}\), \(e_{R}\), and \(e_{L}\) such that for all \(t\geq 0\) we have \[\big{(}-\dot{x}(t),f_{R}(t),y(t),e_{L}(t),e_{R}(t),u(t)\big{)}\in\mathcal{D}, \quad(x(t),e_{L}(t))\in\mathcal{L},\quad(f_{R}(t),e_{R}(t))\in\mathcal{R}. \tag{5}\] The functions \(f_{R}\) and \(e_{R}\) are called the _resistive flow and effort variables_, respectively, and \(e_{L}\) is the _Lagrangian effort_. We briefly comment on this definition in view of previous works and generalizations. **Remark 11**.: In a more general setting, the structures in Definition 10 might also depend on time \(t\) and state \(x\), cf. [20]. However, here we only consider stationary structures. Furthermore, the variables \(u\) and \(y\), usually denoting inputs and outputs in systems and control theory frameworks, were called \(f_{P}\) and \(e_{P}\) in [20], respectively. Last, we note that in, e.g., [20, Definition 2.1] or in [20, Definition 14], negated maximal resistive structures are called non-negative Lagrange structures. To avoid confusion with the Lagrange structure \(\mathcal{L}\), we will utilize the notion maximal resistive structure for \(\mathcal{R}\). Solutions of geometric pH systems obey a _power-balance_, as the following elementary result shows. **Lemma 12**.: _Let \((u,x,y)\in C([0,\infty);\mathbb{K}^{m})\times C^{1}([0,\infty);\mathbb{K}^{n}) \times C([0,\infty);\mathbb{K}^{m})\) be a solution of the geometric pH system \((\mathcal{D},\mathcal{L},\mathcal{R})\). Then, for all \(t\geq 0\), the following power balance holds:_ \[\operatorname{Re}\langle\dot{x}(t),e_{L}(t)\rangle=\operatorname{Re}\langle f _{R}(t),e_{R}(t)\rangle+\operatorname{Re}\langle y(t),u(t)\rangle\leq \operatorname{Re}\langle y(t),u(t)\rangle.\] Proof.: Since \(\mathcal{D}\) is a Dirac structure as defined in Definition 2(ii), and due to (5), we compute for all \(t\geq 0\) \[0=\operatorname{Re}\left\langle\begin{bmatrix}-\dot{x}(t)\\ f_{R}(t)\\ y(t)\end{bmatrix},\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ u(t)\end{bmatrix}\right\rangle=-\operatorname{Re}\langle\dot{x}(t),e_{L}(t) \rangle+\operatorname{Re}\langle f_{R}(t),e_{R}(t)\rangle+\operatorname{Re} \langle y(t),u(t)\rangle. \tag{6}\] We have \(\operatorname{Re}\langle f_{R}(t),e_{R}(t)\rangle\leq 0\) as \((f_{R}(t),e_{R}(t))\in\mathcal{R}\) and \(\mathcal{R}\) is resistive, cf. Definition 2(iii). ### Port-Hamiltonian descriptor systems A second formulation of linear pH systems is given in a somewhat more explicit form, involving a differential-algebraic equation (DAE), see e.g. [20, Definition 4.9], see also [1, 20]. **Definition 13** ([20]).: A pH descriptor system is a DAE with inputs and outputs of the form \[\begin{bmatrix}\frac{\mathrm{d}}{\mathrm{d}t}Ez(t)\\ y(t)\end{bmatrix}=\begin{bmatrix}J-R&B-P\\ (B+P)^{*}&S+N\end{bmatrix}\begin{bmatrix}Qz(t)\\ u(t)\end{bmatrix} \tag{7}\] with \(\mathbb{K}^{m}\)-valued input \(u\) and output \(y\), \(\mathbb{K}^{n}\)-valued state \(z\), matrices \(E,J,R,Q\in\mathbb{K}^{n\times n}\), \(B,P\in\mathbb{K}^{n\times m}\), \(S,N\in\mathbb{K}^{m\times m}\) satisfying \[E^{*}Q=Q^{*}E,\quad J=-J^{*},\quad N=-N^{*},\quad R=R^{*},\quad S=S^{*}\] such that \[W:=\begin{bmatrix}Q^{*}&0\\ 0&I\end{bmatrix}\begin{bmatrix}R&P\\ P^{*}&S\end{bmatrix}\begin{bmatrix}Q&0\\ 0&I\end{bmatrix}\geq 0. 
\tag{8}\] The _Hamiltonian_ of the system is defined as \(H(z)=z^{*}Q^{*}Ez\). A solution of (7) is an input-state-output trajectory \((u,z,y)\in C([0,\infty);\mathbb{K}^{n+2m})\) with \(Ez\in C^{1}([0,\infty);\mathbb{K}^{n})\) such that (7) is satisfied for all \(t\geq 0\). Note that one could also generalize the above definition to inputs \(u\in L^{1}_{\mathrm{loc}}((0,\infty);\mathbb{K}^{m})\) when considering \(Ez\in W^{1,1}_{\mathrm{loc}}([0,\infty);\mathbb{K}^{n})\). The following result yields a regularity result of the Hamiltonian along solutions and power balance for the DAE system (7). Its proof follows by straightforward modifications of [13, Lemma 2.2], where a similar result was shown for solutions in \(W^{1,1}_{\mathrm{loc}}([0,\infty);\mathbb{K}^{m})\) and we state it here for completeness. **Lemma 14**.: _If \((u,z,y)\) is a solution of (7), then \(H\circ z\in C^{1}([0,\infty);\mathbb{K}^{n})\), and the following power balance holds:_ \[\frac{\mathrm{d}}{\mathrm{d}t}H(z(t))=\mathrm{Re}\left[u(t)^{*}y(t)\right]- \left\|W^{\frac{1}{2}}\begin{bmatrix}z(t)\\ u(t)\end{bmatrix}\right\|^{2}. \tag{9}\] Proof.: Let \(\mathcal{P}\) denote the orthogonal projection onto \(\mathrm{ran}\,E^{*}\), i.e., \(I-\mathcal{P}\) maps onto \((\mathrm{ran}\,E^{*})^{\perp}=\ker E\). Hence, have \(E=E\mathcal{P}+E(I-\mathcal{P})=E\mathcal{P}\). Let \(E^{\dagger}\) denote the Moore-Penrose inverse of \(E\). Then \(\mathcal{P}=E^{\dagger}E\) and therefore \[Ez\in C^{1}([0,\infty),\mathbb{K}^{n})\quad\Longleftrightarrow\quad\mathcal{ P}z\in C^{1}([0,\infty),\mathbb{K}^{n}).\] Since \(E^{*}Q=Q^{*}E\), we have \(H(z)=\frac{1}{2}z^{*}\mathcal{P}E^{*}Qz=\frac{1}{2}(\mathcal{P}z)^{*}Q^{*}E( \mathcal{P}z)\) and thus \(H\circ z\in C^{1}([0,\infty);K^{n})\). Consequently, and as \(\mathrm{Re}(z^{*}Q^{*}JQz)=\mathrm{Re}(u^{*}Nu)=0\), we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}(H\circ z) =\mathrm{Re}\left[\left(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{ P}z\right)^{*}Q^{*}E\mathcal{P}z\right]=\mathrm{Re}\left[\left(\frac{ \mathrm{d}}{\mathrm{d}t}\mathcal{P}z\right)^{*}E^{*}Qz\right]=\mathrm{Re}\left[ \left(\frac{\mathrm{d}}{\mathrm{d}t}Ez\right)^{*}Qz\right]\] \[=\mathrm{Re}\left[(J-R)Qz+(B-P)u\right]^{*}Qz=-z^{*}Q^{*}RQz+ \mathrm{Re}(u^{*}(B-P)^{*}Qz)\] \[=\mathrm{Re}\left[u^{*}\big{(}(B+P)^{*}Qz+(S+N)u\big{)}-2u^{*}P^{* }Qz-u^{*}Su\right]-z^{*}Q^{*}RQz\] \[=\mathrm{Re}\left[u^{*}y\right]-\begin{bmatrix}z^{*}&u^{*}\end{bmatrix} W\begin{bmatrix}z\\ u\end{bmatrix},\] which is the claimed power balance. ## 4 Equivalence of the two formulations In this section, we associate a pH descriptor system in the sense of Definition 13 with a geometric pH system as defined in Definition 10 and vice-versa. This shows that the two formulations introduced in Section 3 are equivalent. ### From geometric pH to descriptor pH The next theorem shows that geometric pH systems \((\mathcal{D},\mathcal{L},\mathcal{R})\) can be associated with particular pH descriptor systems such that solutions of the geometric pH system are uniquely determined parts of solutions of the descriptor system and vice versa. **Theorem 15**.: _Let a geometric pH system \((\mathcal{D},\mathcal{L},\mathcal{R})\) be given as in Definition 10 and set \(p=\dim\ker\mathcal{D}+\dim\ker\mathcal{R}+\dim\ker\mathcal{L}\). 
Then there exists a pH descriptor system of the form_ \[\begin{bmatrix}\frac{d}{dt}Ez(t)\\ y(t)\end{bmatrix}=\begin{bmatrix}J-R&B\\ B^{*}&N\end{bmatrix}\begin{bmatrix}z(t)\\ u(t)\end{bmatrix} \tag{10}\] _as in (7) with \(Q=I\), \(P=0\), \(S=0\) with the state \(z\in\mathbb{K}^{n+r+p}\) such that the following hold:_ 1. _If_ \((u,x,y)\) _is a solution of_ \((\mathcal{D},\mathcal{L},\mathcal{R})\) _then there exists_ \(z\) _such that_ \((u,z,y)\) _solves_ \((0)\)_._ 2. _If_ \((u,z,y)\) _is a solution of (_10_), then for every_ \((x_{0},e_{L}(0))\in\mathcal{L}\) _there exists_ \(x\) _such that_ \((u,x,y)\) _solves_ \((\mathcal{D},\mathcal{L},\mathcal{R})\) _with_ \(x(0)=x_{0}\)_._ _Furthermore, if \(-\mathcal{L}\) is resistive, then \(E=E^{*}\geq 0\) holds._ Proof.: Let \(d=\dim\ker\mathcal{D}\), \(k=\dim\ker\mathcal{R}\), \(l=\dim\ker\mathcal{L}\), and \(N=n+r+m\). By Proposition 8, there exist an injective \(G\in\mathbb{K}^{N\times d}\) and a skew-adjoint \(\widetilde{J}\in\mathbb{K}^{N\times N}\) such that \[\mathcal{D}=\left\{\begin{bmatrix}\widetilde{J}e-G\lambda\\ e\end{bmatrix}:G^{*}e=0,\,\lambda\in\mathbb{K}^{d},\,e\in\mathbb{K}^{N} \right\}. \tag{11}\] Let \((u,x,y)\) be a solution of \((\mathcal{D},\mathcal{L},\mathcal{R})\) with \(f_{R}\), \(e_{R}\), and \(e_{L}\) as in (5), i.e., \[\big{(}-\dot{x}(t),f_{R}(t),y(t),e_{L}(t),e_{R}(t),u(t)\big{)}\in\mathcal{D}, \quad(x(t),e_{L}(t))\in\mathcal{L},\quad(f_{R}(t),e_{R}(t))\in\mathcal{R}. \tag{12}\] Hence, we find that there exists \(\lambda:[0,\infty)\to\mathbb{K}^{d}\) such that \[\begin{bmatrix}-\dot{x}(t)\\ f_{R}(t)\\ y(t)\\ 0\end{bmatrix}=\begin{bmatrix}\widetilde{J}&-G\\ G^{*}&0\end{bmatrix}\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ u(t)\\ \lambda(t)\end{bmatrix}=\begin{bmatrix}J_{11}&J_{12}&J_{13}&-G_{1}\\ -J_{12}^{*}&J_{22}&J_{23}&-G_{2}\\ -J_{13}^{*}&-J_{23}^{*}&J_{33}&-G_{3}\\ G_{1}^{*}&G_{2}^{*}&G_{3}^{*}&0\end{bmatrix}\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ u(t)\\ \lambda(t)\end{bmatrix}. \tag{13}\] Making use of Proposition 8 again, we find that the maximal resistive structure \(\mathcal{R}\) has a representation \[\mathcal{R}=\left\{\begin{bmatrix}-\widetilde{R}x_{R}+G_{R}\lambda_{R}\\ x_{R}\end{bmatrix}:G_{R}^{*}x_{R}=0,\,\lambda_{R}\in\mathbb{K}^{k},\,x_{R} \in\mathbb{K}^{r}\right\}, \tag{14}\] where \(G_{R}\in\mathbb{K}^{r\times k}\) is injective and \(\widetilde{R}\in\mathbb{K}^{r\times r}\) is a positive semi-definite Hermitian matrix. Hence, (14) implies \[f_{R}(t)=-\widetilde{R}e_{R}(t)+G_{R}\lambda_{R}(t),\qquad G_{R}^{*}e_{R}(t)=0.\] Therefore, (13) can be equivalently rewritten as \[\begin{bmatrix}\dot{x}(t)\\ 0\\ y(t)\\ 0\\ 0\end{bmatrix}=\begin{bmatrix}-J_{11}&-J_{12}&-J_{13}&G_{1}&0\\ J_{12}^{*}&-J_{22}-\widetilde{R}&-J_{23}&G_{2}&G_{R}\\ -J_{13}^{*}&-J_{23}^{*}&J_{33}&-G_{3}&0\\ -G_{1}^{*}&-G_{2}^{*}&-G_{3}^{*}&0&0\\ 0&-G_{R}^{*}&0&0&0\end{bmatrix}\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ u(t)\\ \lambda(t)\\ \lambda_{R}(t)\end{bmatrix}\] and after an additional permutation of the rows and columns we obtain \[\begin{bmatrix}\dot{x}(t)\\ 0\\ 0\\ 0\\ y(t)\end{bmatrix}=\begin{bmatrix}-J_{11}&-J_{12}&G_{1}&0&-J_{13}\\ J_{12}^{*}&-J_{22}-\widetilde{R}&G_{2}&G_{R}&-J_{23}\\ -G_{1}^{*}&-G_{2}^{*}&0&0&-G_{3}^{*}\\ 0&-G_{R}^{*}&0&0&0\\ -J_{13}^{*}&-J_{23}^{*}&-G_{3}&0&J_{33}\end{bmatrix}\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ \lambda(t)\\ \lambda_{R}(t)\\ u(t)\end{bmatrix}. 
\tag{15}\] Leveraging Proposition 8 one more time, we may express the Lagrange structure \(\mathcal{L}\) as \[\mathcal{L}=\left\{\begin{bmatrix}Lx_{L}-G_{L}\lambda_{L}\\ x_{L}\end{bmatrix}:G_{L}^{*}x_{L}=0,\,\lambda_{L}\in\mathbb{K}^{l},\,x_{L} \in\mathbb{K}^{n}\right\}, \tag{16}\] where \(G_{L}\in\mathbb{K}^{n\times l}\) is injective and \(L\in\mathbb{K}^{n\times n}\) is a Hermitian matrix. Hence, for given \(e_{L}(t)\in\mathbb{K}^{n}\) there exists a unique \(\lambda_{L}(t)\in\mathbb{K}^{l}\) satisfying \[x(t)=Le_{L}(t)-G_{L}\lambda_{L}(t),\qquad G_{L}^{*}e_{L}=0 \tag{17}\] By Proposition 8, \(Le_{L}(t)\) is orthogonal to \(G_{L}\lambda_{L}(t)\) for all \(t\geq 0\) and therefore \(x\in C^{1}([0,\infty),\mathbb{K}^{n})\) holds if and only if \(Le_{L},G_{L}\lambda_{L}\in C^{1}([0,\infty),\mathbb{K}^{n})\) holds. Moreover, as \(G_{L}\) is injective, we have \(G_{L}^{\dagger}G_{L}=I_{l}\) and so \(G_{L}\lambda_{L}\in C^{1}([0,\infty),\mathbb{K}^{n})\) is equivalent to \(\lambda_{L}\in C^{1}([0,\infty),\mathbb{K}^{l})\). Using (17), the system (15) is equivalent to \[\begin{bmatrix}\frac{d}{dt}Le_{L}(t)-G_{L}\dot{\lambda}_{L}(t)\\ 0\\ 0\\ y(t)\end{bmatrix}=\begin{bmatrix}-J_{11}&-J_{12}&G_{1}&0&-J_{13}\\ J_{12}^{*}&-J_{22}-R&G_{2}&G_{R}&-J_{23}\\ -G_{1}^{*}&-G_{2}^{*}&0&0&-G_{3}^{*}\\ 0&-G_{R}^{*}&0&0&0\\ -J_{13}^{*}&-J_{23}^{*}&-G_{3}&0&J_{33}\end{bmatrix}\begin{bmatrix}e_{L}(t)\\ e_{R}(t)\\ \lambda(t)\\ \lambda_{R}(t)\\ u(t)\end{bmatrix}\] which can be rewritten as \[\begin{bmatrix}\frac{d}{dt}Le_{L}(t)\\ 0\\ 0\\ 0\\ y(t)\end{bmatrix}=\begin{bmatrix}-J_{11}&-J_{12}&G_{1}&0&G_{L}&-J_{13}\\ J_{12}^{*}&-J_{22}-\widetilde{R}&G_{2}&G_{R}&0&-J_{23}\\ -G_{1}^{*}&-G_{2}^{*}&0&0&0&-G_{3}^{*}\\ 0&-G_{R}^{*}&0&0&0&0\\ -G_{L}^{*}&0&0&0&0&0\\ -J_{13}^{*}&-J_{23}^{*}&-G_{3}&0&0&J_{33}\end{bmatrix}\begin{bmatrix}e_{L}(t) \\ e_{R}(t)\\ \lambda(t)\\ \lambda_{R}(t)\\ \lambda_{L}(t)\\ u(t)\end{bmatrix}. \tag{18}\] We now define \[J:=\begin{bmatrix}-J_{11}&-J_{12}&G_{1}&0&G_{L}\\ J_{12}^{*}&-J_{22}&G_{2}&G_{R}&0\\ -G_{1}^{*}&-G_{2}^{*}&0&0&0\\ 0&-G_{R}^{*}&0&0&0\\ -G_{L}^{*}&0&0&0&0\end{bmatrix},\quad R:=\begin{bmatrix}0&0&0&0\\ 0&\widetilde{R}&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix},\quad E=\begin{bmatrix}L&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix}, \tag{19}\] \[B:=\begin{bmatrix}-J_{13}\\ -J_{23}\\ -G_{3}^{*}\\ 0\\ 0\end{bmatrix},\quad N=J_{33},\quad S=0,\quad Q=I,\quad P=0.\] If \((u,x,y)\) solves \((\mathcal{D},\mathcal{L},\mathcal{R})\), then \(Le_{L}\in C^{1}([0,\infty),\mathbb{K}^{n})\) holds and \((u,z,y)\) with \(z=(e_{L},e_{R},\lambda,\lambda_{R},\dot{\lambda}_{L})\) solves (18). This proves (i). To show (ii), let \((u,z,y)\) with \(z=(e_{L},e_{R},\lambda,\lambda_{R},\mu_{L})\) solve (18). Then for \((x_{0},e_{L}(0))\in\mathcal{L}\) there exists unique \(\lambda_{L}^{0}\in\mathbb{K}^{l}\) such that \(x_{0}=Le_{L}(0)+G_{L}\lambda_{L}^{0}\) holds. We set \(\lambda_{L}(t)=\lambda_{L}^{0}+\int_{0}^{t}\mu_{L}(s)\,ds\) and define \(x(t)=Le_{L}(t)-G_{L}\lambda_{L}(t)\). Since \(Le_{L}(0)\) is given, we have \(x(0)=Le_{L}(0)+G_{L}\lambda_{L}^{0}=x_{0}\). Furthermore, \(f_{R}(t)=-\widetilde{R}e_{R}(t)+G_{R}\lambda_{R}(t)\) fulfills \((f_{R}(t),e_{R}(t))\in\mathcal{R}\) and consequently (5) is satisfied. Therefore, \((u,x,y)\) is a solution of \((\mathcal{D},\mathcal{L},\mathcal{R})\) which proves (ii). 
If \(-\mathcal{L}\) is resistive, then Proposition 8 implies that \(L\) as in (16) fulfills \(L=L^{*}\geq 0\) and therefore \(E\) given by (19) satisfies \(E=E^{*}\geq 0\). **Remark 16**.: Assume that for the Lagrange structure \(\mathcal{L}\) in Definition 10 the subspace \(-\mathcal{L}\) is resistive. Theorem 15 yields \(E\geq 0\). Therefore, the pencil \((E,J-R)\) is regular1, if and only if \(\ker E\cap\ker(J-R)=\{0\}\) holds, see e.g. [1, Lemma 6.1.4]. Furthermore, it was shown in [1] that the following holds Footnote 1: I.e., \(\lambda E-(J-R)\) is invertible for some \(\lambda\in\mathbb{C}\) \[(sE-(J-R))^{-1}+(sE^{*}-(J-R)^{*})^{-1}\geq 0\quad\text{for }\text{Re}(s)>0.\] Therefore the resulting descriptor system is positive real, i.e. that the transfer function \(G(s)=B^{*}(sE-(J-R))^{-1}B+N\) fulfills \[G(s)+G(s)^{*}=B^{*}((sE-(J-R))^{-1}+(sE^{*}-(J-R)^{*})^{-1})B\geq 0\quad\text{ for }\text{Re}(s)>0.\] More details on the relation of positive real and pH descriptor systems, as well as their relation to passive descriptor systems can be found in [1]. **Remark 17**.: In the proof of Theorem 15, we applied Proposition 8 to each of the subspaces \(\mathcal{D}\), \(\mathcal{L}\), and \(\mathcal{R}\). Since Proposition 8 holds for monotone subspaces as well, one might generalize and assume that \(\mathcal{L}\) is monotone. In this case, the corresponding pH descriptor system can be derived as in Theorem 15 and \(E+E^{*}\geq 0\) holds. Then, the resulting matrix pair \((E,J-R)\) is said to have positive Hermitian part. In [13], the spectral properties as well as the regularity and the Kronecker canonical form of these pencils were further analysed. ### From descriptor pH to geometric pH Next, we show how to associate a geometric pH system with a given pH descriptor system such that there is a one-to-one correspondence between the solutions of the two systems. The following result is a slight extension of [16, Theorem 3] where no additional Lagrange structure was considered. **Theorem 18**.: _Let a pH descriptor system as in Definition 13 be given with_ \[\ker E\cap\ker Q=\{0\}.\] _Let \(W=\begin{bmatrix}R&P\\ P^{*}&S\end{bmatrix}\) as in (8), \(\Gamma:=\begin{bmatrix}J&B\\ -B^{*}&-N\end{bmatrix}\), set \(r=n+m\), and define_ \[\mathcal{L}:=\operatorname{ran}\begin{bmatrix}E\\ Q\end{bmatrix}\subset\mathbb{K}^{n}\times\mathbb{K}^{n},\qquad\mathcal{R}:= \operatorname{gr}(-W)=\operatorname{ran}\begin{bmatrix}I_{r}\\ -W\end{bmatrix}\subset\mathbb{K}^{r}\times\mathbb{K}^{r}.\] _Further, with the matrices_ \[U:=\begin{bmatrix}I_{n}&0&0\\ 0&0&I_{r}\\ 0&I_{m}&0\end{bmatrix}\in\mathbb{K}^{(n+r+m)\times(n+r+m)}\qquad\text{and} \qquad\widetilde{D}:=\begin{bmatrix}-\Gamma&-I_{r}\\ I_{r}&0\end{bmatrix}\in\mathbb{K}^{(n+r+m)\times(n+r+m)}\] _define the subspace_ \[\mathcal{D}:=\operatorname{gr}^{-1}\bigl{(}U\widetilde{D}U^{*}\bigr{)}.\] _Then \((\mathcal{D},\mathcal{L},\mathcal{R})\) is a geometric pH system. Moreover, the following hold:_ 1. _If_ \((u,z,y)\) _solves the DAE (_7_), then_ \((u,Ez,y)\) _solves the geometric pH system_ \((\mathcal{D},\mathcal{L},\mathcal{R})\)_._ 2. _If_ \((u,x,y)\) _solves the geometric pH system_ \((\mathcal{D},\mathcal{L},\mathcal{R})\) _with_ \(f_{R}\)_,_ \(e_{R}\)_, and_ \(e_{L}\) _as in Definition_ 10_, then_ \((u,z,y)\) _solves the DAE (_7_), where_ \(z\) _is the unique function satisfying_ \(x=Ez\) _and_ \(e_{L}=Qz\)_._ Proof.: Assume that \((u,z,y)\) solves (7). 
Then, setting \[e_{L}:=Qz,\quad f_{R}:=\begin{bmatrix}Qz\\ u\end{bmatrix},\quad e_{R}:=-Wf_{R},\] we have \[\begin{bmatrix}-\dot{x}\\ y\\ f_{R}\end{bmatrix}=\begin{bmatrix}-\frac{d}{dt}Ez\\ y\\ Qz\\ u\end{bmatrix}=\begin{bmatrix}(-J+R)Qz+(-B+P)u\\ (B+P)^{*}Qz+(S+N)u\\ Qz\\ u\end{bmatrix}=\begin{bmatrix}(-\Gamma+W)f_{R}\\ f_{R}\end{bmatrix}=\begin{bmatrix}-\Gamma f_{R}-e_{R}\\ f_{R}\end{bmatrix}=\widetilde{D}\begin{bmatrix}Qz\\ u\\ e_{R}\end{bmatrix}.\] Now applying \(U\) from the left to this equation shows that \((-\dot{x},f_{R},y,e_{L},e_{R},u)\in\mathcal{D}\). Conversely, if \((u,x,y)\) solves the geometric pH system \((\mathcal{D},\mathcal{L},\mathcal{R})\), then there exist functions \(f_{R},e_{L},e_{R}\) such that \((-\dot{x},f_{R},y,e_{L},e_{R},u)\in\mathcal{D}\), \((x,e_{L})\in\mathcal{L}\), and \((f_{R},e_{R})\in\mathcal{R}\). By the definition of \(\mathcal{L}\) and \(\mathcal{R}\), there exists a unique function \(z\) such that \(x=Ez\) and \(e_{L}=Qz\). Moreover, \(e_{R}=-Wf_{R}\), and we obtain \[\begin{bmatrix}-\frac{d}{dt}Ez\\ y\\ f_{R}\end{bmatrix}=\begin{bmatrix}-\dot{x}\\ y\\ f_{R}\end{bmatrix}=\widetilde{D}\begin{bmatrix}e_{L}\\ u\\ e_{R}\end{bmatrix}=\begin{bmatrix}-\Gamma&-I_{r}\\ I_{r}&0\end{bmatrix}\begin{bmatrix}Qz\\ u\\ -Wf_{R}\end{bmatrix}=\begin{bmatrix}-\Gamma\begin{bmatrix}Qz\\ u\\ -Wf_{R}\end{bmatrix}\begin{bmatrix}Qz\\ u\\ \end{bmatrix}+Wf_{R}\\ \begin{bmatrix}Qz\\ u\end{bmatrix}\end{bmatrix}.\] This implies that \(f_{R}=[\begin{smallmatrix}Qz\\ u\end{smallmatrix}]\) and thus \[\begin{bmatrix}-\frac{d}{dt}Ez\\ y\end{bmatrix}=(W-\Gamma)\begin{bmatrix}Qz\\ u\end{bmatrix},\] which means that \((u,z,y)\) solves (7). **Remark 19**.: For pH descriptor system as in Definition 13 satisfying \(\ker E\cap\ker Q=\{0\}\), we obtain a geometric pH system from Theorem 18. Applying Theorem 15 to this geometric system, leads back to a pH descriptor system as in Definition (13) which fulfills \(Q=Id\). Hence, we obtain a pH descriptor system with invertible \(Q\), that is equivalent to the original descriptor system, but with a larger state space dimension, see also [16, 16] for alternative methods on achieving invertibility of \(Q\). Conclusion, extensions and open problems We have shown that the geometric formulation of port-Hamiltonian systems is equivalent to the state-space representation by means of differential algebraic equations. To this end, we utilized tools from multilinear algebra and provided constructive proofs to transfer either of the formulation to the other. The main assumption to derive a geometric representation from a pH-DAE was that the matrices in the Hamiltonian \(\mathcal{H}(x)=\frac{1}{2}x^{*}Q^{*}Ex\), \(x\in\mathbb{K}^{n}\), satisfy \(\ker E\cap\ker Q=\{0\}\). Concerning future research, a first extension could be the investigation of the case of a non-trivial kernel intersection. In this case, one cannot directly define a Lagrange subspace by means of these two matrices as it was done in the proof of Theorem 18. One possible remedy could be to isolate the common kernel by means of a common singular value decomposition of \(E\) and \(Q\), similarly as in [10, Lemma 3.6]. Furthermore, the geometric pH representation obtained in Theorem 18 could be not optimal in the sense that the dimensions of constructed Dirac and the maximal resistive structure, which is equal to \(2(n+m)\), might be further reduced. 
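As a concrete illustration of the descriptor form (7) and of Definition 13 (a standard textbook-style example of our own, not taken from the works cited here), consider a damped mass-spring system with mass \(m>0\), stiffness \(k>0\), damping \(c\geq 0\), force input \(u\), and state \(z=(q,p)^{\top}\) collecting position and momentum. Choosing
\[E=I_{2},\quad Q=\begin{bmatrix}k&0\\ 0&1/m\end{bmatrix},\quad J=\begin{bmatrix}0&1\\ -1&0\end{bmatrix},\quad R=\begin{bmatrix}0&0\\ 0&c\end{bmatrix},\quad B=\begin{bmatrix}0\\ 1\end{bmatrix},\quad P=0,\quad S=0,\quad N=0\]
satisfies all structural conditions of Definition 13, and (7) reduces to \(\dot{q}=p/m\), \(\dot{p}=-kq-(c/m)p+u\), \(y=p/m\), with Hamiltonian \(H(z)=\frac{1}{2}\big{(}kq^{2}+p^{2}/m\big{)}\) and power balance \(\frac{\mathrm{d}}{\mathrm{d}t}H(z(t))=\operatorname{Re}\big{(}u(t)^{*}y(t)\big{)}-c\,|p(t)/m|^{2}\), consistent with Lemma 14.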
In view of an extension to the infinite-dimensional case, a first step would be to define an infinite-dimensional differential algebraic formulation of pH systems. Whereas infinite-dimensional DAEs are a very delicate issue [11, 12, 13, 14], a definition for closed systems was given in [10] by incorporating the pH structure, i.e., the dissipativity of the main operator. From a geometric point of view, infinite-dimensional pH systems give rise to (Stokes-)Dirac structures [12], which also include boundary port variables. An analytical viewpoint on Dirac structures for skew-symmetric differential operators was provided in [11, 13]. In the present note, we focused on continuous-time pH systems. Recently, in [10] a definition for discrete-time pH descriptor systems was given. Therein, the discrete-time pH system was obtained from a Cayley transformation of the continuous-time system. Furthermore, the Cayley transform was applied to the underlying Dirac structures, which results in contractive subspaces as discrete-time counterparts of Dirac structures. It remains an open problem to compare the geometric pH formulation and the descriptor pH formulation for discrete-time systems.
2304.08159
WaterLily.jl: A differentiable fluid simulator in Julia with fast heterogeneous execution
Integrating computational fluid dynamics (CFD) software into optimization and machine-learning frameworks is hampered by the rigidity of classic computational languages and the slow performance of more flexible high-level languages. WaterLily.jl is an open-source incompressible viscous flow solver written in the Julia language. The small code base is multi-dimensional, multi-platform and backend-agnostic (serial CPU, multi-threaded, & GPU execution). The simulator is differentiable and uses automatic-differentiation internally to immerse solid geometries and optimize the pressure solver. The computational time per time step scales linearly with the number of degrees of freedom on CPUs, and we see up to a 182x speed-up using CUDA kernels. This leads to comparable performance with Fortran solvers on many research-scale problems opening up exciting possible future applications on the cutting edge of machine-learning research.
Gabriel D. Weymouth, Bernat Font
2023-04-17T11:11:44Z
http://arxiv.org/abs/2304.08159v1
# Waterlily.Jl: A Differentiable Fluid Simulator In Julia With Fast Heterogeneous Execution ###### Abstract Integrating computational fluid dynamics (CFD) software into optimization and machine-learning frameworks is hampered by the rigidity of classic computational languages and the slow performance of more flexible high-level languages. WaterLily.jl is an open-source incompressible viscous flow solver written in the Julia language. The small code base is multi-dimensional, multi-platform and backend-agnostic (serial CPU, multi-threaded, & GPU execution). The simulator is differentiable and uses automatic-differentiation internally to immerse solid geometries and optimize the pressure solver. The computational time per time step scales linearly with the number of degrees of freedom on CPUs, and we see up to a 182x speed-up using CUDA kernels. This leads to comparable performance with Fortran solvers on many research-scale problems opening up exciting possible future applications on the cutting edge of machine-learning research. \(34^{th}\) International Conference on Parallel Computational Fluid Dynamics May 29-31 2023, Cuenca Ecuador **Key words:** automatic differentiation, heterogeneous-programming, Cartesian-grid methods, Julia, GPU ## 1 Introduction During the last decade, the computational fluid dynamics (CFD) community has embraced the surge of machine learning (ML) and the new developments in hardware architecture, such as general-purpose GPUs. Hence, classic CFD solvers based on low-level programming languages (C, Fortran) and CPU memory-distributed computations are now adapted to accommodate these new tools. On one hand, the integration of high-level ML libraries and low-level CFD solvers is not straight-forward, aka. the two-language problem. When deploying a ML model online with the CFD solver, data exchange is often performed at disk level, significantly slowing down the overall runtime because of disk i/o operations. An improved way to exchange data is performed through memory, either using Unix sockets [1, 2] or an in-memory distributed database [3], which increases the software complexity. On the other hand, porting classic CFD solvers to GPU is also a non-trivial task which often requires the input and expertise of GPU vendors [4]. Julia [5] is an open-source, compiled, dynamic, and composable programming language specifically designed for scientific computing which can help tackle such software challenges. High-level libraries and low level code can co-exist without compromising computing performance. Moreover, its excellent meta-programming capabilities, dynamic types, and multiple-dispatch strategy maximizes code re-usability. A great example of this is the KernelAbstractions.jl library [6], which enables writing heterogeneous kernels for different backends (multithreaded CPU, NVIDIA, AMD, and others) in a single framework. Julia has been also tested in many HPC systems, and the reader is referred to [7] for a comprehensive review. ## 2 Computational approach WaterLily solves the incompressible Navier-Stokes equations on a uniform Cartesian background grid using a third-order finite volume scheme with explicit adaptive time stepping and implicit Large Eddy Simulation turbulence modelling [8]. The Boundary Data Immersion Method [9] which accurately enforces general boundary conditions through modification of the pressure Poisson matrix [10] is used. 
The pressure system is solved using Geometric-MultiGrid (GMG), with a solution time that scales linearly with the number of cells [11]. Julia's flexible and fast programming capabilities enabled the implementation of WaterLily to have many special features. For example, Automatic Differentiation (AD) is used to define all of the properties of the immersed geometry from a user-defined signed-distance function and coordinate mapping function. Indeed, the entire solver is differentiable, which has been used to develop accelerated data-driven GMG methods [11]. However, the most important Julia features for implementing the solver to run on heterogeneous back-ends are (i) the typing system, (ii) the meta-programming capabilities, and (iii) the rich open-source packages. Multiple-dispatch enables simple functions (such as array-scaling or reduction) to be written at a high level by the user, and the compiler will specialize the code for efficient execution on the CPU or GPU. For more specialized tasks, meta-programming is used to generate efficient code based on a general kernel. As an example, the gradient of the n-dimensional pressure field p is applied to the velocity field u using

```julia
for i in 1:n  # apply pressure gradient
    @loop u[I,i] -= coeff[I,i]*(p[I]-p[I-del(i)]) over I in inside(p)
end
del(i) = ntuple(j -> j==i ? 1 : 0, n)
```

where del(i) defines a step in direction i. @loop is a macro which evaluates this kernel at the points I inside the field using the KernelAbstractions.jl package [6] to generate optimized code for each back-end. This macro is used for nearly every loop in the code-base, enabling the efficient heterogeneous flow solver to be written in only around 800 lines of code! Note that there are drawbacks to this simple kernel-driven approach. For one, each kernel tends to be fairly short. While this makes the code easy to read, it is not great for loading up the GPU or CPU threads. In the example above, the n loops _can_ be combined into a single loop, improving the loading. However, this isn't possible for many loops in the code, and would require significant refactoring for many others, meaning only large array operations will see a speed-up. The second drawback is that kernels cannot be used for loops that must iterate through points in order, such as in many relaxation methods. Because of this, we now use Conjugate-Gradient smoothing within the GMG levels. ## 3 Applications and results A performance comparison between baseline serial execution, multi-threaded CPU execution, and GPU execution is presented next. The baseline execution does not make use of the KernelAbstractions.jl library, whereas the parallel executions do. Two different 3D cases are considered: the Taylor-Green vortex (TGV) at \(Re=10^{5}\) and flow past a donut at \(Re=10^{3}\) (see Figure 2). A total of 0.1 convective time units is simulated for each case. The main difference between the cases is the presence of a solid boundary, which makes the pressure solver dominate the execution time of the simulation. Otherwise, the convection-diffusion routine becomes the most expensive. Different grid sizes are considered for each case. For the TGV, \(N=(2^{p},2^{p},2^{p})\) where \(p=\{5,6,7,8\}\) is considered. For the donut case, \(N=(2^{p+1},2^{p},2^{p})\) where \(p=\{4,5,6,7\}\) is considered. The grids are selected so that the tests fit on an NVIDIA GeForce GTX 1650 Ti GPU card.
The CPU execution is performed on an Intel Core i7-10750H x6 processor, with thread-affinity set to each physical core (in the following, multi-threaded CPU execution is denoted as "CPU" and serial CPU execution as "serial"). The TGV results are presented first. As observed in Figure 1, the runtime of the time-stepping routine sim_step! increases linearly with the grid size in the serial execution. The CPU multi-threaded execution and the GPU execution significantly speed up this runtime. Specifically, it can be observed in Table 1 that the CPU and GPU executions can outperform the serial execution by factors of 9 and 70, respectively. Table 2 breaks down the main routines in sim_step!, showing prominent speed-ups for the convection-diffusion routine conv_diff! and the pressure solver project!, which are the most expensive kernels in the incompressible flow solver. The conv_diff! routine dominates the TGV test case, and so the speed-ups of around 10x on CPU and 70x on GPU are reflected in the overall simulation speed-up. The donut test is dominated by project! and sees up to a 23x speed-up on CPU and 182x on the GPU. Note that the CPU execution stagnates at \(\log_{2}(N)=21\) whereas the GPU execution still improves up to the finest grid.
2302.13191
DeepCPG Policies for Robot Locomotion
Central Pattern Generators (CPGs) form the neural basis of the observed rhythmic behaviors for locomotion in legged animals. The CPG dynamics organized into networks allow the emergence of complex locomotor behaviors. In this work, we take this inspiration for developing walking behaviors in multi-legged robots. We present novel DeepCPG policies that embed CPGs as a layer in a larger neural network and facilitate end-to-end learning of locomotion behaviors in deep reinforcement learning (DRL) setup. We demonstrate the effectiveness of this approach on physics engine-based insectoid robots. We show that, compared to traditional approaches, DeepCPG policies allow sample-efficient end-to-end learning of effective locomotion strategies even in the case of high-dimensional sensor spaces (vision). We scale the DeepCPG policies using a modular robot configuration and multi-agent DRL. Our results suggest that gradual complexification with embedded priors of these policies in a modular fashion could achieve non-trivial sensor and motor integration on a robot platform. These results also indicate the efficacy of bootstrapping more complex intelligent systems from simpler ones based on biological principles. Finally, we present the experimental results for a proof-of-concept insectoid robot system for which DeepCPG learned policies initially using the simulation engine and these were afterwards transferred to real-world robots without any additional fine-tuning.
Aditya M. Deshpande, Eric Hurd, Ali A. Minai, Manish Kumar
2023-02-25T23:16:57Z
http://arxiv.org/abs/2302.13191v1
# DeepCPG Policies for Robot Locomotion ###### Abstract Central Pattern Generators (CPGs) form the neural basis of the observed rhythmic behaviors for locomotion in legged animals. The CPG dynamics organized into networks allow the emergence of complex locomotor behaviors. In this work, we take this inspiration for developing walking behaviors in multi-legged robots. We present novel DeepCPG policies that embed CPGs as a layer in a larger neural network and facilitate end-to-end learning of locomotion behaviors in deep reinforcement learning (DRL) setup. We demonstrate the effectiveness of this approach on physics engine-based insectoid robots. We show that, compared to traditional approaches, DeepCPG policies allow sample-efficient end-to-end learning of effective locomotion strategies even in the case of high-dimensional sensor spaces (vision). We scale the DeepCPG policies using a modular robot configuration and multi-agent DRL. Our results suggest that gradual complexification with embedded priors of these policies in a modular fashion could achieve non-trivial sensor and motor integration on a robot platform. These results also indicate the efficacy of bootstrapping more complex intelligent systems from simpler ones based on biological principles. Finally, we present the experimental results for a proof-of-concept insectoid robot system for which DeepCPG learned policies initially using the simulation engine and these were afterwards transferred to real-world robots without any additional fine-tuning. Developmental robotics, Central pattern generator, Locomotion, Reinforcement learning, Deep neural networks ## I Introduction Biologically inspired robots often have many degrees of freedom. Locomotion in such legged robots involves the interaction of a multi-body system with the surrounding environments through multiple contact points. This presents a formidable challenge for traditional control approaches. Self-organization of complex behaviors is seen as a much more viable approach for these systems. While it may be possible to solve some of the challenges of autonomous locomotion in controlled environments, traditional approaches are not useful for real-world situations where multi-task generalization of the system is required. Biologically, it is known that Central Pattern Generators (CPGs) are the neural modules primarily responsible for generating rhythmic responses that result in oscillatory functions [1, 2, 3, 4]. CPGs are used in a number of biological functions like walking, swimming, flying, etc. [5] and have inspired development of elegant biomimetic control approaches for locomotion of legged robots [6, 7, 8]. Furthermore, sensory feedback also plays an important role in regulating the oscillatory behaviors of CPGs [9]. It has been reported that CPGs, sensory information and descending brain inputs interact with each other to orchestrate coordinated movement of the six legs of a walking insect [10, 11]. DeAngelis et al. [12] observed that sensory perturbations in walking Drosophila are responsible for altering their periodic walking gaits. Their findings suggested that the variability in Drosophila walking gaits could be a result of a low-dimensional control architecture, which provides a framework for understanding the neural circuits that regulate hexapod legged locomotion. CPG networks are capable of generative encoding, and have inherent flexibility in combining phase coupling with traditional kinematic control to produce a variety of coordinated behaviors [13, 14, 15, 16, 17, 18].
The incorporation of sensory feedback into CPGs has also been investigated, but this often requires extensive engineering. Thus, such methods have been developed for controlled scenarios including salamander-inspired Pleurobot [15], quadruped robots [19], worm-like robots [20], stick-insect robots [21], and dung beetle-like robot [22]. With advances in robotics, deep learning, and neuroscience, we are starting to see real-world robots that not only look like but also interact with the environment just as living creatures do [15, 23, 24, 25, 26]. However, most of these need hand-tuned parameter sets and are only evaluated in the constrained or supervised setting. The models used in these robots are simplified further with various assumptions to reduce the number of tunable parameters. Although such methods have beautifully demonstrated how higher-level neural modulations in cortical centers could enable the emergence of various locomotion strategies [15], there has been limited work on how to extend these models for actively using high-dimensional complex observations from various on-board sensors to modulate the cortical signals. Previous studies have demonstrated the usefulness of biomimetic movement templates such as movement primitives and CPGs in robots [27, 28, 29, 30, 31, 32, 33, 34]. The Hodgkin-Huxley model of action potential generation in single neurons [35, 36] has been used for developing locomotion strategies in snake-like [37, 38] and quadruped robots [39]. The Matsuoka model [40] has been used to produce robot locomotion [41, 17]. The Van der Pol oscillator [42] and the Hopf oscillator [43] were used in swinging robots [44] and for generating walking gaits [45, 46]. The Kuramoto model of coupled phase oscillators is one of the most abstract and simple phenomenological models [47, 48] and is widely used in the robotics community to develop locomotion strategies [15, 17, 49, 50]. Most of these approaches have been developed with a behavior-specific objective for robots and employed parameter tuning for behavioral policies either using manual effort or auto-tuning with a genetic algorithm. For high-dimensional sensor and action spaces, deep learning methods embedded with movement primitive have been investigated in the context of imitation learning or supervised learning [51, 52]. Model-based control guided imitation learning has also been proposed for learning various quadruped robot gaits [53]. The end-to-end learning of behavioral control policies based on high-dimensional sensor feedback along with movement priors has received limited attention. In this work, we focus on the problem of generality in controlling legged robots with high-dimensional environment sensing (observation space) and interaction modalities (action space) using CPG-inspired movement priors. We propose to bring together current ideas in reinforcement learning and deep learning, and expressive parameterizations resulting from CPGs in developing novel models that are capable of learning expressive behaviors for walking robots. Brooks et al. [54] presented one of the first works that demonstrated feedback and reactive control laws alone can generate the full locomotive behaviors. Reinforcement learning (RL), on the other hand, offers a framework for artificial agents to _learn_ such decision-making in their environments by trial and error using the feedback from their experiences [55]. 
The paradigm of modern deep reinforcement learning (DRL) algorithms has emerged with promising approaches to teach robots complex end-to-end control tasks such as locomotion [56, 57]. Typically, an RL agent (i.e. control policy) is trained to predict in raw action spaces and outputs actions in terms of motor torques or joint angles that may lead to non-smooth trajectories. It has been suggested previously by various works [58, 59, 60] that if we model the policy to predict actions in the trajectory space of the system, its response could be constrained to remain smooth. For example, in the case of a robotic system, if its control policy predicts the actions as trajectories of its various motor joints, the system response remains smooth even when these actions change (see cpg \(0^{\prime}\) in Fig. 1). We propose to use this principle in training the locomotion policies for the walking robots. Various DRL algorithms have been proposed for learning legged robot locomotion where the control policies are trained from scratch. The work in [61] presented an attention-based recurrent encoder network for learning robust quadruped walking in challenging terrain. Authors in [62] presented a meta-RL approach that could adapt to different quadruped designs. Using locomotion priors with DRL has also been investigated in the past. Structured control networks proposed separation of linear and nonlinear parts of the control policy and using sinusoidal activations for the nonlinear part of the policy while training the locomotion agents [63]. Although these approaches showed improved performance compared to conventional DRL training, the policy architecture did not preserve the structure of the dynamical system defined by the CPG models. The work in [63] was extended in [64] to use recurrent neural networks. Authors in [64] fine-tuned locomotion priors using an evolutionary approach. Authors in [65, 41] treated CPG modules as part of the environment making it a black-box while training the policy, and pre-tuned their CPG modules either manually or by genetic algorithm. Although the work in [66] presented a similar approach, the policy architecture requires this method to execute the actions every time-step of the control task, and this study was also limited to a single-legged system for a hopping task. In this work, we propose to address the limitations of previous works by developing a hierarchical locomotion policy architecture that embeds a CPG model (refer Fig. 2). We embed the locomotion policies, that we call DeepCPG, with Kuramoto oscillators [47] representing the CPG models. In this hierarchical setting, the artificial neural network predicts the parameters that define the CPG specifications, while the recurrent CPG layer outputs the action commands to the robots. We show the effectiveness of the proposed approach for developing the end-to-end control strategies for walking robots in high-dimensional observation and action spaces. The rest of this paper is outlined as follows: Section II briefly discusses the RL basics. Section III provides the details of the proposed hierarchical policy architecture followed by Section IV describing the scaling strategy proposed for DeepCPG. Section V describes the training and deployment details for the proposed policy. Section VI presents the results and discussions. The conclusions and future directions are discussed in Section VII. 
## II Background The standard continuous control RL setting is adopted for the work reported in this paper, and the terminology follows the standard RL textbook by Sutton et al. [55]. The RL agent interacts with an environment according to a behavior policy \(\pi\). The environment produces a state (or observation) \(s_{t}\) for each step \(t\). The agent samples the action \(u_{t}\sim\pi\) and applies it to the environment. For every action, the environment yields a reward \(r_{t}\). The aim of the RL agent is given by Eq. (1): \[\pi^{*}=\arg\max_{\pi}\mathbb{E}_{\tau\sim\pi,p_{s_{0}}}\left[R(\tau)\right]=\arg\max_{\pi}\mathbb{E}_{\tau\sim\pi,p_{s_{0}}}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\mid\pi\right] \tag{1}\] where \(\tau=(s_{0},u_{0},s_{1},u_{1},\dots)\) is the state-action trajectory sampled using policy \(\pi\), the initial state is sampled from a fixed distribution \(s_{0}\sim p_{s_{0}}\), and \(0\leq\gamma<1\) is the discount factor. \(R(\tau)\) is the return of the agent over the complete episode. One may efficiently learn a good policy from state-action-reward transitions collected by the RL agent through interaction with the environment. Temporal difference learning, a model-free RL approach, provides the framework to learn a control policy based on these collected interactions and by bootstrapping from the current estimate of the value function [55]. In our approach, we make use of the popular Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm [67] for learning in a continuous action space. (Fig. 1: Kuramoto model response to state and parameter perturbations: CPG \(0\) and CPG \(0^{\prime}\) are initialized at identical states. The plot contains the CPG outputs \(y_{0}\) and \(y_{0^{\prime}}\). For CPG \(0\), the CPG states \(\phi_{0},a_{0},b_{0}\) were perturbed to random values at iterations \(500\) and \(1500\); for CPG \(0^{\prime}\), the CPG parameters \(\omega_{0^{\prime}},A_{0^{\prime}},B_{0^{\prime}}\) were perturbed to random values at iterations \(500\) and \(1500\). Refer to equations (3)-(6) for details.) This algorithm is a variant of Deep Deterministic Policy Gradient (DDPG) [68]. Using DDPG, a policy \(\pi_{\theta}\) (parameterized by \(\theta\)) and the state-action value function \(Q_{\kappa}\) (parameterized by \(\kappa\)) are concurrently learned. The Q-function represents the value of being in a state \(s_{t}\) and taking action \(u_{t}\), and it is trained to minimize the Bellman error over all the sampled transitions from the collected data, as given by Eq. (2). The policy \(\pi_{\theta}\) is trained to maximize \(Q_{\kappa}(s_{t},\pi_{\theta}(s_{t}))\) over all the observed states collected by environment interactions. \[L(s_{t},u_{t},s_{t+1})=(Q_{\kappa}(s_{t},u_{t})-r_{t}-\gamma Q_{\kappa}(s_{t+1},\pi_{\theta}(s_{t+1})))^{2} \tag{2}\] Although TD3 and DDPG train a deterministic policy, the experience replay is collected by augmenting the actions with Gaussian noise. We note that TD3 makes several modifications to the DDPG algorithm to yield a robust and stable policy learning procedure. These modifications include (1) ensemble learning over Q-functions (two Q-functions are used), (2) updating the policy and target networks less frequently than the Q-functions, and (3) adding noise to the target actions to make it harder for the control policy to exploit Q-function errors by smoothing out Q-values along with changes in action.
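As a concrete illustration of the TD3 modifications listed above, the following is a minimal PyTorch sketch, not the authors' implementation, of the clipped double-Q critic target with target-policy smoothing. The batch layout, the network interfaces (actor_targ, q1_targ, q2_targ), and the noise, clipping, and discount values are illustrative assumptions.

```python
import torch

@torch.no_grad()
def td3_target(batch, actor_targ, q1_targ, q2_targ,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """TD3 critic target: Eq. (2) evaluated with target networks, clipped
    double-Q, and target-policy smoothing (all values are illustrative)."""
    s_next, r, done = batch["s_next"], batch["r"], batch["done"]

    # target-policy smoothing: add clipped Gaussian noise to the target action
    a_next = actor_targ(s_next)
    noise = torch.clamp(noise_std * torch.randn_like(a_next), -noise_clip, noise_clip)
    a_next = torch.clamp(a_next + noise, -act_limit, act_limit)

    # clipped double-Q: take the element-wise minimum of the two target critics
    q_next = torch.min(q1_targ(s_next, a_next), q2_targ(s_next, a_next))
    return r + gamma * (1.0 - done) * q_next

# both critics then regress onto the same target:
#   loss_i = ((q_i(s, u) - td3_target(batch, ...)) ** 2).mean()
```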
Out of the two Q-functions \(Q_{\kappa_{1}}\) and \(Q_{\kappa_{2}}\) used for training the control policy in TD3, minimum of the two is chosen as a target value to prevent the over estimation [69]. For further details regarding TD3, we refer the readers to [67]. ## III Approach In this work, we propose a hierarchical policy architecture to connect the DRL-based policies to CPG models. Figure 2 provides an overview of the control architecture using this policy. In this three-level hierarchy, the high-level policy is a neural network \(\pi_{\theta}\) parameterized by \(\theta\). The policy \(\pi_{\theta}\) generates goals for the mid-level CPG policy \(\pi_{CPG}\) that is based on the Kuramoto oscillator model [15]. The Kuramoto Model was a design choice but any other CPG models can be chosen without loss of generality. Finally, \(\pi_{CPG}\) generates goals for the low-level primitive controller \(\pi_{PC}\) that generates motor commands for the robot motor joints. The proposed method enables two objectives: (1) The neural network enables acting in the CPG parameter space, so the trajectories generated by CPGs remain smooth and can direct lower-level controllers generating joint motor commands for the robot safely as illustrated in Fig.1. (2) The CPG models allow embedding **intrinsic primitive behaviors** in the policy, enabling faster learning of the goal-directed behaviors. ### _Neural Policy \(\pi_{\theta}\)_ There are complex interactions between the central nervous system and the peripheral nervous system in animals. One of the basic and most important functions of the central nervous system is processing sensory feedback. Observations sensed from the environment need to be processed along with the agent's desired goals to generate appropriate responses. For the presented approach, this process is simulated using the neural policy \(\pi_{\theta}\). This policy takes in the sensor feedback from the robot and the desired goal to generate the response \(g_{CPG}\) that is relayed to the CPG policy \(\pi_{CPG}\), where \(g_{CPG}\) represents the parameters that drive the CPG behavior (see Fig. 2). Section III-B provides further details regarding this part of the system. ### _Central Pattern Generators for Motor Trajectories_ We simulate the central pattern generator for motor dynamics using the Kuramoto Model. Equations (3)-(6) describe this dynamical system. Although simple, this CPG network model enables replication of many behaviors observed in vertebrates. It also allows modeling of inter-oscillator couplings and assumes the presence of underlying oscillatory mechanisms. Simple modulation of CPG parameters can lead to emergence of various useful behaviors as shown in several studies [15, 16, 17, 70]. In this work, a single CPG controls a single joint on the robot. \[\dot{\phi}_{i} = \alpha_{\omega}\omega_{i}+\sum_{i^{\prime}\neq i}a_{i^{\prime}} \alpha_{w}w_{ii^{\prime}}\sin(\phi_{i^{\prime}}-\phi_{i}-\alpha_{\varphi} \varphi_{ii^{\prime}}) \tag{3}\] \[\ddot{a}_{i} = \alpha_{a}(\beta_{a}(\alpha_{A}A_{i}-a_{i})-\dot{a}_{i})\] (4) \[\ddot{b}_{i} = \alpha_{b}(\beta_{b}(\alpha_{B}B_{i}-b_{i})-\dot{b}_{i})\] (5) \[y_{i} = b_{i}+a_{i}\sin(\phi_{i}) \tag{6}\] Fig. 2: Overview of Hierarchical Policy architecture used for Quadruped Robot In Equations (3)-(6), for CPG \(i\), \(\phi_{i}\) is the phase, \(\omega_{i}\) is the natural frequency, \(A_{i}\) is the desired amplitude and \(B_{i}\) is the desired offset, \(a_{i}\) is the current amplitude and \(b_{i}\) is the current offset. 
\(w_{ii^{\prime}}\geq 0\) represents the coupling weight of the incoming connection from CPG \(i^{\prime}\) to CPG \(i\), and \(\varphi_{ii^{\prime}}\) is the connection phase bias between the CPGs \(i\) and \(i^{\prime}\). Parameters \((\alpha_{a},\beta_{a})\) and \((\alpha_{b},\beta_{b})\) are constants, whereas \(\dot{\phi}_{i},\dot{a}_{i},\dot{b}_{i}\) represent the first time derivatives and \(\ddot{a}_{i},\ddot{b}_{i}\) the second time derivatives of the respective variables. The external modulation constants \(\alpha_{x}\geq 0\,\forall x\in\{w,\varphi,\omega,A,B\}\) account for possibly user-defined external influence on the parameters that drive the \(\pi_{CPG}\) policy. For instance, \(\alpha_{A}\) and \(\alpha_{B}\) could depend on the motor joint limits on the robot and could be set to match these limits. Parameters \(\alpha_{w}\), \(\alpha_{\varphi}\) and \(\alpha_{\omega}\) influence the rate of change of the joint angles. We kept these values constant for the robot in our implementation. The policy \(\pi_{\theta}\) generates \(g_{CPG}\equiv\{w,\varphi,\omega,A,B\}\) to govern the CPG behavior for the robot. Thus, \(A_{i}\in A\) is then scaled by \(\alpha_{A}\) depending on the system requirement after a prediction from \(\pi_{\theta}\). The output of CPG \(i\) is given by \(y_{i}\). The CPG network produces the desired joint trajectories for all the motor joints on the robot. The parameters \((w_{ii^{\prime}},\varphi_{ii^{\prime}},\omega_{i},A_{i},B_{i})\) define the behavior of CPG node \(i\) in this network. We assume that each node in the CPG network influences every other node symmetrically. Therefore, for a robot with \(N\) motor joints, we have an \(N\times N\) symmetric weight matrix \(w=\{w_{ii^{\prime}}\}\,\forall i,i^{\prime}\in\{1,\ldots,N\}\) with zero diagonal elements, an \(N\times N\) skew-symmetric phase bias matrix \(\varphi=\{\varphi_{ii^{\prime}}\}\,\forall i,i^{\prime}\in\{1,\ldots,N\}\), and natural frequencies \(\omega\), desired amplitudes \(A\), and desired offsets \(B\), each forming an \(N\)-dimensional vector. The zero diagonals of the matrices \(\varphi\) and \(w\) signify that CPG nodes do not have recurrent connections to themselves and that the network formed by these nodes is a bidirectional network. In this bidirectional CPG network, each CPG node \(i\) influences its neighboring CPG node \(i^{\prime}\) proportionally to the weight \(w_{ii^{\prime}}\). Node \(i\) being out of phase with node \(i^{\prime}\) by \(\varphi_{ii^{\prime}}\) makes node \(i^{\prime}\) out of phase with node \(i\) by \(-\varphi_{ii^{\prime}}\), as a result of the skew-symmetric matrix \(\varphi\). Additionally, the assumption of a symmetric \(w\) and skew-symmetric \(\varphi\) halves the total number of neural outputs predicted by \(\pi_{\theta}\). In Fig. 2, the set of parameters \(\{w,\varphi,\omega,A,B\}\) is also referred to as the CPG Goals \(g_{CPG}\). The value of \(g_{CPG}\) is predicted using the neural policy \(\pi_{\theta}\). Together, \(g_{CPG}\) and Equations (3)-(6) form the CPG Policy \(\pi_{CPG}\). This \(\pi_{CPG}\) consists of time-dependent differential equations and maintains a hidden state \(h_{t}\) at each step \(t\) that consists of \(\{\phi_{t},\dot{\phi}_{t},a_{t},\dot{a}_{t},\ddot{a}_{t},b_{t},\dot{b}_{t},\ddot{b}_{t}\}\). The CPG network outputs the desired motor joint states \(g_{j}=y(t)\) (referred to as Joint Goals in Fig. 2), which is an \(N\)-dimensional vector for a robot with \(N\) motor joints at each step \(t\).
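To make the CPG layer concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of one Euler integration step of Equations (3)-(6). The function name cpg_step, the dictionary layout of the state and parameters, the step size, and all numerical constants are illustrative assumptions. Because the update is composed of differentiable tensor operations, gradients of the outputs with respect to the CPG parameters predicted by \(\pi_{\theta}\) are available, which is the property exploited later (cf. Eq. (9)) for end-to-end training.

```python
import torch

def cpg_step(state, params, consts, dt=0.01):
    """One Euler step of the Kuramoto CPG network, Eqs. (3)-(6).

    state : dict with phi, a, a_dot, b, b_dot        (each of shape [N])
    params: dict with w [N,N], varphi [N,N], omega [N], A [N], B [N]
            (the quantities predicted by the neural policy pi_theta)
    consts: dict of external modulation constants alpha_*, beta_* (assumed values)
    """
    phi, a, a_dot, b, b_dot = (state[k] for k in ("phi", "a", "a_dot", "b", "b_dot"))
    w, varphi, omega, A, B = (params[k] for k in ("w", "varphi", "omega", "A", "B"))

    # Eq. (3): phase dynamics with coupling to the other oscillators
    phase_diff = phi.unsqueeze(0) - phi.unsqueeze(1) - consts["alpha_varphi"] * varphi
    coupling = (a.unsqueeze(0) * consts["alpha_w"] * w * torch.sin(phase_diff)).sum(dim=1)
    phi_dot = consts["alpha_omega"] * omega + coupling

    # Eqs. (4)-(5): second-order amplitude and offset dynamics
    a_ddot = consts["alpha_a"] * (consts["beta_a"] * (consts["alpha_A"] * A - a) - a_dot)
    b_ddot = consts["alpha_b"] * (consts["beta_b"] * (consts["alpha_B"] * B - b) - b_dot)

    new_state = {
        "phi": phi + dt * phi_dot,
        "a": a + dt * a_dot, "a_dot": a_dot + dt * a_ddot,
        "b": b + dt * b_dot, "b_dot": b_dot + dt * b_ddot,
    }
    # Eq. (6): joint goals g_j produced by the CPG layer
    y = new_state["b"] + new_state["a"] * torch.sin(new_state["phi"])
    return new_state, y

# quick usage and gradient check with illustrative values
N = 12
consts = dict(alpha_w=1.0, alpha_varphi=1.0, alpha_omega=1.0,
              alpha_a=10.0, beta_a=10.0, alpha_A=1.0,
              alpha_b=10.0, beta_b=10.0, alpha_B=1.0)
params = {k: torch.rand(N, requires_grad=True) for k in ("omega", "A", "B")}
params["w"] = torch.rand(N, N, requires_grad=True)
params["varphi"] = torch.rand(N, N, requires_grad=True)
state = {k: 0.1 * torch.rand(N) for k in ("phi", "a", "a_dot", "b", "b_dot")}
for _ in range(2):                       # a couple of Euler steps
    state, y = cpg_step(state, params, consts)
y.sum().backward()                       # gradients w.r.t. w, varphi, omega, A, B (cf. Eq. (9))
```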
To reduce notation, we refer to the vector formed by concatenation of goal and robot observations \((g,o)\) as \(o\) unless otherwise stated. It should be noted that \(g_{CPG}\) governs the entire architecture of CPG network formed by \(\pi_{CPG}\) based on the observed robot state. ### _Primitive Controller \(\pi_{pc}\)_ As Fig. 2 shows, the outputs action \(g_{j}\) of \(\pi_{CPG}\) form the desired states of the robot joints. The value of \(g_{j}\) is relayed to the lower level primitive controller \(\pi_{PC}\) that generates the motor commands. Without loss of generality, \(\pi_{PC}\) can be any type of controller such as a Proportional-Integral-Derivative (PID) controller or another neural policy. Policy \(\pi_{PC}\) can be designed as an inverse controller similar to the one described in [60]. To keep our implementation simple, we chose the PD-controller as \(\pi_{PC}\). The proportional \(k_{p}\) and derivative \(k_{d}\) gains of the PD controller were manually tuned and kept equal for all the robot joints in all the experiments. ## IV Scaling DeepCPG Policies To enable scalability of DeepCPG policies, we propose to train these policies using a multi-agent RL setup [71]. In this setup, we define the modular robot control policy as a set of modular agents that _cooperatively_ solve the desired task. A similar strategy for training multi-legged robot locomotion policy has been proposed in [18, 72]. The primary difference between these works and our approach lies in the trained policy where we learn the neural policy to predict CPG model parameters. Therefore, a fully cooperative multi-agent task is formulated such that a team of modular agents forming the control policy interact with the same environment (modular robot) to achieve a common goal. For modular robot configuration, we consider \(n\) agents corresponding to \(n\) modules in the system. Figure 10 visually illustrates a single robot divided in multiple modules. In this Markov game, \(\mathbf{s}\in\mathbf{S}\) describes the true state of the modular robot. Each module \(m\in\{1,\ldots,n\}\) consists of its corresponding action set \(U_{m}\) and (partial) observation set \(O_{m}\). A module \(m\) uses a policy \(\pi_{\theta_{m}}\) (stochastic or deterministic) to select a continuous (or discrete) action \(u_{m}\in U_{m}\) at each step. This produces a next state \(\mathbf{s}^{\prime}\) according to state transition function \(\mathbf{P}(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{u}):\mathbf{S}\times\mathbf{ U}\times\mathbf{S}\mapsto[0,1]\), where joint action \(\mathbf{u}\in\mathbf{U}\equiv\{U_{m}\}\forall m\in\{1,\ldots,n\}\). Modular agents earn a joint reward \(r(\mathbf{s},\mathbf{u})\) for every action \(\mathbf{u}\equiv\{u_{1},\ldots,u_{n}\}\) taken in state \(\mathbf{s}\). Thus, this Markov Decision Process (MDP) can be summarized in a tuple \(\langle n,\mathbf{S},\mathbf{U},\mathbf{P},r,\mathbf{O},\gamma,\mathbf{p}_{ \mathbf{S_{0}}}\rangle\). The set of observations corresponding to each module is contained in \(\mathbf{O}\equiv\{\,O_{m}\}\forall m\in\{1,\ldots,n\}\). Each modular agent in the robot learns policies \(\pi_{\theta_{m}}\) conditioned on its local observation correlated with the true state \(o_{m}=\mathbf{O}(\mathbf{s},m)\) where \(o_{m}\in O_{m}\) and \(\mathbf{s}\in\mathbf{S}\) (we will use \(\pi_{m}\) instead of \(\pi_{\theta_{m}}\) to avoid notation clutter). The distribution of initial states of the system is \(\mathbf{p}_{\mathbf{s_{0}}}:\mathbf{S}\mapsto[0,1]\). 
The overall objective is to maximize the sum of discounted rewards \(R_{t}=\sum_{t^{\prime}=0}^{\infty}\gamma^{t^{\prime}}r_{t+t^{\prime}}\), where \(\gamma\) is the discount factor. Figure 3 provides a schematic overview of this setup for a modular robot with \(n\) modules. Each module \(m\in\{1,\ldots,n\}\) consists of its corresponding action set \(U_{m}\) and (partial) observation set \(O_{m}\). The module actor \(\pi_{\theta_{m}}\) corresponds to the \(m\)-th robot module. The output of \(\pi_{\theta_{m}}\) is sent to \(\pi_{CPG,m}\), which generates action commands \(u_{m}=g_{j,m}\) for Module-\(m\) of the robot. To enable the emergence of coordination in the modular robot, we define the observation vector perceived by each module \(m\) with three distinct components. These three components comprise global contextual information \(o_{g}\), local contextual information \(o_{(m)}\) private to module \(m\), and inter-modular contextual information \(\{o_{(m,m^{\prime})}\}\,\forall m^{\prime}\in\{1,\ldots,n\}\). The global information \(o_{g}\) consists of information about the desired goal of the system. To train this modular system, a centralized training with decentralized execution setup is adopted [73, 74]. The independent training of modular policies may lead to poor performance [75]. The major issue leading to such performance degradation is the non-stationarity of environments, which prevents the use of the trajectory history of the agents during training. Thus, in the regime of centralized training, policy learning can be stabilized by the use of extra global information during training, but during execution, each modular agent uses only its local action-observation history. For centralized training, the joint policy, denoted by \(\Pi\), induces a joint action-value function \(Q_{s_{m}}^{\Pi}(\mathbf{s},\mathbf{u})=\mathbb{E}_{\mathbf{s}^{\prime}}\left[r(\mathbf{s},\mathbf{u})+\gamma\mathbb{E}_{\mathbf{u}^{\prime}\sim\Pi}[Q_{s_{m}}^{\Pi}(\mathbf{s}^{\prime},\mathbf{u}^{\prime})]\right]\) (henceforth, we will replace \(Q_{s_{m}}\) with \(Q_{m}\) to avoid notation clutter). It should be noted that this joint policy is a set \(\Pi\equiv\{\pi_{1},\ldots,\pi_{n}\}\). The primary advantage of this action-value function is that, if we know the actions taken by all modules along with their true states during training, the environment is stationary even as the individual policies \(\pi_{m}\) are being updated [74]. The joint action-value function \(Q_{m}^{\Pi}\) is trained by minimizing a mean-squared Bellman loss function, which estimates how close \(Q_{m}^{\Pi}\) comes to satisfying the Bellman equation. This loss function is given by Eq. (7): \[\mathcal{L}(\kappa_{m})=\mathbb{E}_{\mathcal{R}}\left[\left(Q_{m}^{\Pi}(\mathbf{s},\mathbf{u})-\left(r_{m}(\mathbf{s},\mathbf{u})+\gamma Q_{m}^{\mathrm{targ}}(\mathbf{s}^{\prime},\mathbf{u}^{\prime})\right)\right)^{2}\right] \tag{7}\] where \(r_{m}(\mathbf{s},\mathbf{u})\) is the reward received by module \(m\). In this case, the collaborative task of all the modules corresponds to a unified reward \(r(\mathbf{s},\mathbf{u})\) measuring the performance of the robot as a whole. Thus, it is not necessary to consider a different reward function for each modular agent. The set of target policies \(\Pi^{targ}\) with delayed parameters \(\{\theta_{1}^{targ},\ldots,\theta_{n}^{targ}\}\) produces \(\mathbf{u}^{\prime}\equiv\{u_{1}^{\prime},\ldots,u_{n}^{\prime}\}\).
\(Q_{m}^{\mathrm{targ}}\) corresponds to the target critic with delayed parameters \(\kappa_{m}^{targ}\), and \(\mathcal{R}\) is the replay buffer containing the transition tuples \((\mathbf{s},\mathbf{u},\mathbf{s}^{\prime},r)\). We can evaluate the policy gradient for each module \(m\) according to Eq. (8): \[\nabla_{\theta_{m}}J(\pi_{m})=\mathbb{E}_{\mathcal{R}}\left[\nabla_{\theta_{m}}\pi_{m}(o_{m})\nabla_{u_{m}}Q_{m}^{\Pi}(\mathbf{s},\mathbf{u})\right] \tag{8}\] where \(Q_{m}^{\Pi}(\mathbf{s},\mathbf{u})\) is a centralized action-value function. Action \(u_{m}\) is obtained from its current policy \(\pi_{m}\), and the remaining \(\{u_{m^{\prime}}\}\,\forall m^{\prime}\neq m\) are obtained from the replay buffer \(\mathcal{R}\). Readers should note that we provide the explanation for \(n\) modules in a modular system for the sake of completeness. In the experimental section, we restrict ourselves to \(2\) agents (refer to Fig. 10) to demonstrate the proof-of-concept using the DeepCPG policies in this proposed multi-agent RL framework. Additionally, for locomotion tasks, the velocity direction in the body reference frame of Module-1 was used as \(o_{g}\) in the observation. The local modular observations of the two modules considered in the modular robot in Fig. 10 are given by the sets \(o_{1}=\left\{o_{g},o_{(1)},\{o_{(1,1)},o_{(1,2)}\}\right\}\) and \(o_{2}=\left\{o_{g},o_{(2)},\{o_{(2,1)},o_{(2,2)}\}\right\}\). We keep \(o_{(1)}=o_{(2,1)}\) and \(o_{(1,2)}=o_{(2,2)}\). ## V Training DeepCPG Policies As described in Section II, we use the TD3 algorithm for training the control policy. To train the DeepCPG policies using this approach, the policy gradient calculated in TD3 must backpropagate through the actions \(u=g_{j}\) generated by \(\pi_{CPG}\) followed by \(\pi_{\theta}\). For this to work, \(\pi_{CPG}\) must be differentiable in practice. While the Kuramoto-model-based CPG network is differentiable, the solution of the set of differential equations (3)-(6) is obtained numerically, with the integration implemented in discrete steps of length \(\delta t\). For backpropagation to work, it can be shown that the derivatives of these equations with respect to the parameters predicted by \(\pi_{\theta}\), listed in Eq. (9), exist: \[\frac{\partial y_{i}}{\partial w_{ii^{\prime}}},\quad\frac{\partial y_{i}}{\partial\varphi_{ii^{\prime}}},\quad\frac{\partial y_{i}}{\partial\omega_{i}},\quad\frac{\partial y_{i}}{\partial A_{i}},\quad\frac{\partial y_{i}}{\partial B_{i}} \tag{9}\] The complete derivation for Eq. (9) is provided in the Appendix. Given that the derivatives of the \(\pi_{CPG}\) outputs with respect to the parameters \(g_{CPG}\sim\pi_{\theta}\) exist, it is possible to train a DRL policy in an end-to-end fashion to predict CPG actions given the observations from the environment. As discussed in Section II, we use the TD3 algorithm to train \(\pi_{\theta}\) by propagating the policy gradients through \(\pi_{CPG}\). However, a different RL algorithm for training the policy can also be used. As shown in Fig. 4, the policy \(\pi_{\theta}\) is modeled with multiple heads, each corresponding to the CPG parameters \(g_{CPG}\subset\{w_{ii^{\prime}},\varphi_{ii^{\prime}},\omega_{i},A_{i},B_{i}\}\). It should be noted that it is possible to train \(\pi_{\theta}\) to predict (1) all the CPG parameters \(g_{CPG}\) or (2) only a subset of \(g_{CPG}\) with the remaining parameters preset manually to specific values.
In this actor-critic method, two state-action value functions \(Q_{\kappa_{1}}\) parameterized by \(\kappa_{1}\) and \(Q_{\kappa_{2}}\) parameterized by \(\kappa_{2}\) are learned as critics (Fig. 3: (a) schematic of information sharing across various modules in the modular robot; (b) actor or policy for the \(m\)-th module parameterized by \(\theta_{m}\); (c) centralized critic or action-value function for the \(m\)-th module parameterized by \(\kappa_{m}\)). The job of these critics is to criticize the actions taken by the policy in the given states. The critique, or the values given by the critics, takes the form of a temporal-difference error (see Eq. (2)). The output signals of the critics drive all learning in both the actor (policy) and the critics. For the critic architectures, it is possible to estimate the state-action values using either the actions \(g_{CPG}\) predicted by the higher-level policy \(\pi_{\theta}\) or the actions \(g_{j}\) generated by the CPG network \(\pi_{CPG}\). Based on our experiments, we observed that a better policy is trained when we use \(g_{j}\) for action-value prediction. This may be because the behavior of \(\pi_{CPG}\) is modified for each step of the higher-level policy \(\pi_{\theta}\). This changing behavior of the lower-level policy \(\pi_{CPG}\) creates a _non-stationary_ environment for the higher-level policy \(\pi_{\theta}\), and old off-policy experiences may exhibit different transitions conditioned on the same goals. However, critiquing the actions \(g_{j}\) may alleviate this problem, as the critics are able to observe the complete behavior of the actor that consists of \(\pi_{\theta}\) and \(\pi_{CPG}\). As discussed in Section III-B, the weight matrix \(w\) is constrained to be symmetric with a zero diagonal, and the phase bias matrix \(\varphi\) is constrained to be skew-symmetric for the CPG network \(\pi_{CPG}\). Thus, to maintain these constraints, the actor \(\pi_{\theta}\) is designed to predict only the off-diagonal (upper triangular) elements with an offset of 1, i.e., the diagonal elements are not included in the prediction vector. Thus, for a robot with \(N\) CPG nodes in \(\pi_{CPG}\), the weight and phase bias predictions are \(\frac{N(N-1)}{2}\)-dimensional in size. These prediction vectors are then converted to the corresponding \(w\) and \(\varphi\) matrices of shape \(N\times N\) inside \(\pi_{CPG}\). To keep the outputs of \(\pi_{\theta}\) bounded, we use the \(\tanh\) activation function for all the output heads. \[y=0.5\big(x(y_{max}-y_{min})+(y_{max}+y_{min})\big) \tag{10}\] The affine transformation shown in Eq. (10) is applied to the predictions of \(\pi_{\theta}\) to transform them to within the respective bounds, i.e., \(0\leq w_{ii^{\prime}}\leq 1\), \(-1\leq\varphi_{ii^{\prime}}\leq 1\), \(0\leq\omega_{i}\leq 1\), \(0\leq A_{i}\leq 1\), \(-1\leq B_{i}\leq 1\), before passing them to \(\pi_{CPG}\) (a short illustrative sketch of this post-processing is given below). We provide the complete pseudo-code for training the DeepCPG policy using DRL in the Appendix. The input to the policy \(\pi_{\theta}\) is a sequence of states of length \(\tau_{o}\), i.e., in Fig. 2 and Fig. 4, \(s=s_{t-\tau_{o},t}\). The CPG policy acts on the environment for \(\tau_{c}\) steps for each step of \(\pi_{\theta}\). ### _Scaling DeepCPG policies_ Based on the setup described in Section IV, we apply a multi-agent RL algorithm to learn a scalable control policy for a modular robot. To that end, we use the DDPG algorithm customized for a multi-agent setup [74]. In the multi-agent setup, we chose DDPG over TD3 for training to reduce the compute requirement.
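Returning to the output heads of \(\pi_{\theta}\) described above, the following sketch, assuming PyTorch and hypothetical helper names (affine_rescale, unpack_coupling), illustrates the affine rescaling of Eq. (10) and the reconstruction of the symmetric \(w\) and skew-symmetric \(\varphi\) matrices from their \(\frac{N(N-1)}{2}\) upper-triangular predictions; the bounds match those stated above, while everything else is an illustrative assumption rather than the authors' code.

```python
import torch

def affine_rescale(x, y_min, y_max):
    """Eq. (10): map tanh outputs x in [-1, 1] to the interval [y_min, y_max]."""
    return 0.5 * (x * (y_max - y_min) + (y_max + y_min))

def unpack_coupling(vec_w, vec_phi, N):
    """Build a symmetric w (zero diagonal) and a skew-symmetric varphi from the
    N(N-1)/2 upper-triangular entries predicted by the policy heads."""
    iu = torch.triu_indices(N, N, offset=1)
    w = torch.zeros(N, N)
    w[iu[0], iu[1]] = vec_w
    w = w + w.T                       # symmetric, zero diagonal
    phi = torch.zeros(N, N)
    phi[iu[0], iu[1]] = vec_phi
    phi = phi - phi.T                 # skew-symmetric, zero diagonal
    return w, phi

# example: rescale raw head outputs to the bounds used before pi_CPG
N = 12
raw_w = torch.tanh(torch.randn(N * (N - 1) // 2))
raw_phi = torch.tanh(torch.randn(N * (N - 1) // 2))
w, phi = unpack_coupling(affine_rescale(raw_w, 0.0, 1.0),
                         affine_rescale(raw_phi, -1.0, 1.0), N)
```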
We train the modular robot by bootstrapping the policies learned during a simpler design stage of the system, and use it on the modular system with added design complexity as a result of Module-2 attached to Module-1. Readers should note that the complexity in these modular systems is defined in terms of their degrees of freedom. The system with a higher count of motor joints is considered a comparatively more complex system than a system with a lower number of motor joints. An explicit increase in complexity in terms of motor joints also results in an increase of phenotypic complexity, where the robot body itself changes as a result of additional module. The scaling of policy and the critic in terms of this modular setup is given in Fig. 5. ### Deployment The proposed method uses trained policy \(\pi_{\theta}\) once per \(\tau_{c}\) steps in robot environment. This achieves speedup during inference as fewer forward passes are required for a DeepCPG policy Fig. 4: Schematic of Neural Network architectures: (a) Neural Policy \(\pi_{\theta}\); (b) Critics \(Q_{\kappa_{1}}\) and \(Q_{\kappa_{2}}\). In the article, the observation vector \((g,o)\) is represented as \(o\) to reduce notation unless otherwise stated. Note: The _Heads_ in \(\pi_{\theta}\) are the layers in the neural network. as compared to a feed-forward multi-layered perceptron based policy. Given the limited computational power of the real robot, this skip of \(\tau_{c}\)-steps also facilitates real-time processing. The pseudo-code for the policy deployment is provided in Appendix. The two critics learned during the training process are not needed during the deployment for inferring from the policy, further lowering the computational cost. ## VI Results and Discussion Experiments were carried out using a physics-based simulation environment as well as physical real-world robots. This section presents the experimental setup and results for the evaluation of DeepCPG policies. We evaluate the CPG policies on intrinsically motivated locomotion task (VI-B), Go-to-Goal task (VI-C) and visual feedback task (VI-F). ### _Experimental Setup - Physics Based Engine_ For developing a modular system using legged robots, we chose the Bullet Physics engine [76]. The quadruped robot simulation was developed to perform experiments with the proposed algorithm. Each joint in this robot is a revolute joint. Figure 6 shows an image and the corresponding schematic of the quadruped robot used in the evaluation of the proposed work. This robot is a \(12\)-degree-of-freedom system (\(12\) active joints) as indicated in this figure. The simulated environments used in the experiments were wrapped with the OpenAI gym interface [77]. Different features of the environments developed for testing the proposed approach are also shown in Fig. 6. PyTorch was used as the deep learning framework [78]. The simulations were performed on a machine with an Intel-i7 processor, 16GB RAM, and NVIDIA RTX 2070. The training of each experiment took approximately \(6\) hours for observations without images. For experiments that used visual feedback for the robot, each run took approximately \(23\) hours. To evaluate the proposed approach, we provide the comparison with a baseline of feed-forward policy trained using the TD3-algorithm [67]. The base architectures of both, feed-forward policy and DeepCPG policy, were kept identical (Note: base architecture refers to the network architecture before the output layer). 
For the feed-forward policy, the output layer consisted of a fully connected layer of neurons, as opposed to the one in the DeepCPG policy that consisted of the CPG model. The dimensions of this layer were kept equal to the action-space dimensions of the robot. Further details about the hyperparameters used for training the RL policies are given in Table I. Each experiment was performed five times with different random seeds for the random number generator. The resultant statistics are provided in the plots. Figure 4 provides the architectural details of the neural networks corresponding to \(\pi_{\theta}\). For the feed-forward neural network, layer \(\mathcal{A}_{1}\) consisted of 1024 neurons with ReLU activation and \(\mathcal{A}_{2}\) consisted of 512 neurons with ReLU activation. The CPG parameter heads for \(w,\varphi,\omega,A,B\) are layers with 512 neurons and ReLU activation. The input dimensions of the network depend on the observation space dimensions of the robot, and the output dimensions depend on the architecture of \(\pi_{CPG}\) and the action space of the robot, as discussed in Section V. For the critic architecture, \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) each consisted of 1024 neurons with ReLU activation, \(\mathcal{C}_{3}\) consisted of 512 neurons with ReLU activation, followed by a linear layer \(\mathcal{C}_{4}\) with 512 neurons. The critics output a scalar value. It should be noted that for training the actor with visual feedback (images as observations), the actor and critic architectures were modified to include 2 convolutional layers before \(\mathcal{A}_{1}\) and \(\mathcal{C}_{1}\), respectively. The image size used during training was \(32\times 32\). These images were converted to grayscale before feeding them to the neural networks. The images for \(\tau_{o}\) steps were stacked and passed through the two convolutional layers and then flattened and concatenated with other observations (if any) to pass through the first linear layer of \(1024\) neurons, shown in Fig. 4, for the actor and both the critics. (Fig. 5: Neural network scaling: (a) scaling of \(\pi_{\theta_{1}}\) when Robot Module-2 is attached to Module-1; (b) scaling of the centralized critic of Module-1 when Robot Module-2 is attached to Module-1. The same architecture is also used for the Module-2 policy and centralized critic. Note: \(\mathcal{A}_{l}^{\text{m}}\) is layer \(l\) of the policy corresponding to module \(m\) and \(\mathcal{C}_{l}^{\text{m}}\) is layer \(l\) of the critic corresponding to module \(m\).) The first 2D-convolutional layer consisted of \(\tau_{o}\) input channels and \(10\) output channels. For the second 2D-convolutional layer, \(10\) input channels and \(15\) output channels were used. A kernel size of \(3\) and a stride of \(1\) were used for both layers. ### _Intrinsic Motivation Task_ Good exploration enables the policy to discover various behaviors. In practice, to enable this, the learning agent should leverage information gained from the environment dynamics. To encourage this outcome, we define an intrinsic reward function in Equation (11) that encourages the policy to learn the locomotion behavior. The agent is rewarded for successfully moving the robot in the \(X\)-\(Y\) plane. \[r_{t}=c_{v}||v_{(x,y),t}||_{2}-c_{\vartheta}||\vartheta_{t}||_{2}-c_{z}||z_{t}||_{2}-c_{j}||s_{joint_{t}}||_{2}+c_{b} \tag{11}\] In Eq. (11), \(v_{(x,y),t}\) denotes the current velocity of the robot along the \(x\) and \(y\) directions. \(\vartheta_{t}\) is the angular velocity of the robot.
\(s_{joint_{t}}\) represents the vector of joint angles of the robot at time \(t\). \(z_{t}\) represents the robot height above the ground at time \(t\). The term with \(s_{joint_{t}}\) implicitly regularizes actions by imposing a penalty on the robot joint angles. The last term \(c_{b}\) contributes a bonus value for the robot for staying alive. The coefficients \(c_{(\cdot)}>0\,\forall\{v,\vartheta,z,b,j\}\). It should be noted that Eq. (11) does not contribute to a goal-directed behavior of any kind in the learned policy. In the experiments, \(c_{v}=2.0\), \(c_{b}=4.0\), \(c_{\vartheta}=0.5\), \(c_{z}=5.0\), \(c_{j}=10^{-3}\). Figure 7-(a) provides the plot of episode returns over the training of the robot. From this plot, it can be observed that the DeepCPG policy performance is on par with the feed-forward policy trained using TD3. The advantage of the CPG priors in the locomotion is evident from the results in Section VI-C, where these policies are fine-tuned for the goal-directed task. ### _Go-to-Goal Task_ The policies that were trained in Section VI-B were fine-tuned to learn a goal-reaching behavior with a modification to the reward in Eq. (11). The weights that were learned for the policies for the task in Section VI-B were transferred for this downstream task. Equation (12) provides the updated reward function used for learning a go-to-goal behavior. \[r_{t}=c_{e}(v_{(x,y),t}\cdot\hat{e}_{g})+c_{v}||v_{(x,y),t}||_{2}-c_{\vartheta}||\vartheta_{t}||_{2}-c_{z}||z_{t}||_{2}-c_{j}||s_{joint_{t}}||_{2}+c_{b} \tag{12}\] In Equation (12), \(\hat{e}_{g}\) is the unit vector pointing towards the goal \((x_{g},y_{g})\) from the robot position \((x_{t},y_{t})\), and \(c_{(\cdot)}>0\,\forall\{e,v,\vartheta,z,j,b\}\). For the experiments, \(c_{e}=5.0\). The values of the remaining coefficients are provided in Section VI-B. The advantage of the DeepCPG policy is visible from the training plots in Fig. 7-(b) in terms of sample efficiency. It was able to learn the go-to-goal behavior faster as compared to the baseline policy. This could be attributed to the behavioral priors in the DeepCPG policy as a result of \(\pi_{CPG}\). ### _Ablation Study: Fault Tolerance_ To investigate the fault-tolerant behavior of the proposed policy, we introduced a fault in the robot. This environment change is shown in Fig. 6-(c). One of the legs in the robot was broken. The reward function was updated from Eq. (11) to Eq. (13). In Eq. (13), the robot was evaluated based on its performance in walking along the x-axis, so its velocity along the x-axis \(v_{(x),t}\) was rewarded. All the coefficient values were kept as defined in Section VI-B. \[r_{t}=c_{v}||v_{(x),t}||_{2}-c_{\vartheta}||\vartheta_{t}||_{2}-c_{z}||z_{t}||_{2}-c_{j}||s_{joint_{t}}||_{2}+c_{b} \tag{13}\] Figure 7-(c) shows the episode return plots for the policies from Section VI-B fine-tuned for the system with the broken leg. The DeepCPG policy shows comparatively better performance than the feed-forward policy. As compared to the system without any faults, the performance of this system saturated to comparatively lower episode returns, as observed from the plots. (Fig. 6: Quadruped robot: (a) Bullet Physics GUI; (b) schematic; (c) fault in front-left limb; (d) uneven terrain and with simulated visual feedback.) Nonetheless, the DeepCPG policy converged to a comparatively higher performance than the feed-forward policy.
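For reference, the three reward variants of Eqs. (11)-(13) can be written compactly as below. This is an illustrative NumPy sketch using the coefficient values reported above; the function names and argument conventions (e.g., passing the planar velocity, angular velocity, and joint angles as arrays and the height as a scalar) are assumptions rather than the authors' code.

```python
import numpy as np

# Coefficients as reported in Section VI (c_e is only used for the go-to-goal task).
C = dict(c_v=2.0, c_b=4.0, c_theta=0.5, c_z=5.0, c_j=1e-3, c_e=5.0)

def intrinsic_reward(v_xy, ang_vel, z, joints):
    """Eq. (11): reward planar motion; penalize spinning, the height term, and joint excursions."""
    return (C["c_v"] * np.linalg.norm(v_xy) - C["c_theta"] * np.linalg.norm(ang_vel)
            - C["c_z"] * abs(z) - C["c_j"] * np.linalg.norm(joints) + C["c_b"])

def go_to_goal_reward(v_xy, ang_vel, z, joints, robot_xy, goal_xy):
    """Eq. (12): Eq. (11) plus a term aligning the planar velocity with the goal direction."""
    e_g = np.asarray(goal_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
    e_g = e_g / (np.linalg.norm(e_g) + 1e-8)       # unit vector towards the goal
    return C["c_e"] * float(np.dot(v_xy, e_g)) + intrinsic_reward(v_xy, ang_vel, z, joints)

def fault_recovery_reward(v_x, ang_vel, z, joints):
    """Eq. (13): reward motion along the x-axis only."""
    return (C["c_v"] * abs(v_x) - C["c_theta"] * np.linalg.norm(ang_vel)
            - C["c_z"] * abs(z) - C["c_j"] * np.linalg.norm(joints) + C["c_b"])
```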
### _Ablation Study: CPG Parameters_

Figure 8 presents the ablation studies of the various parameters used in the trained DeepCPG policy. The policy trained for the experiments described in Section VI-B is used for this study. The plots in Fig. 8 were generated by running each experiment for 30 episodes with random initialization of the robot. Only the parameter under consideration was varied in each case; the values of the remaining parameters were kept equal to those used during training, as given in Table I.

Fig. 8: Plot of episode returns with respect to the respective parameter values used in the ablation study: (a) CPG policy steps \(\tau_{c}\); (b) external modulation of CPGs using \(\alpha_{w}\); (c) external modulation of CPGs using \(\alpha_{\varphi}\); (d) external modulation of CPGs using \(\alpha_{A}\); (e) external modulation of CPGs using \(\alpha_{\omega}\).

In Fig. 8-(a), we varied the parameter \(\tau_{c}\) during the deployment of the policy. A higher value of this parameter signifies a lower execution frequency of \(\pi_{\theta}\) in the DeepCPG policy. It can be observed from the plots that the trained policy was robust to variation of \(\tau_{c}\) during deployment for a wide range of values. Although \(\tau_{c}=5\) was used during training, similar performance was observed for \(\tau_{c}=10\). The performance of the robot degraded when the value of \(\tau_{c}\) was increased further: as the robot's response to the sensor feedback was delayed due to the reduced execution frequency of \(\pi_{\theta}\) when \(\tau_{c}\) increases, we observed a corresponding performance degradation.

Figure 8-(b) presents the episode returns from the trained policy with variation of \(\alpha_{w}\) in Eq. (3). A higher value of this parameter enables faster synchronization of all the oscillators in the Kuramoto CPG nodes contained in \(\pi_{CPG}\). The plot shows a corresponding response of the trained DeepCPG policy with the variation of this value. The policy was trained using \(\alpha_{w}=600\). As the CPG values are not able to synchronize when \(\alpha_{w}<600\), the policy performance was observed to degrade. For \(\alpha_{w}>600\), the trained DeepCPG policy showed robustness and its performance did not suffer.

Figure 8-(c) shows the results for variation of \(\alpha_{\varphi}\) in Eq. (3). The trained policy performance was observed to be sensitive to variation of \(\alpha_{\varphi}\). When \(\alpha_{\varphi}=0\), the phase bias of each CPG node of \(\pi_{CPG}\) becomes zero, and the robot was not able to perform well.

Figure 8-(d) shows the results for variation of \(\alpha_{A}\) in Eq. (4). It should be noted that we maintained a constraint of \(\alpha_{A}+\alpha_{B}=1\) to ensure \(\max(\alpha_{A}A_{i})+\max(\alpha_{B}B_{i})\leq\max(s_{joint_{i}})\), where \(\max(s_{joint_{i}})\) corresponds to the limits of \(joint_{i}\) on the robot, and \(A_{i}\) and \(B_{i}\), respectively, correspond to the desired amplitude and desired offset of the CPG node corresponding to that joint. This constrains the joint movement resulting from a CPG node to be within the joint's physical limits. From the plot shown in Fig. 8-(d), the robot achieved less reward for lower values of \(\alpha_{A}\); this was the result of the smaller amplitudes of the robot's gait. On the other hand, an interesting observation is for \(\alpha_{A}=1\), where the robot performance degrades. We observed that the offset is an important component in the DeepCPG policy: when \(\alpha_{A}=1\), \(\alpha_{B}=0\), resulting in the performance degradation.
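The joint-limit argument behind the \(\alpha_{A}+\alpha_{B}=1\) constraint can be checked with a few lines of code. The sketch below ignores the second-order dynamics that smooth \(a_{i}\) and \(b_{i}\) toward \(\alpha_{A}A_{i}\) and \(\alpha_{B}B_{i}\), and it assumes the desired amplitude and offset are themselves bounded by the joint limit; the function name and signature are illustrative only.

```python
import numpy as np

def bounded_cpg_output(phase, A_des, B_des, alpha_A, alpha_B, joint_limit):
    """If alpha_A + alpha_B = 1 (both non-negative) and |A_des|, |B_des| <= joint_limit,
    the CPG output y = alpha_B*B_des + alpha_A*A_des*sin(phase) cannot exceed the joint limit."""
    assert alpha_A >= 0 and alpha_B >= 0 and abs(alpha_A + alpha_B - 1.0) < 1e-9
    A = np.clip(A_des, -joint_limit, joint_limit)   # assumption: desired amplitude bounded
    B = np.clip(B_des, -joint_limit, joint_limit)   # assumption: desired offset bounded
    y = alpha_B * B + alpha_A * A * np.sin(phase)
    assert np.all(np.abs(y) <= joint_limit + 1e-9)  # convex combination stays within limits
    return y
```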
Figure 8-(e) shows the results for variation of \(\alpha_{\omega}\) in Eq. (3). This parameter influences the oscillation frequency of the CPG nodes in DeepCPG policies. The policy showed robust behavior to some variation of \(\alpha_{\omega}\). The higher the value of \(\alpha_{\omega}\), the higher the oscillation frequency of the CPG nodes; this increase in frequency is reflected in higher robot speeds and a resultant increase in episode return. At high values, \(\alpha_{\omega}=100\) and \(\alpha_{\omega}=120\), the policy performance degraded. This may be associated with non-stationarity in the observation space at high \(\alpha_{\omega}\): although the trained policy was robust to some variations of this parameter, the degrading performance could be the result of distribution drift in the observation space when \(\alpha_{\omega}\) increases.

Overall, it can be observed from the parameter study conducted in this section that the DeepCPG policy is robust to a wide range of parameter changes during deployment, even when these parameter values are not shown to the policy during the training phase.

### _Visual Feedback Task_

We also evaluated the DeepCPG policy in the environment designed with high-dimensional image observations. Two cases were tested for the DeepCPG and feed-forward policies each: (1) the robot was provided with visual feedback only; (2) the robot was provided with visual feedback along with input from the proprioceptive touch sensors on its feet. A sample image of the environment is shown in Fig. 6. The terrain was randomized in every episode. This randomization was introduced to enable the quadruped to generalize to walking on rough terrain. The terrain height was randomly sampled from the range \([0,0.1\times h_{r}]\), where \(h_{r}\) represents the robot height. The robot was trained with the reward function defined in Eq. (13) to walk along the x-axis. Figure 9-(a) shows the results of the experiments with visual feedback to the robot. From these results, it can be observed that the learned DeepCPG policy was able to achieve comparatively higher reward than the feed-forward policy. In general, the feed-forward actor showed poorer performance than DeepCPG for any environment perturbation, and it consistently demonstrated lower asymptotic performance, especially in the case of visual feedback with no proprioceptive feet touch sensors (see Fig. 9-(a)).
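The per-episode terrain randomization described above can be expressed as a one-line sampling rule. The sketch below assumes a uniform distribution and a height-field grid; the text only specifies the sampling range \([0,0.1\times h_{r}]\), so the distribution, grid shape, and function name are assumptions.

```python
import numpy as np

def sample_terrain_heights(rng: np.random.Generator, robot_height: float, grid=(64, 64)):
    """Draw per-cell terrain heights in [0, 0.1 * h_r] for one training episode (sketch)."""
    return rng.uniform(0.0, 0.1 * robot_height, size=grid)

# Example: a fresh height field at every episode reset.
heights = sample_terrain_heights(np.random.default_rng(0), robot_height=0.2)
```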
### _Energy Analysis_

This section presents the results of the energy analysis of the walking robot described in the previous sections. We compare the results generated from the feed-forward MLP policy with the DeepCPG policy. Both policies were trained with the reward function of Eq. (11) in Section VI-B. Figure 9-(b) provides the joint trajectories for the robot trained using these policies, recorded over an arbitrary span of 3 seconds from the simulation. It should be noted that these trajectories are plotted using the joint angles of the robot simulation and do not correspond to the motor commands sent from the neural network. From this figure, it can be observed that the DeepCPG policy enables the generation of smooth joint trajectories. This is attributed to the CPG dynamical system embedded as an output neural layer: this layer predicts in the trajectory space of the Kuramoto model of CPGs, and thus the joint trajectories generated from these commands appear smooth. For the feed-forward MLP, on the other hand, the neural network predicts the values of the motor commands directly at every iteration, and as a result the joint trajectories appear non-smooth.

The average energy consumed by all the motor joints on the robot in the simulation is shown in Table II, where column \(t\) denotes the time taken by the policies for each iteration (in milliseconds) and column \(T\) denotes the time taken for task completion (in seconds). We estimate these values using the equation \(Work=\sum_{joint}\sum_{t}Torque_{joint_{t}}\cdot\partial s_{joint_{t}}\), where \(Torque_{joint_{t}}\) is the torque applied on a motor joint at time \(t\) and \(\partial s_{joint_{t}}\) is the change of joint state from time \(t\) to \(t+1\). This statistic was evaluated based on data gathered over complete episodes of \(2,500\) steps for \(5\) episodes. Given the non-smooth nature of the joint trajectories generated by the feed-forward MLP policy, the energy expended in driving the robot motors is almost twice the energy consumed by the DeepCPG policy. Additionally, it can be observed that the trajectories produced by DeepCPG are smoother (see Fig. 9-(b)). In the case of real robots, this could also contribute to robot safety, since DeepCPG policies would not perform any unbounded actions that could damage the robot actuators.

Fig. 9: (a) Training plots for the tasks with visual feedback: (vision) Robot has access to only visual feedback, (vision+feet) Robot gets feedback from vision and feet touch sensors, (TD3) Feed-Forward policy trained using TD3, (DeepCPG) DeepCPG policy; (b) Trajectory of the Hip Joints of the quadruped robot: (Top) Trajectories observed for the Feed-forward MLP policy, (Bottom) Trajectories observed for the DeepCPG policy. Hip-0 is Front Right, Hip-1 is Front Left, Hip-2 is Back Right and Hip-3 is Back Left.
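The per-episode work estimate used for Table II can be written as a short routine. The following is a minimal sketch assuming torques and joint angles are logged as arrays of shape (timesteps, joints); the function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def episode_work(torques: np.ndarray, joint_angles: np.ndarray) -> float:
    """Estimate Work = sum_joint sum_t Torque_joint_t * (s_joint_{t+1} - s_joint_t).

    torques:      (T, n_joints) torque applied to each motor joint at each step
    joint_angles: (T, n_joints) joint states logged from the simulation
    """
    delta_s = np.diff(joint_angles, axis=0)   # change of joint state from t to t+1
    per_step = torques[:-1] * delta_s         # element-wise torque x joint displacement
    # Summing absolute per-step terms counts energy expended regardless of sign;
    # drop np.abs(...) to reproduce the signed sum literally.
    return float(np.sum(np.abs(per_step)))
```

Averaging this quantity over the five logged episodes of 2,500 steps would produce the kind of entries compared in Table II.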
### _Scalability and Simulation to Real-World Transfer of DeepCPG Policies_

In this section, the results of the scalability study and the simulation-to-real-world (sim2real) transfer of the DeepCPG policies are presented. Figure 10 shows snapshots of the simulation model and the corresponding experimental setup used for the sim2real experiments. Images of the CAD model of the robot used in the simulation-to-real-world transfer experiments with the trained DeepCPG policies are shown in Fig. 10-(b). These robots are custom built, and the experiments were performed under a motion capture system. Each of these quadrupeds had two motor joints per leg; thus, a single quadruped module has a total of \(8\) degrees of freedom. A Raspberry Pi 4 Model B is used as the on-board computing platform for these robots [80]. The robot joints are connected to a PWM breakout board that handles sending commands to multiple servo motors simultaneously. This is all housed within a shell that gives the robot a turtle-like appearance. These robots can connect to and disconnect from each other using an electromagnet, and a hall effect sensor is used to detect the connection of one module to another. The connectors are designed with a conical shape to allow for tolerance while connecting (highlighted in yellow in Fig. 10-(b)-(left)). The corresponding 3D-printed quadruped robot modules built from this CAD model are shown in Fig. 10-(c) and Fig. 10-(d).

Fig. 10: (a) Physics engine based model: (left) a single robot, (right) a robot with two modules; (b) CAD model: (left) Robot used in Sim2Real experiment, (middle) Single leg along with servos, (right) Leg attached to robot body; (c) Individual modules of real robot; (d) Connected modules of real robot.

For the DeepCPG policies trained for the sim2real experiments, we did not use touch sensors on the robot feet. The feedback available to the robot consisted of the robot joint angles, joint angle velocities, linear velocity, and angular velocity. The inter-modular information shared across each module consisted of the values of the joint angles measured for each module; as there are eight joints on each module, this information consists of eight joint angle values. The global contextual information shared with Module-\(2\) from Module-\(1\) consisted of the robot velocity direction in the body frame of reference.

The training plots for the modular system in Fig. 10-(a)-(right), with \(16\) degrees of freedom, are provided in Fig. 11-(a). The policies were trained using the reward function in Eq. (13). This plot compares three training routines. It should be noted that, for a fair comparison, the architectures of the control policies were kept identical in each training routine, and only the parameter initialization strategies for the neural networks \(\pi_{\theta_{m}}\ \forall m\in\{1,2\}\) were varied:

**Routine-1**: Training the system with Module-1 and Module-2 from scratch, where the modular policies were initialized randomly and trained together as a single giant policy. This is referred to as "Not-Mod" in Fig. 11-(a).

**Routine-2**: First, a policy for Module-1 was trained using the reward function in Eq. (13) and then transferred to the corresponding module in the new system formed by connecting Module-1 and Module-2. The Module-2 policy weights were initialized randomly. The multi-agent RL setup described in Section V was used to train these control policies. This is referred to as "Mod-Rand" in Fig. 11-(a).

**Routine-3**: First, a policy for Module-1 was trained using the reward function in Eq. (13) and then it was used to initialize the weights of each module in the new system formed by connecting Module-1 and Module-2. The multi-agent RL setup described in Section V was used to train these control policies. This is referred to as "Mod" in Fig. 11-(a).

Based on the comparison of all these training routines, it was observed that **Routine-3** is the most sample efficient. In this routine, the modular policies were able to take advantage of two priors when the transfer from the individual-module stage to the connected-module stage occurred: the first prior is a result of the movement primitives embedded in DeepCPG policies, and the second prior is due to the trained quadruped policy used for parameter initialization in the design complexification stage. It can be observed clearly that modular design complexification with transfer of weights and additive learning enabled effective and efficient scaling of the DeepCPG policies to robots of increasing complexity (a minimal sketch of the Routine-3 weight transfer is shown below).
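Routine-3's initialization step amounts to copying the trained Module-1 actor parameters into each module's policy before the decentralized multi-agent fine-tuning begins. The snippet below is a hedged sketch of that idea in PyTorch; the function name, checkpoint argument, and the use of a generic `torch.nn.Module` template are assumptions for illustration, not the authors' code.

```python
import copy
import torch

def init_modular_policies(template_actor: torch.nn.Module, module1_ckpt: str, n_modules: int = 2):
    """Routine-3-style initialization: every module's actor starts from the weights
    of the policy trained for a single Module-1 (illustrative sketch)."""
    state_dict = torch.load(module1_ckpt, map_location="cpu")
    template_actor.load_state_dict(state_dict)
    # Deep copies share identical starting weights but are then updated independently
    # by the decentralized multi-agent training described in Section V.
    return [copy.deepcopy(template_actor) for _ in range(n_modules)]
```

For Routine-2, only the Module-1 policy would receive the pretrained weights, with the remaining modules left randomly initialized.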
Given the Kuramoto model of CPG embedded in the network, the DeepCPG policies were able to transfer successfully to the real robot without requiring any further fine-tuning of the neural network weights. We performed two experiments with the provided setup:

**Experiment 1** - A DeepCPG policy was trained for a single quadruped robot for waypoint navigation using the reward function in Eq. (12). The policy was trained in simulation and transferred to the real robot. The trajectory followed by the real robot Module-1 is shown in Fig. 11-(b).

**Experiment 2** - In this experiment, the robot Module-1 and Module-2 were connected together, and the DeepCPG policy was trained to let the connected modules walk in a straight line. The connection between Module-2 and Module-1 is detected by a hall-effect sensor; when the connection is detected, the modular policy trained using **Routine-3** described above is activated.

Fig. 11: (a) Training plots for a modular walking robot when extended from having 4 legs to 8 legs: [Not-Mod] Policy was trained for the 8-legged robot from random initialization; [Mod-Rand] Policy was initialized randomly for the added legs when the 4-legged robot was extended to an 8-legged robot; [Mod] Policy was initialized with the weights from the quadruped module when the robot was extended from 4-legged to 8-legged; (b) Trajectory followed by Robot Module-1 for the waypoint navigation task; (c) Trajectories followed by Module-1 and Module-2 while walking in a straight line after connecting with each other.

Based on these experiments, it was observed that the learned control policies, both for the individual robot and for the modular robots, were able to work successfully in the real world. Additionally, the modular policies enabled smooth switching of behavior from a single module to a system with two modules and effectively synchronized the leg movements after the connection, as described in **Experiment 2**. The DeepCPG policy also ensured smooth joint trajectories on the robots, which implicitly constrains the robots to follow smooth and safe trajectories with their actuators.

## VII Conclusions and Future Work

We presented a developmental RL-based approach for learning robot locomotion that allows the scalable construction of more complex robots by a process of modular complexification and transfer of prior learning. This approach was tested in various simulated environments as well as on a real-world experimental platform. The results show the advantages offered by the behavioral priors introduced in the actor networks as central pattern generator models. DeepCPG policies were able to show sample-efficient learning of various behavioral policies with different sensor modalities. As demonstrated by the results presented for various sensor modalities, DeepCPG policies enable end-to-end learning even in the case of high-dimensional sensor spaces such as visual feedback.

The hierarchical DeepCPG policy also incorporates ideas from the _dual process theory_ of human intelligence [81, 82]. In the DeepCPG policy, the lower-level CPG policy \(\pi_{CPG}\) can be thought of as \(System\ 1\), which is a fast, unconscious, and automatic mode of behavior, while the higher-level policy \(\pi_{\theta}\) can be considered as \(System\ 2\), which is a slow, conscious, and rule-based mode of reasoning developed as a result of knowledge gained by the robot from its surrounding environment. In other words, the DeepCPG actor employs the idea of _thinking fast and slow_ while learning a behavioral policy.

Work closely related to the proposed approach includes that of Schilling et al. [18], who investigated the learning of various decentralized control policies for a fixed robot morphology. Huang et al. [83] also presented a very interesting message-passing approach to train a generalized decentralized control policy for a wide variety of agent morphologies; this work showed that an identical modular network could be shared across various actuators in the robot morphology to generate locomotion behaviors. The results using DeepCPG align with the conclusions drawn from the studies in [18, 83] that decentralized training and biological inspiration are quite helpful for learning. To summarize, decentralized training of DeepCPG policies corroborates the observation from these works that modular policies can be trained faster.
In addition to that, our work also showed that the interaction of sensory modalities and movement primitives enables faster learning of robust locomotion for a high-degree-of-freedom system. From the perspective of developmental robotics, the approach proposed to train the behavioral policies for modular systems also shows the efficacy of bootstrapping more complex intelligent systems from simpler ones based on biological principles. The gradual complexification with modular policies, along with phenotypic development of the robot, enables non-trivial sensorimotor integration on a real robotic platform. We used the Kuramoto model as the basis of the CPGs; one can replace this model with other models without loss of generality. The addition of different nonlinear terms to the CPG models could also enable the emergence of diverse behaviors, such as jumping, in appropriate environmental scenarios. The proposed DeepCPG-based policies could also be used with complex robot architectures such as humanoid robots [84] and soft robots [85] for learning different behaviors. We believe these could be interesting directions to explore as part of future work.

## References

* [1] D. M. Wilson, "The central nervous control of flight in a locust," _Journal of Experimental Biology_, vol. 38, no. 2, pp. 471-490, 1961. * [2] P. S. Stein, D. G. Stuart, S. Grillner, and A. I. Selverston, _Neurons, networks, and motor behavior_. MIT press, 1999. * [3] H. Hultborn and J. B. Nielsen, "Spinal control of locomotion-from cat to man," _Acta Physiologica_, vol. 189, no. 2, pp. 111-121, 2007. * [4] I. R. Popescu and W. N. Frost, "Highly dissimilar behaviors mediated by a multifunctional network in the marine mollusk Tritonia diomedea," _Journal of Neuroscience_, vol. 22, no. 5, pp. 1985-1993, 2002. * [5] E. Marder and D. Bucher, "Central pattern generators and the control of rhythmic movements," _Current biology_, vol. 11, no. 23, pp. R986-R996, 2001. * [6] V. Edgerton, S. Grillner, A. Sjostrom, and P. Zangger, "Central generation of locomotion in vertebrates," in _Neural control of locomotion_. Springer, 1976, pp. 439-464. * [7] D. E. Koditschek, R. J. Full, and M. Buehler, "Mechanical aspects of legged locomotion control," _Arthropod structure & development_, vol. 33, no. 3, pp. 251-272, 2004. * [8] C. F. Herreid, _Locomotion and energetics in arthropods_. Springer Science & Business Media, 2012. * [9] M. H. Dickinson, C. T. Farley, R. J. Full, M. A. R. Koehl, R. Kram, and S. Lehman, "How animals move: An integrative view," _Science_, vol. 288, no. 5463, pp. 100-106, 2000. [Online]. Available: [https://www.science.org/doi/abs/10.1126/science.288.5463.100](https://www.science.org/doi/abs/10.1126/science.288.5463.100) * [10] K. Pearson, "Central programming and reflex control of walking in the cockroach," _Journal of experimental biology_, vol. 56, no. 1, pp. 175-193, 1972. * [11] S. S. Bidaye, T. Bockenmild, and A. Buschges, "Six-legged walking in insects: How cpg, peripheral feedback, and descending signals generate coordinated and adaptive motor rhythms," _Journal of Neurophysiology_, vol. 119, no. 2, pp. 459-475, 2018. * [12] B. D. DeAngelis, J. A. Zavatone-Veth, and D. A. Clark, "The manifold structure of limb coordination in walking drosophila," _Elife_, vol. 8, p. e46409, 2019. * [13] T. G. Brown, "The intrinsic factors in the act of progression in the mammal," _Proceedings of the Royal Society of London. Series B, containing papers of a biological character_, vol. 84, no. 572, pp. 308-319, 1911.
Brown, "The factors in rhythmic activity of the nervous system," _Proceedings of the Royal Society of London. Series B, Containing Papers of a Biological Character_, vol. 85, no. 579, pp. 278-289, 1912. * [15] A. J. Ijspeert, A. Crespi, D. Ryczko, and J.-M. Cabelguen, "From swimming to walking with a salamander robot driven by a spinal cord model," _science_, vol. 315, no. 58LT, pp. 1416-1420, 2007. * [16] P. Liljebick, K. Y. Pettersen, O. Stavdahl, and J. T. Gravdahl, _Snake robots: modelling, mechanorucs, and control_. Springer Science & Business Media, 2012. * [17] R. Thandackal, K. Melo, L. Paez, J. Herault, T. Kano, K. Akiyama, F. Boyer, D. Ryczko, A. Ishiguro, and A. J. Ijspeert, "Emergence of robust self-organized undulatory swimming based on local hydrodynamic force sensing," _Science Robotics_, vol. 6, no. 57, 2021. [Online]. Available: [https://robotics.sciencemag.org/content/6/57/eabf6354](https://robotics.sciencemag.org/content/6/57/eabf6354) * [18] M. Schilling, A. Melnik, F. W. Ohl, H. J. Ritter, and B. Hammer, "Decentralized control and local information for robust and adaptive decentralized deep reinforcement learning," _Neural Networks_, vol. 144, pp. 699-725, 2021. * [19] P. Eckert, A. Sprowitz, H. Witte, and A. J. Ijspeert, "Comparing the effect of different spine and leg designs for a small bounding quadruped robot," in _2015 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2015, pp. 3128-3133. * [20] F. Herrero-Carron, F. Rodriguez, and P. Varona, "Bio-inspired design strategies for central pattern generator control in modular robotics," _Bioinspiration & Biomimetics_, vol. 6, no. 1, p. 016006, 2011. * [21] A. Schneider, J. Paskarbeit, M. Schafertsmann, and J. Schmitz, "Hector, a new hexaped robot platform with increased mobility-control approach," Fig. 11: (a)Training plots for a modular walking robot when extended from having 4 legs to 8 legs: [Not-Mod] Policy was trained for 8-legged robot from random initialization; [Mod-Rand] Policy was initialized randomly for the added legs when the 4-legged robot was extended to an 8-legged robot; [Mod] Policy was initialized with the weights from the quadruped module when the robot was extended from 4-legged to 8-legged, (b) Trajectory followed by Robot Module-1 for waypoint navigation task, (c) Trajectories followed by Module-1 and Module-2 while walking in the straight line after connecting with each other. design and communication," in _Advances in Autonomous Mini Robots_. Springer, 2012, pp. 249-264. * [22] P. Billeschou, N. N. Bijima, L. B. Larsen, S. N. Gorb, J. C. Larsen, and P. Manoonpong, "Framework for developing bio-inspired morphologies for walking robots," _Applied Sciences_, vol. 10, no. 19, p. 6986, 2020. * [23] R. Siddall, G. Byrnes, R. J. Full, and A. Jusufi, "Tails stabilize landing of gliding geckos crawling head-first into two trunks," _Communications Biology_, vol. 4, no. 1, p. 1020, Sep 2021. [Online]. Available: [https://doi.org/10.1038/s42003-021-02378-6](https://doi.org/10.1038/s42003-021-02378-6) * [24] A. Ijspeert, J. Nakanishi, and S. Schaal, "Learning attractor landscapes for learning motor primitives," in _Advances in Neural Information Processing Systems_, S. Becker, S. Thrun, and K. Obermayer, Eds., vol. 15. MIT Press, 2003. * [25] J. Nordmen, T. F. Nygaard, K. O. Ellefsen, and K. Glette, "Evolved embodied phase coordination enables robust quadruped robot locomotion," in _Proceedings of the Genetic and Evolutionary Computation Conference_, 2019, pp. 133-141. * [26] E. D. 
Tytell and J. H. Long, "Biorobotic insights into neuromechanical coordination of undulatory swimming," _Science Robotics_, vol. 6, no. 57, 2021. * [27] A. Pikovsky, M. Rosenblum, and J. Kurths, _Synchronization: A Universal Concept in Nonlinear Sciences_, ser. Cambridge Nonlinear Science Series. Cambridge University Press, 2001. * [28] G. L. Liu, M. K. Habib, K. Watanabe, and K. Izumi, "Central pattern generators based on matuso oscillators for the locomotion of biped robots," _Artificial Life and Robotics_, vol. 12, no. 1-2, pp. 264-269, 2008. * [29] B. Lim, J. Lee, J. Kim, M. Lee, H. Kwak, S. Kwon, H. Lee, W. Kwon, and K. Roth, "Optimal gait primitives for dynamic bipedal locomotion," in _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2012, pp. 4013-4018. * [30] D. Clever, M. Harant, K. Mombaur, M. Naveau, O. Stasse, and D. Endres, "COCoMoPL: A novel approach for humanoid walking generation combining optimal control, movement primitives and learning and its transfer to the real robot hrp-2," _IEEE Robotics and Automation Letters_, vol. 2, no. 2, pp. 977-984, 2017. * [31] C. Mastalli, I. Havovits, A. W. Winkler, D. G. Caldwell, and C. Semini, "On-line and on-board planning and perception for quadrupedal locomotion," in _2015 IEEE International Conference on Technologies for Practical Robot Applications (TPRA)_. IEEE, 2015, pp. 1-7. * [32] M. Duarte, J. Gomes, S. M. Oliveira, and A. L. Christensen, "Evolution of repertoire-based control for robots with complex locomotor systems," _IEEE Transactions on Evolutionary Computation_, vol. 22, no. 2, pp. 314-328, 2017. * [33] C. Yang, C. Chen, N. Wang, Z. Ju, J. Fu, and M. Wang, "Biologically inspired motion modeling and neural control for robot learning from demonstrations," _IEEE Transactions on Cognitive and Developmental Systems_, vol. 11, no. 2, pp. 281-291, 2019. * [34] G. Bellegrada and A. Ijspeert, "CPG-RL: Learning central pattern generators for quadruped locomotion," _IEEE Robotics and Automation Letters_, vol. 7, no. 4, pp. 12 547-12 554, 2022. * [35] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," _The Journal of physiology_, vol. 117, no. 4, pp. 500-544, 1952. * [36] L. Abbott and T. B. Kepler, "Model neurons: from Hodgkin-Huxley to Hopfield," in _Statistical mechanics of neural networks_. Springer, 1990, pp. 5-18. * [37] P. Wen, X. Linsen, F. Baolin, and W. Zhong, "CPG control model of snake-like robot parameters of optimization based on GA," in _2015 IEEE International Conference on Robotics and Biomimetics (ROBIO)_. IEEE, 2015, pp. 1944-1949. * [38] D. Blanchard, K. Aihara, and T. Levi, "Snake robot controlled by biomimetic cpcps," _Journal of Robotics, Networking and Artificial Life_, vol. 5, no. 4, pp. 253-256, 2019. * [39] A. Hunt, M. Schmidt, M. Fischer, and R. D. Quinn, "Neuromechanical simulation of an inter-leg controller for tetrapod coordination," in _Conference on Biomimetic and Biobrybrid Systems_. Springer, 2014, pp. 142-153. * [40] K. Matsuoka, "Analysis of a neural oscillator," _Biological Cybernetics_, vol. 104, no. 4-5, pp. 297-304, 2011. * [41] X. Liu, R. Gascon, Z. Jiang, C. Onal, and J. Fu, "Learning to locomote with artificial neural-network and CPG-based control in a soft snake robot," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 7758-7765. * [42] M. 
Cartwright, "Baltharararar der pol," _Journal of the London Mathematical Society_, vol. 1, no. 3, pp. 367-376, 1960. * [43] A. H. Nayfeh and B. Balachandran, _Applied nonlinear dynamics: analytical, computational, and experimental methods_. John Wiley & Sons, 2008. * [44] P. Veskos and Y. Demiris, "Developmental acquisition of entrainment skills in robot swinging using van der Pol oscillators," in _Fifth International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems_. Lund University Cognitive Studies, 2005, pp. 87-93. * [45] G. Wang, X. Chen, and S.-K. Han, "Central pattern generator and feedforward neural network-based self-adaptive gait control for a chre-like robot locomoting on complex terrain under two reflex mechanisms," _International Journal of Advanced Robotic Systems_, vol. 14, no. 4, p. 1729881417723440, 2017. * [46] Y. Zhu, Y. Wu, Q. Liu, T. Guo, R. Qin, and J. Hui, "A backward control based on \(\sigma\)-Hopf oscillator with decoupled parameters for smooth locomotion of bio-inspired legged robot," _Robotics and Autonomous Systems_, vol. 106, pp. 165-178, 2018. * [47] Y. Kuramoto, _Chemical oscillations, waves, and turbulence_. Courier Corporation, 2003. * [48] M. Breakspear, S. Heitmann, and A. Daffertshofer, "Generative models of cortical oscillations: neurobiological implications of the kuramoto model," _Frontiers in human neuroscience_, vol. 4, p. 190, 2010. * [49] C. Liu, Y. Chen, J. Zhang, and Q. Chen, "CPG driven locomotion control of quadruped robot," in _2009 IEEE International Conference on Systems, Man and Cybernetics_. IEEE, 2009, pp. 2368-2373. * [50] M. Schilling, T. Hoinville, J. Schmitz, and H. Cruse, "Walknet, a bio-inspired controller for hexapod walking," _Biological cybernetics_, vol. 107, no. 4, pp. 397-419, 2013. * [51] S. Schaal, J. Peters, J. Nakanishi, and A. Ijspeert, "Control, planning, learning, and imitation with dynamic movement primitives," in _Workshop on Bilateral Paradigms on Humans and Humans: IEEE International Conference on Intelligent Robots and Systems (IROS 2003)_, 2003, pp. 1-21. * [52] J. Ding, X. Xiao, N. Tsagarakis, and Y. Huang, "Robust gait synthesis combining constrained optimization and imitation learning," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 3473-3480. * [53] A. Reske, J. Carius, Y. Ma, F. Farshidian, and M. Hutter, "Imitation learning from mpc for quadrupedal multi-gait control," in _2021 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2021, pp. 5014-5020. * [54] R. A. Brooks, "A robot that walks; emergent behaviors from a carefully evolved network," _Neural computation_, vol. 1, no. 2, pp. 253-262, 1989. * [55] R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_. MIT press, 2018. * [56] N. Sunderhauf, O. Brock, W. Scheirer, R. Hadsell, D. Fox, J. Leitner, B. Upcroft, P. Abbeel, W. Burgard, M. Milford _et al._, "The limits and potentials of deep learning for robotics," _The International Journal of Robotics Research_, vol. 37, no. 4-5, pp. 405-420, 2018. * [57] J. Ibarz, J. Tan, C. Finn, M. Kalakrishnan, P. Pastor, and S. Levine, "How to train your robot with deep reinforcement learning: lessons we have learned," _The International Journal of Robotics Research_, vol. 40, no. 4-5, pp. 698-721, 2021. * a framework for motor control in humans and humanoid robotics," in _Adaptive motion of animals and machines_. Springer, 2006, pp. 261-280. * [59] A. Gams, A. Ude, J. 
Morimoto _et al._, "Deep encoder-decoder networks for mapping raw images to dynamic movement primitives," in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 5863-5868. * [60] S. Bahl, M. Mukadam, A. Gupta, and D. Pathak, "Neural dynamic policies for end-to-end sensorimotor learning," in _NeurIPS_, 2020. * [61] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter, "Learning robust perceptive locomotion for quadrupedal robots in the wild," _Science Robotics_, vol. 7, no. 62, p. eaik02822, 2022. * [62] A. Belmonte-Baeza, J. Lee, G. Valsecchi, and M. Hutter, "Meta reinforcement learning for optimal design of legged robots," _IEEE Robotics and Automation Letters_, 2022. * [63] M. Srooij, J. Zhang, and R. Salakhutdinov, "Structured control nets for deep reinforcement learning," in _International Conference on Machine Learning_. PMLR, 2018, pp. 4742-4751. * [64] V. Liu, A. Adenij, N. Lee, and J. Zhao, "Recurrent control nets as central pattern generators for deep reinforcement learning," _SURI: The Stanford Undergraduate Research Journal_, vol. 18, no. 1, pp. 51-55, 2019. * [65] G. Endo, J. Morimoto, T. Matsubara, J. Nakanishi, and G. Cheng, "Learning CPG-based biped locomotion with a policy gradient method Application to a humanoid robot," _The International Journal of Robotics Research_, vol. 27, no. 2, pp. 213-228, 2008. * [66] L. Campanaro, S. Gangapurwala, D. De Martini, W. Merkt, and I. Havoutsis, "Cp-ag actor: Reinforcement learning for central pattern generators," _arXiv preprint arXiv:2102.12891_, 2021. * [67] S. Fujimoto, H. Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in _International Conference on Machine Learning_. PMLR, 2018, pp. 1587-1596. * [68] T. P. Lillicrap, J. I. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," _arXiv preprint arXiv:1509.02971_, 2015. * [69] S. Thrun and A. Schwartz, "Issues in using function approximation for reinforcement learning," in _Proceedings of the 1993 Connectionist Models Summer School Hillsdale, NJ. Lawrence Erlbaum_, vol. 6, 1993, pp. 1-9. * [70] T. Geijtenbeek, M. Van De Panne, and A. F. Van Der Stappen, "Flexible muscle-based locomotion for bipedal creatures," _ACM Transactions on Graphics (TOG)_, vol. 32, no. 6, pp. 1-11, 2013. * [71] M. L. Littman, "Markov games as a framework for multi-agent reinforcement learning," in _Machine learning proceedings 1994_. Elsevier, 1994, pp. 157-163. * [72] M. Schilling, K. Konen, F. W. Ohl, and T. Korthals, "Decentralized deep reinforcement learning for a distributed and adaptive locomotion controller of a hexapod robot," in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2020, pp. 5335-5342. * [73] F. A. Oliehoek, M. T. Spaan, and N. Vlassis, "Optimal and approximate Q-value functions for decentralized POMDPs," _Journal of Artificial Intelligence Research_, vol. 32, pp. 289-353, 2008. * [74] R. Lowe, Y. WU, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments," _Advances in Neural Information Processing Systems_, vol. 30, pp. 6379-6390, 2017. * [75] M. Tan, "Multi-agent reinforcement learning: Independent vs. cooperative agents," in _Proceedings of the tenth international conference on machine learning_, 1993, pp. 330-337. * [76] E. Coumans and Y. 
Bai, "Pybullet, a python module for physics simulation for games, robotics and machine learning," [http://pybullet.org](http://pybullet.org), 2016-2019. * [77] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, "OpenAI Gym," 2016. * [78] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelsheim, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "Pytorch: An imperative style, high-performance deep learning library," in _Advances in Neural Information Processing Systems 32_, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024-8035. * [79] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," _arXiv preprint arXiv:1412.6980_, 2014. * [80] Raspberry Pi Foundation, "Raspberry Pi 4 Model B," 2021, accessed: September 18, 2021. [Online]. Available: [https://www.raspberrypi.org/products/raspberry-pi-4-model-b/](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) * [81] D. Kahneman, _Thinking, fast and slow_. Macmillan, 2011. * [82] S. B. Kaufman, _Beyond general intelligence: The dual-process theory of human intelligence_. Yale University, 2009. * [83] W. Huang, I. Mordatch, and D. Pathak, "One policy to control them all: Shared modular policies for agent-agnostic control," in _International Conference on Machine Learning_. PMLR, 2020, pp. 4455-4464. * [84] P. Atoofi, F. H. Hamker, and J. Nassour, "Learning of central pattern generator coordination in robot drawing," _Frontiers in Neurorobotics_, p. 44, 2018. * [85] M. Ishige, T. Umedachi, T. Taniguchi, and Y. Kawahara, "Exploring Behaviors of Caterpillar-Like Soft Robots with a Central Pattern Generator-Based Controller and Reinforcement Learning," _Soft robotics_, vol. 6, no. 5, pp. 579-594, 2019. Appendix: DeepCPG Policies for Robot Locomotion **Aditya M. Deshpande, Eric Hurd, Ali A. Minai, Manish Kumar** University of Cincinnati Cincinnati, OH 45221 [email protected]; [email protected]; [email protected]; [email protected] ## Appendix A Derivatives of Kuramoto Model based CPG Network After incorporating Central Pattern Generators based on Kuramoto Model of oscillations in the neural actor, it remains differentiable. The equations 1-4 describe the model of central pattern generators (CPGs). \[\dot{\phi}_{i} = \omega_{i}+\sum_{j\neq i}a_{j}w_{ij}\sin(\phi_{j}-\phi_{i}-\varphi _{ij}), \tag{1}\] \[\ddot{a}_{i} = \alpha_{a}(\beta_{a}(A_{i}-a_{i})-\dot{a}_{i}),\] (2) \[\ddot{b}_{i} = \alpha_{b}(\beta_{b}(B_{i}-b_{i})-\dot{b}_{i}),\] (3) \[y_{i} = b_{i}+a_{i}\sin(\phi_{i}) \tag{4}\] Here for a CPG \(i\), \(\phi_{i}\) is the rate of change of phase, \(\omega_{i}\) is the natural frequency, \(A_{i}\) is the desired amplitude and \(B_{i}\) is the desired offset. \(w_{ij}\) represents the connection weight between the incoming connection from CPG \(j\) to CPG \(i\). \(\varphi_{ij}\) is the connection phase bias between the CPGs \(i\) and \(j\). \((\alpha_{a},\beta_{a})\) and \((\alpha_{b},\beta_{b})\) are the constants. The target trajectory or action command sent to the robot environment by CPG \(i\) is given by \(y_{i}\). We discretize equations 1-4 at time \(t\) for analytically solution. Equations 5-6 represent the discrete form of Eq. 1. 
\[\dot{\phi}_{i}^{t} = \omega_{i}+\sum_{j\neq i}a_{j}^{t-1}w_{ij}\sin(\phi_{j}^{t-1}- \phi_{i}^{t-1}-\varphi_{ij}), \tag{5}\] \[\phi_{i}^{t} = \phi_{i}^{t-1}+\dot{\phi}_{i}^{t-1}\delta t \tag{6}\] Equations 7-9 represent the discrete form of Eq. 2. \[\ddot{a}_{i}^{t} = \alpha_{a}(\beta_{a}(A_{i}-a_{i}^{t-1})-\dot{a}_{i}^{t-1}), \tag{7}\] \[\dot{a}_{i}^{t} = \dot{a}_{i}^{t-1}+\ddot{a}_{i}^{t-1}\delta t,\] (8) \[a_{i}^{t} = a_{i}^{t-1}+\dot{a}_{i}^{t-1}\delta t, \tag{9}\] Equations 10-12 represent the discrete form of Eq. 3. \[\ddot{b}_{i}^{t} = \alpha_{b}(\beta_{b}(B_{i}-b_{i}^{t-1})-\dot{b}_{i}^{t-1}), \tag{10}\] \[\dot{b}_{i}^{t} = \dot{b}_{i}^{t-1}+\ddot{b}_{i}^{t-1}\delta t,\] (11) \[b_{i}^{t} = b_{i}^{t-1}+\dot{b}_{i}^{t-1}\delta t, \tag{12}\] Similarly, Eq. 13 represent the discrete form of Eq. 4. \[y_{i}^{t} = b_{i}^{t}+a_{i}^{t}\sin(\phi_{i}^{t}) \tag{13}\] Using 6-13, a recurrent relationship can be derived for \(y_{i}\) and its derivatives with respect to CPG parameters \(\{\varphi_{ij},w_{ij},\omega_{i},A_{i},B_{i}\}\forall i,j\in\{1,..,N\}\). This relationship is similar to the one discussed for discrete dynamic movement primitives in [Gams et al., 2018]. Let there be a cost function \(\mathcal{L}(y_{i}^{t})\) which is dependent on the CPG output \(y_{i}^{t}\) at time \(t\). This loss function can be considered as a negative reward function. Thus, to backpropagate through this policy using the loss function, we need the partial derivatives with respect to the parameters that are outputs of the neural policy, i.e., CPG parameters \(\{\varphi_{ij},w_{ij},\omega_{i},A_{i},B_{i}\}\forall i,j\in\{1,..,N\}\). We can derive these partial derivatives using the Chain Rule in calculus. ### Phase Bias \(\varphi\) \[\frac{\partial\mathcal{L}(y_{i}^{t})}{\partial\varphi_{ij}} = \frac{\partial\mathcal{L}(y_{i}^{t})}{\partial y_{i}^{t}}\frac{ \partial y_{i}^{t}}{\partial\phi_{i}^{t}}\frac{\partial\phi_{i}^{t}}{\partial \varphi_{ij}} \tag{14}\] Here, the term \(\frac{\partial\phi_{i}^{t}}{\partial\varphi_{ij}}\) can be found using Eq. 6: \[\frac{\partial\phi_{i}^{t}}{\partial\varphi_{ij}} = \frac{\partial\phi_{i}^{t-1}}{\partial\varphi_{ij}}+\frac{ \partial\dot{\phi}_{i}^{t-1}}{\partial\varphi_{ij}}\delta t \tag{15}\] And using Eq. 5, to evalute \(\frac{\partial\dot{\phi}_{i}^{t-1}}{\partial\varphi_{ij}}\) as follows: \[\frac{\partial\dot{\phi}_{i}^{t-1}}{\partial\varphi_{ij}} = \frac{\partial\sum_{k\neq i}a_{j}^{t-2}w_{ik}\sin(\phi_{k}^{t-2}- \phi_{i}^{t-2}-\varphi_{ik})}{\partial\varphi_{ij}} \tag{16}\] \[= \frac{\partial\big{(}\cdots+a_{j}^{t-2}w_{ij}\sin(\phi_{j}^{t-2} -\phi_{j}^{t-2}-\varphi_{ij})+\cdots\big{)}}{\partial\varphi_{ij}}\] (17) \[= a_{j}^{t-2}w_{ij}\Big{(}\frac{\partial\phi_{j}^{t-2}}{\partial \varphi_{ij}}-\frac{\partial\phi_{i}^{t-2}}{\partial\varphi_{ij}}-1\Big{)} \cos(\phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij}) \tag{18}\] But, \(\phi_{j}^{t-2}\) depends on \(\varphi_{ji}\) and not on \(\varphi_{ij}\). We can rewrite Eq. 18 as Eq. 19 \[\frac{\partial\dot{\phi}_{i}^{t-1}}{\partial\varphi_{ij}} = a_{j}^{t-2}w_{ij}\cos(\phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij} )\Big{(}-\frac{\partial\phi_{i}^{t-2}}{\partial\varphi_{ij}}-1\Big{)} \tag{19}\] By substituting Eq. 19 in Eq. 
15 \[\frac{\partial\phi_{i}^{t}}{\partial\varphi_{ij}} = \frac{\partial\phi_{i}^{t-1}}{\partial\varphi_{ij}}+a_{j}^{t-2}w _{ij}\cos(\phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij})\Big{(}-\frac{\partial \phi_{i}^{t-2}}{\partial\varphi_{ij}}-1\Big{)}\delta t \tag{20}\] The loss function \(\mathcal{L}(y_{i}^{t})\) can be given similar treatment with respect to \(w_{ij}\) and \(\omega_{i}\). ### CPG Connection Weights \(w\) \[\frac{\partial\phi_{i}^{t}}{\partial w_{ij}} = \frac{\partial\phi_{i}^{t-1}}{\partial w_{ij}}+a_{j}^{t-2}\sin( \phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij})\delta t+a_{j}^{t-2}w_{ij}\cos( \phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij})\Big{(}-\frac{\partial\phi_{i}^{t-2 }}{\partial w_{ij}}\Big{)}\delta t \tag{21}\] ### CPG Natural Frequency \(\omega\) \[\frac{\partial\phi_{i}^{t}}{\partial\omega_{i}} = \frac{\partial\phi_{i}^{t-1}}{\partial\omega_{i}}+\delta t+a_{j}^{t -2}w_{ij}\sin(\phi_{j}^{t-2}-\phi_{i}^{t-2}-\varphi_{ij})\Big{(}-\frac{ \partial\phi_{i}^{t-2}}{\partial\omega_{i}}\Big{)}\delta t \tag{22}\] ### CPG Desired Amplitude \(A\) Similarly, taking partial derivative of loss function \(\mathcal{L}(y_{i}^{t})\) with respect to \(A_{i}\): \[\frac{\partial\mathcal{L}(y_{i}^{t})}{\partial A_{i}} = \frac{\partial\mathcal{L}(y_{i}^{t})}{\partial y_{i}^{t}}\frac{ \partial y_{i}^{t}}{\partial a_{i}^{t}}\frac{\partial a_{i}^{t}}{\partial A_{ i}} \tag{23}\] Using Eq. 9 to find the value of \(\frac{\partial a_{i}^{t}}{\partial A_{i}}\) in Eq. 23: \[\frac{\partial a_{i}^{t}}{\partial A_{i}} = \frac{\partial a_{i}^{t-1}}{\partial A_{i}}+\frac{\partial\dot{ a}_{i}^{t-1}}{\partial A_{i}}\delta t \tag{24}\] Using Eq. 8 we can evaluate \(\frac{\partial\dot{a}_{i}^{t-1}}{\partial A_{i}}\): \[\frac{\partial\dot{a}_{i}^{t-1}}{\partial A_{i}} = \frac{\partial\dot{a}_{i}^{t-2}}{\partial A_{i}}+\frac{\partial \ddot{a}_{i}^{t-2}}{\partial A_{i}}\delta t \tag{25}\] Similarly, Eq. 7 to find \(\frac{\partial\dot{a}_{i}^{t-1}}{\partial A_{i}}\): \[\frac{\partial\ddot{a}_{i}^{t-1}}{\partial A_{i}} = \alpha_{a}\Big{(}\beta_{a}(1-\frac{\partial a_{i}^{t-2}}{\partial A _{i}})-\frac{\partial\dot{a}_{i}^{t-2}}{\partial A_{i}}\Big{)} \tag{26}\] By substitution for Eq. 25 in Eq. 24, \[\frac{\partial a_{i}^{t}}{\partial A_{i}} = \frac{\partial a_{i}^{t-1}}{\partial A_{i}}+\Big{(}\frac{ \partial\dot{a}_{i}^{t-2}}{\partial A_{i}}+\frac{\partial\ddot{a}_{i}^{t-2}}{ \partial A_{i}}\delta t\Big{)}\delta t \tag{27}\] By substitution Eq. 26 in Eq. 27, \[\frac{\partial a_{i}^{t}}{\partial A_{i}}=\frac{\partial a_{i}^{t-1}}{ \partial A_{i}}+\Big{(}\frac{\partial\dot{a}_{i}^{t-2}}{\partial A_{i}}+ \alpha_{a}\big{(}\beta_{a}(1-\frac{\partial a_{i}^{t-3}}{\partial A_{i}})- \frac{\partial\dot{a}_{i}^{t-3}}{\partial A_{i}}\big{)}\delta t\Big{)}\delta t \tag{28}\] ### CPG Desired Offset \(B\) Now, taking partial derivative of loss function \(\mathcal{L}(y_{i}^{t})\) with respect to \(B_{i}\) and again using the Chain Rule: \[\frac{\partial\mathcal{L}(y_{i}^{t})}{\partial B_{i}} = \frac{\partial\mathcal{L}(y_{i}^{t})}{\partial y_{i}^{t}}\frac{ \partial y_{i}^{t}}{\partial b_{i}^{t}}\frac{\partial B_{i}^{t}}{\partial B_{i}} \tag{29}\] Now, we can derive the similar expression as Eq. 28 for \(B_{i}\) in Eq. 
30, \[\frac{\partial b_{i}^{t}}{\partial B_{i}}=\frac{\partial b_{i}^{t-1}}{ \partial B_{i}}+\Big{(}\frac{\partial\dot{b}_{i}^{t-2}}{\partial B_{i}}+\alpha _{b}\big{(}\beta_{b}(1-\frac{\partial b_{i}^{t-3}}{\partial B_{i}})-\frac{ \partial\dot{b}_{i}^{t-3}}{\partial B_{i}}\big{)}\delta t\Big{)}\delta t \tag{30}\] Therefore, equations 20, 21, 22, 28, 30 show that the dynamical system defined by Central Pattern Generators based or Kuramoto model of oscillation is differentiable with respect to \(\varphi_{ij},w_{ij},\omega_{i},A_{i},B_{i}\) respectively. Pseudo code ``` Maximum iterations \(Nmax\), Babbling steps \(\tau_{b}\), Update steps \(\tau_{update}\), Policy update delay \(\tau_{delay}\), CPG policy steps \(\tau_{c}\), Observation steps \(\tau_{o}\), Discount factor \(\gamma\), Polyak averaging constant \(\rho\), Learning rate \(\eta\), Noise clip limit \(c\), Policy exploration noise \(\sigma\) NOTE: We use \(x_{-\tau_{(:)}}\equiv x_{t-\tau_{(:)}:t}\) and \(x_{:\tau_{(:)}}\equiv x_{t:t+\tau_{(:)}}\) to avoid notation clutter Randomly initialize \(\pi_{\theta}\), Q-functions \(Q_{\kappa_{1}}\) and \(Q_{\kappa_{2}}\) Set target network parameters equal to main parameters: \(\theta^{targ}\leftarrow\theta\), \(\kappa_{1}^{targ}\leftarrow\kappa_{1}\) and \(\kappa_{2}^{targ}\leftarrow\kappa_{2}\) Initialize replay buffer \(\mathcal{R}\) while\(k<Nmax\)do Observe state \(s_{-\tau_{o}:}\), Goal \(g\) if\(k<\tau_{b}\)then Select CPG goals \(g_{CPG}\) randomly else Select CPG goals \(g_{CPG}\sim\pi_{\theta}(s_{-\tau_{a}:},g)\) end if for\(n=\{1,\ldots,\tau_{c}\}\)do Observe CPG hidden state \(h\) Select CPG actions \(\hat{g}_{j},h^{\prime}\sim\pi_{CPG}(g_{CPG},h)\) \(g_{j}=clip\left(\hat{g}_{j}+\epsilon,g_{j}^{\mathit{min}},g_{j}^{\mathit{max}}\right)\) where \(\epsilon\sim\mathcal{N}(0,\sigma)\); Execute \(g_{j}\) in environment Observe next state \(s^{\prime}\), reward \(r\) and environment terminal signal \(d\) indicating if \(s^{\prime}\) is terminal state Collect \((s,g_{j},s^{\prime},r,d,h,h^{\prime},g_{CPG},g)\) in \(\mathcal{R}\) where \(s=s_{t}\) \(k=k+1\) end for If \(s^{\prime}\) is terminal state, reset environment, reset goal \(g\) and reset CPG hidden state \(h\) if\(k>\tau_{b}\)then for\(i=\{1,\ldots,\tau_{update}\}\)do Randomly sample batches from episodes stored in \(\mathcal{R}\) \(B=\{(s_{-\tau_{o}:},g_{j,\tau_{c}},s^{\prime}_{:\tau_{c}},\tau_{:\tau_{c}},d_{ t+\tau_{c}},h_{t},h^{\prime}_{t+\tau_{c}},g_{CPG},g_{t},g_{t+\tau_{c}})\}\) Compute \(g^{\prime}_{CPG}\sim\pi_{\theta^{targ}}(s^{\prime}_{t+\tau_{c}-\tau_{o}:t+ \tau_{c}},g_{t+\tau_{c}})\) \(h^{\prime}=h^{\prime}_{t+\tau_{c}}\) for\(t=\{1,\ldots,\tau_{c}\}\)do Get CPG hidden state \(h^{\prime}\) Select target CPG actions \(\hat{g}^{\prime}_{j,t},h^{\prime\prime}\sim\pi_{CPG}(g^{\prime}_{CPG},h^{ \prime})\) Collect \(g^{\prime}_{j,t}=clip\left(\hat{g}^{\prime}_{j,t}+clip\left(\epsilon,-c,c \right),g_{j}^{\mathit{min}},g_{j}^{\mathit{max}}\right)\) where \(\epsilon\sim\mathcal{N}(0,\sigma)\) end for Compute targets: \(y(r_{\tau_{c}},s^{\prime}_{t+\tau_{c}-\tau_{o}:t+\tau_{c}},g_{t+\tau_{c}})=\) \(\sum r_{\tau_{c}}+\gamma(1-d_{t+\tau_{c}})\min\limits_{i=1,2}\left(Q_{\kappa_{ i}^{targ}}\left(s^{\prime}_{t+\tau_{c}-\tau_{o}:t+\tau_{c}},g_{t+\tau_{c}},g_{j,:\tau_{c}}^{\prime}\right)\right)\) Update Q-functions by one-step gradient descent using: \(\nabla_{\kappa_{i}}\frac{1}{|B|}\sum\limits_{B}\left(Q_{\kappa_{i}}(s_{-\tau_{ o}:},g_{t},g_{j,:\tau_{c}})-y(r_{:\tau_{c}},s^{\prime}_{t+\tau_{c}-\tau_{o}:t+ 
\tau_{c}},g_{t+\tau_{c}},d_{t+\tau_{c}})\right)^{2}\) for\(i=\{1,2\}\) if\(i\mod\tau_{delay}=0\)then Update policy by one-step gradient ascent \(\nabla_{\theta}\frac{1}{|B|}\sum\limits_{B}Q_{\kappa_{1}}(s_{-\tau_{o}:},g_{t},g_{j,:\tau_{c}})\) Update target networks: \(\theta^{targ}\leftarrow\rho\theta^{targ}+(1-\rho)\theta;\quad\kappa_{i}^{targ} \leftarrow\rho\kappa_{i}^{targ}+(1-\rho)\kappa_{i}\) for\(i=\{1,2\}\) end if end for end for ``` **Algorithm 1**Training DeepCPG Policies ## Appendix C Schematic of DeepCPG Policy Figure 1: Example schematic illustrating CPG Policy and dimensions of CPG parameter outputs from neural policy \(\pi_{\theta}\) for a robot with four CPG nodes. In CPG Policy, each circled number represents a CPG node corresponding to the active joint on the robot with that number. It should be noted that this is just an illustration. In the case of the actual robots considered for the experiments, the number of joints on the robot is more and the corresponding diagram of CPG Policy would have far more connections. We chose to show an example with only four nodes in the CPG policy to avoid clutter. Please refer Figure 2 and Section-III in the paper for the meaning of various symbols used here. ## Appendix D Additional Results
2308.00808
Towards Climate Neutrality: A Comprehensive Overview of Sustainable Operations Management, Optimization, and Wastewater Treatment Strategies
Various studies have been conducted in the fields of sustainable operations management, optimization, and wastewater treatment, yielding unsubstantiated recovery. In the context of Europes climate neutrality vision, this paper reviews effective decarbonization strategies and proposes sustainable approaches to mitigate carbonization in various sectors such as building, energy, industry, and transportation. The study also explores the role of digitalization in decarbonization and reviews decarbonization policies that can direct governments action towards a climate-neutral society. The paper also presents a review of optimization approaches applied in the fields of science and technology, incorporating modern optimization techniques based on various peer-reviewed published research papers. It emphasizes non-conventional energy and distributed power generating systems along with the deregulated and regulated environment. Additionally, this paper critically reviews the performance and capability of micellar enhanced ultrafiltration (MEUF) process in the treatment of dye wastewater. The review presents evidence of simultaneous removal of co-existing pollutants and explores the feasibility and efficiency of biosurfactant in-stead of chemical surfactant. Lastly, the paper proposes a novel firm-regulator-consumer interaction framework to study operations decisions and interactive cooperation considering the interactions among three agents through a comprehensive literature review on sustainable operations management. The framework provides support for exploring future research opportunities.
Vasileios Alevizos, Ilias Georgousis, Anna-Maria Kapodistria
2023-08-01T19:46:01Z
http://arxiv.org/abs/2308.00808v1
Towards Climate Neutrality: A Comprehensive Overview of Sustainable Operations Management, Optimization, and Wastewater Treatment Strategies

###### Abstract

Various studies have been conducted in the fields of sustainable operations management, optimization, and wastewater treatment, yielding unsubstantiated recovery. In the context of Europe's climate neutrality vision, this paper reviews effective decarbonization strategies and proposes sustainable approaches to mitigate carbonization in various sectors such as building, energy, industry, and transportation. The study also explores the role of digitalization in decarbonization and reviews decarbonization policies that can direct governments' action towards a climate-neutral society. The paper also presents a review of optimization approaches applied in the fields of science and technology, incorporating modern optimization techniques based on various peer-reviewed published research papers. It emphasizes non-conventional energy and distributed power generating systems along with the deregulated and regulated environment. Additionally, this paper critically reviews the performance and capability of micellar enhanced ultrafiltration (MEUF) process in the treatment of dye wastewater. The review presents evidence of simultaneous removal of coexisting pollutants and explores the feasibility and efficiency of biosurfactant instead of chemical surfactant. Lastly, the paper proposes a novel firm-regulator-consumer interaction framework to study operations decisions and interactive cooperation considering the interactions among three agents through a comprehensive literature review on sustainable operations management. The framework provides support for exploring future research opportunities.

_Keywords:_ Sustainable operations management, optimization, wastewater treatment, decarbonization strategies, carbonization, building, energy, industry, digitalization, decarbonization policies.

+ Footnote †: (c) 2023 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0](https://creativecommons.org/licenses/by/4.0)).

## 1 Introduction

Emphasis is placed on non-conventional energy and distributed power generating systems, as well as deregulated and regulated environments. Furthermore, a critical appraisal of micellar enhanced ultrafiltration (MEUF) process performance and capability in dye wastewater treatment is conducted. Evidence of simultaneous removal of co-existing pollutants is presented, and an investigation into the feasibility and efficiency of bio-surfactant in lieu of chemical surfactant is explored [1]. Finally, a novel firm-regulator-consumer interaction framework is proposed for studying operations decisions and interactive cooperation, considering interactions among three agents through an exhaustive literature review on sustainable operations management. This framework offers a foundation for delving into future research opportunities.

### Background and Motivation

Spanning back to the Minoan civilization around 3,200 BC, Greece's wastewater management history is enriched with the initial development of drainage and sewage systems, along with other sanitary infrastructures. Progressively, these technological advancements were adopted on the Greek mainland during subsequent eras of Greek civilization, encapsulating the Mycenaean, Classical, Hellenistic, and Roman periods [2].
_Objectives and Scope of the Study_

Research on achieving climate neutrality and greenhouse gas emissions reduction incorporates investigation into governance, economic tools, technology, and dialogue to promote sustainable development and mitigate climate change. The potential of non-conventional vehicles, such as battery electric, compressed natural gas, and hydrogen fuel cell electric vehicles, to reduce greenhouse gas emissions from land transportation is analyzed [3, 4]. Control of greenhouse gas emissions might be significantly influenced by economic instruments such as carbon pricing and carbon taxes [5], with green finance and the circular economy providing support for the transition towards carbon neutrality. An understanding of the principles and methodologies of climate change economics provides crucial insights [6]. Lastly, advancements in wastewater infrastructure construction have been witnessed in Greece, yet challenges persist [7, 8].

_Structure of the Paper_

Initially, an introduction is provided, whereby context and background information are elucidated, and objectives and hypotheses are outlined. Subsequently, a chapter on achieving climate neutrality in Europe through decarbonization strategies is presented, wherein various approaches and technologies are thoroughly examined, and their effectiveness in reducing greenhouse gas emissions is assessed. A discussion follows, wherein findings are critically analyzed, and potential limitations, implications, and areas for further research are identified [1]. Lastly, a conclusion is drawn, synthesizing key insights gleaned from the chapters and highlighting contributions made to the field of climate change mitigation [2].

## 2 Achieving Climate Neutrality in Europe through Decarbonization Strategies

Climate neutrality in Europe has been envisioned as a critical objective, with decarbonization strategies being extensively pursued to achieve this goal. Simultaneously, innovative wastewater treatment methods, such as Micellar Enhanced Ultrafiltration (MEUF), have been employed to address environmental concerns. Through the amalgamation of Europe's climate neutrality vision, decarbonization efforts, and the implementation of cutting-edge technologies like MEUF, a sustainable future is being forged for the region. By intertwining these essential topics, a comprehensive understanding of the multifaceted approach to environmental preservation and climate action can be gleaned [1].

_Advancing Towards Europe's Climate Neutrality Ambitions_

Progress towards Europe's climate neutrality ambitions has been steadily advanced through the implementation of comprehensive decarbonization strategies. The adoption of innovative technologies, such as Micellar Enhanced Ultrafiltration (MEUF), has been promoted to ensure effective wastewater treatment and to address environmental concerns [9]. A sustainable future for the region has been facilitated by the harmonious integration of these efforts, thereby enabling a multi-faceted approach to environmental preservation and climate action. Consequently, a deeper understanding of the intricate interplay between climate neutrality, decarbonization, and cutting-edge technologies has been provided, highlighting the commitment of European nations to achieving a sustainable and environmentally responsible future [1].

_Decarbonization Strategies in Key Sectors_

Decarbonization strategies in key sectors have been increasingly prioritized as a crucial component of Europe's climate neutrality efforts.
In the energy sector, a significant transition to renewable sources has been witnessed, while the reliance on fossil fuels has been gradually diminished. The transportation industry has been revolutionized by the widespread adoption of electric vehicles, and improvements in public transport systems have been made to reduce the carbon footprint. Industrial processes have been reevaluated and optimized to minimize greenhouse gas emissions, and sustainable practices have been integrated into agriculture and land use. As a result, the transition towards a low-carbon economy has been accelerated, and the ambition of achieving climate neutrality in Europe has been brought closer to realization [10]. #### 2.1.1 Building Sector Significant strides have been made in the building sector as part of the concerted efforts towards achieving climate neutrality in Europe. Energy-efficient practices and the use of eco-friendly materials have been increasingly prioritized, leading to the construction of greener, more sustainable buildings [11]. The adoption of passive house designs and the incorporation of renewable energy sources, such as solar panels and geothermal heating systems, have been embraced across the region. Furthermore, retrofitting older buildings with improved insulation and energy-efficient technologies has been widely implemented to reduce the overall energy consumption and greenhouse gas emissions attributable to the built environment [12]. In addition to these technological advancements, policy measures have been enacted to facilitate the transition towards a more sustainable building sector. Stringent energy performance standards have been established by European governments, with financial incentives and support programs being offered to encourage the construction of energy-efficient buildings and the renovation of existing structures. Collaborative efforts among architects, engineers, urban planners, and policymakers have been fostered to develop innovative solutions that address the unique challenges posed by urbanization and climate change. By focusing on the building sector as a key component of the decarbonization strategy, Europe continues to demonstrate its commitment to creating a sustainable and climate-resilient future [13]. #### 2.1.2 Energy Sector The energy sector has been identified as a crucial component in pursuing climate neutrality in Europe, as significant greenhouse gas emissions are attributed to energy production and consumption. The exploration and implementation of diverse low-carbon, renewable energy sources across Europe to replace fossil fuels has resulted in a considerable reduction of the carbon footprint [14]. Intensified efforts to enhance energy efficiency and promote conservation have centered on optimizing energy consumption in various sectors. Innovative technologies for improved integration of renewable energy into grids, encouragement of investment in research and development, and promotion of breakthrough energy technologies contribute to resilience, sustainability, economic growth, and job creation, underlining the potential of a green economy [15]. #### 2.1.3 Industrial Sector The industrial sector has been recognized as a key contributor to greenhouse gas emissions, and as such, it is imperative that significant decarbonization efforts are focused on this domain. 
Various measures have been adopted to reduce the environmental impact of industrial processes, including optimizing energy consumption, using renewable energy sources, and implementing innovative waste management techniques. Additionally, the adoption of circular economy principles has been encouraged, in which resource efficiency and waste reduction are prioritized, thereby promoting sustainable industrial practices [16]. In recent years, the application of advanced technologies, such as Micellar Enhanced Ultrafiltration (MEUF), has been widely embraced in the industrial sector to address wastewater treatment challenges. By utilizing these cutting-edge methods, industries have been able to reduce their environmental footprint and support Europe's climate neutrality ambitions. Furthermore, the collaboration between the public and private sectors has been instrumental in driving research and development in environmentally friendly technologies, leading to the creation of innovative solutions that support a cleaner and more sustainable industrial sector. As Europe moves towards achieving its climate goals, the continued commitment to decarbonization within the industrial sector remains a crucial component of the overall strategy. _Digitalization as a Catalyst for Decarbonization Efforts_ Digitalization's role as a catalyst for decarbonization efforts across various sectors in Europe is increasingly recognized [17]. Advanced technologies such as artificial intelligence, big data, and the Internet of Things (IoT) have resulted in substantial improvements in energy efficiency, resource management, and greenhouse gas emissions reduction. Enhanced monitoring and control over energy consumption patterns have been facilitated, and processes have been optimized. A key enabler of Europe's low-carbon economy transition, digitalization accelerates the shift from fossil fuels to renewables, contributes to reducing Europe's carbon footprint, and encourages the adoption of electric vehicles through smart charging infrastructure. Industrial transformation through digitalization results in more sustainable practices, promotes the circular economy, and minimizes energy consumption and waste, aligning with Europe's decarbonization and climate neutrality goals. Digitalization fosters sustainable development and accelerates Europe's transition towards a low-carbon future [17]. ## 3 Discussion Compliance with the Urban Wastewater Treatment Directive (UWWTD) and incorporation of water reuse into water resource management strategies are being pursued by the Athens Water Supply & Sewerage Company (EYDAP S.A.) through several initiatives in Greece. EU Cohesion funds co-financing has been authorized for two significant wastewater projects in East Attica, aimed at producing treated effluent wastewater suitable for limitless irrigation and urban reuse. Additionally, another wastewater plan is being developed to produce reclaimed water for aquifer recharge, while public datasets related to water supply and wastewater management are being utilized for enhancing the efficiency of these initiatives. Greece is following the Swedish paradigm in wastewater management, learning from key success factors such as a holistic policy approach, integration of recycling and energy recovery, and use of economic instruments to incentivize positive practices, all while considering the country's specific needs and alignment with European Union policies and international technological trends. 
### Wastewater Management in Greece To comply with the Urban Wastewater Treatment Directive (UWWTD) and include water reuse in its water resources management strategy, the Athens Water Supply & Sewerage Company (EYDAP S.A.) is working on several wastewater management initiatives. Two significant wastewater projects in East Attica (Rafina/Artemida and Marathon agglomerations) have been authorized for EU Cohesion funds co-financing and implementation by EYDAP S.A [18]. The goal of these programs is to create treated effluent wastewater that meets national criteria for limitless irrigation and urban reuse. Another wastewater plan is being developed incorporating the agglomerations of Koropi and Paiania, which will produce reclaimed water appropriate for aquifer recharge to restore the water quality of groundwater bodies [19]. In addition, a plethora of public data sets related to water supply and wastewater management, such as computer modeling of water supply and sewage networks, the implementation of an integrated SCADA system, and the history of water supply and wastewater management in Paris and the Republic of Belarus, could be used to further improve the efficiency of such initiatives [19]. ### Wastewater Management in Sweden In 2013, the Swedish Environmental Agency recommended a national target for increasing phosphorus recycling from wastewater sludge. Sweden has more than 80 years of experience protecting water quality, and the creation of phosphorus removal technology may be a Swedish contribution to advanced knowledge. Source separation systems have been found to be an efficient method of recovering nutrients and energy from wastewaters in both rural and urban settings, with research on the nutrient recovery potential and life cycle consequences of source separation systems undertaken in northern Finland and Sweden. In Sweden, exploratory research looked at how local administration and municipally held enterprises influence the governance of industrial symbiosis in the water and sewage sectors. Finally, a special issue was published on municipal wastewater management in 2021 [20]. ### Greek Government following the Swedish paradigm in wastewater management Several key success factors from Sweden's waste management paradigm can be learned and applied by Greece to enhance its waste management practices. A holistic policy approach is employed by Sweden, which addresses diverse public demands by integrating waste management with other environmental and economic policies. Recycling and energy recovery are integrated in Sweden, as 99% of municipal solid waste is recycled and energy is harnessed, with less than 1% going to landfills [21]. Taxes and tariffs are utilized by Sweden as economic instruments to discourage harmful practices and incentivize positive ones, such as recycling and energy recovery. Autonomy is given to Swedish municipalities, allowing them to have the economic and operational capacity to manage waste collection and treatment systems. By adopting these key success factors, improvements can be made to Greece's waste management practices, reductions in environmental pollution can be achieved, and sustainable development can be promoted. Careful consideration should be given to Greece's specific needs, such as interactions with the extensive tourism sector, and alignment of waste management strategies with the European Union's framework policies and international technological trends. 
In doing so, wastewater treatment plants can be transformed into sites where energy is efficiently used or produced, resources are recovered and reused, and environmental sustainability is practiced overall [21]. ## 4 Conclusions In compliance with the Urban Wastewater Treatment Directive (UWWTD), several wastewater management initiatives are being undertaken by the Athens Water Supply & Sewerage Company (EYDAP S.A.) to incorporate water reuse into Greece's water resources management strategy. Authorization for EU Cohesion funds co-financing has been granted for two significant wastewater projects in East Attica (Rafina/Artemida and Marathon agglomerations), which are being implemented by EYDAP S.A [18]. These programs aim to create treated effluent wastewater that meets national criteria for limitless irrigation and urban reuse. Another wastewater plan is being developed, which will incorporate the agglomerations of Koropi and Paiania, with the production of reclaimed water appropriate for aquifer recharge to restore the water quality of groundwater bodies [19]. Furthermore, the efficiency of such initiatives can be improved by utilizing a plethora of public data sets related to water supply and wastewater management, such as computer modeling of water supply and sewage networks, the implementation of an integrated SCADA system, and the history of water supply and wastewater management in Paris and the Republic of Belarus [19].
2305.18705
Algorithmic Foundations of Inexact Computing
Inexact computing also referred to as approximate computing is a style of designing algorithms and computing systems wherein the accuracy or correctness of algorithms executing on them is deliberately traded for significant resource savings. Significant progress has been reported in this regard both in terms of hardware as well as software or custom algorithms that exploited this approach resulting in some loss in solution quality (accuracy) while garnering disproportionately high savings. However, these approaches tended to be ad-hoc and were tied to specific algorithms and technologies. Consequently, a principled approach to designing and analyzing algorithms was lacking. In this paper, we provide a novel model which allows us to characterize the behavior of algorithms designed to be inexact, as well as characterize opportunities and benefits that this approach offers. Our methods therefore are amenable to standard asymptotic analysis and provide a clean unified abstraction through which an algorithm's design and analysis can be conducted. With this as a backdrop, we show that inexactness can be significantly beneficial for some fundamental problems in that the quality of a solution can be exponentially better if one exploits inexactness when compared to approaches that are agnostic and are unable to exploit this approach. We show that such gains are possible in the context of evaluating Boolean functions rooted in the theory of Boolean functions and their spectra, PAC learning, and sorting. Formally, this is accomplished by introducing the twin concepts of inexactness aware and inexactness oblivious approaches to designing algorithms and the exponential gains are shown in the context of taking the ratio of the quality of the solution using the "aware" approach to the "oblivious" approach.
John Augustine, Dror Fried, Krishna V. Palem, Duc-Hung Pham, Anshumali Shrivastava
2023-05-30T03:02:57Z
http://arxiv.org/abs/2305.18705v1
# Algorithmic Foundations of Inexact Computing ###### Abstract _Inexact computing_ also referred to as _approximate computing_ is a style of designing algorithms and computing systems wherein the accuracy or correctness of algorithms executing on them is deliberately traded for significant resource savings. Significant progress has been reported in this regard both in terms of hardware as well as software or custom algorithms that exploited this approach resulting in some loss in solution quality (accuracy) while garnering disproportionately high savings. However, these approaches tended to be ad-hoc and were tied to specific algorithms and technologies. Consequently, a principled approach to designing and analyzing algorithms was lacking. In this paper, we provide a novel _model_ which allows us to characterize the behavior of algorithms designed to be inexact, as well as characterize _opportunities_ and benefits that this approach offers. Our methods therefore are amenable to standard asymptotic analysis and provide a clean unified abstraction through which an algorithm's design and analysis can be conducted. With this as a backdrop, we show that inexactness can be significantly beneficial for some fundamental problems in that the quality of a solution can be _exponentially_ better if one exploits inexactness when compared to approaches that are agnostic and are unable to exploit this approach. We show that such gains are possible in the context of evaluating _Boolean functions_ rooted in the theory of Boolean functions and their spectra [37], PAC learning [48], and sorting. Formally, this is accomplished by introducing the twin concepts of _inexactness aware_ and _inexactness oblivious_ approaches to designing algorithms and the exponential gains are shown in the context of taking the ratio of the quality of the solution using the "aware" approach to the "oblivious" approach. ## 1 Introduction Much of the impetus for increased performance and ubiquity of information technologies is derived from the exponential rate at which technology could be miniaturized. Popularly referred to as Moore's law [35], this trend persisted from the broad introduction of integrated circuits over five decades ago, and was built on the promise of halving the size of transistors, which are hardware building blocks, roughly every eighteen months. As transistors started approaching 10 nanometers in size, two major hurdles emerged and threatened the hitherto uninterrupted promise of Moore's law. First, engineering reliable devices that provide a basis for viewing them as "deterministic" building blocks started becoming increasingly hard. Various hurdles emerged, ranging from vulnerability to noise [29, 31] to challenges such as ensuring reliable interconnections [6]. Additionally, smaller devices held out the allure that more of them could be packed into the same area or volume, thus increasing the amount of computational power that could be crammed into a single chip, while at the same time supporting smaller switching times and hence faster clock speeds, a trend characterized as _Dennard scaling_. However, this resulted in more switching activity within a given area, causing greatly increased energy consumption, often referred to as the "power wall" [6], as well as heat dissipation needs. 
Motivated by these hurdles, intense research along dimensions as diverse as novel devices and materials such as graphene [36], as well as fundamentally novel computing frameworks including quantum [4, 19, 13] and DNA [1, 5] based approaches, has been pursued. However, a common theme in all of these efforts is the need to preserve the predictable and repeatable or deterministic behavior that the resulting computers exhibit, very much in keeping with Turing's original vision [47]. Faced with a similar predicament when digital computers were in their infancy and their components were notoriously unreliable, pioneers such as von Neumann advocated methods for realizing reliable computing from unreliable elements, achieved through error correction [50]. Thus, the march towards realizing computers which retain their impressive reliability continues unabated. In sharp contrast, _inexact computing_[38, 40] was proposed as an unorthodox alternative to overcoming these hurdles, specifically by embracing "unreliable" hardware without attempting to rectify erroneous behavior. The resulting computing architectures solve the problem where the _quality_ of the solution is traded for _disproportionately_ high savings in (energy) resource consumption. The counter-intuitive consequence of this approach was that by embracing hardware architectures that operate erroneously as device sizes shrink, and deliberately so [9, 10], one could simultaneously garner energy savings! Thus, by accepting less than accurate hardware as a design choice, we can simultaneously overcome the energy or power wall. Therefore, in the inexactness regime, devices and resulting computing architectures are allowed to remain unreliable, and the process of deploying algorithms involves _co-designing_[9, 10] them with the technology and the architecture. This resulted in the need for novel algorithmic methods that trade off the quality (accuracy) of their solutions for savings in cost and (energy) resource consumption. To give this context, let us consider the behavior of a single inverter (gate) shown in Figure 1. Here, the probability of correct operation \(q\) of the gate is measured as the energy consumed by the gate increases. It is interesting to note that the energy consumed _increases_ steeply with \(q\), following the relationship \(e=\log_{2}\bigl(1/(1-q)\bigr)\) implied by the error model described next. Suppose we spend \(e\) units of energy to inexactly read a bit \(b\). Due to the inexactness, the bit read is \(b^{\prime}\). The probability with which \(b^{\prime}\) differs from \(b\) will depend on \(e\). Modeled on empirically validated physical measurements1, we will use the clean abstraction that the probability of error \(p=(1-q)=\Pr[b\neq b^{\prime}]=\frac{1}{2^{e}}\). Thus, a small decrease in the probability of correctness from the "desired" value of \(1\) will result in a disproportionately large savings in energy consumed [32]. The _inexact design philosophy_ is to assign different amounts of energy (or other resources) strategically to different parts of the computation in order to achieve useful trade-offs between energy and the quality of the outcome. Footnote 1: In its full form using CMOS characteristics, the probability of error is \(p=\frac{1}{2}\mathsf{erfc}\left(\frac{V_{dd}}{2\sqrt{2}\sigma}\right)\) where the error function \(\mathsf{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-u^{2}}du\). 
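To make this trade-off tangible, the short Python sketch below (our illustration, not code from the paper) evaluates both the idealized abstraction \(p=2^{-e}\) used throughout this paper and the fuller CMOS form from the footnote; the supply voltage \(V_{dd}\) and noise parameter \(\sigma\) passed to the latter are hypothetical placeholders.

```python
import math

def error_prob_idealized(e: float) -> float:
    """Idealized abstraction used in this paper: p = 2^{-e} for an energy investment e >= 0."""
    return 2.0 ** (-e)

def error_prob_cmos(v_dd: float, sigma: float) -> float:
    """Fuller CMOS form from the footnote: p = (1/2) * erfc(V_dd / (2 * sqrt(2) * sigma)).
    v_dd and sigma are hypothetical placeholder values used only for illustration."""
    return 0.5 * math.erfc(v_dd / (2.0 * math.sqrt(2.0) * sigma))

if __name__ == "__main__":
    # Each extra unit of energy halves the error probability in the idealized model.
    for e in (0.5, 1, 2, 4, 8):
        p = error_prob_idealized(e)
        print(f"e = {e:>4}: error p = {p:.6f}, correctness q = {1 - p:.6f}")
    print(f"CMOS form with hypothetical V_dd = 1.0, sigma = 0.2: p = {error_prob_cmos(1.0, 0.2):.6f}")
```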
Building on the inexact design philosophy, quite a few results were published through architectural artifacts that enabled trading the accuracy or quality of a solution, notably for energy consumption. Early examples included specialized architectures for signal processing [23], neural networks [14] and floating point operations for weather prediction [18, 16]. The overarching template for these designs was that of a co-processor or a processor, parts of which could be rendered inexact [10, 41]. In the literature, inexact computing also goes by the name of approximate computing. Mittal's survey [34] and references therein (along with its many citations) are a testament to the broad impact of inexact/approximate computing. The approaches in general involved exposing the hardware features to the algorithm designer and customizing the algorithm design to realize the solution while being cognizant of the architectural tradeoffs that the technology offered. This process was heuristic and generally ad-hoc due to the lack of _principled_ methodologies for design and analysis of algorithms in this setting. In this paper, we aim to remedy this situation by providing a clean and simple framework for exposing the unreliable aspects of underlying hardware to the algorithm design process through a foundational model, amenable to rigorous mathematical analysis. Intuitively, the more unreliable an element, the cheaper it is. Thus, the trade-off ubiquitous to our contribution in this paper is to strike the correct balance between cost and quality, the latter being the accuracy of the result. Thus, given a computing substrate which we model below, we can design an algorithm and determine through rigorous analysis whether it meets the quality or accuracy needs. Here, by rigorous analysis, we mean the asymptotic methods used by algorithm designers every day, expressed using \(O(n)\) and \(\Omega(n)\) where \(n\) is the size of the input. To the best of our knowledge, the model we present in this paper is the first instance that offers a clean framework for algorithm design and analysis where the architectural and hardware variability is exposed, thereby enabling us to leverage it for greater efficiency either in terms of speedup or energy consumption, or a suitable trade-off between the two. Many models were used informally in earlier works, where researchers used heuristics to take a model of hardware and map an algorithm onto it while trading "cost" for "quality" (see [26], [15], [33] for example, or through ad-hoc experimental methods [17], with some exceptions from earlier works in the limited domain of integer arithmetic [11, 42, 8] based on experimental findings [22]). In these contexts, the researchers were able to navigate a space of solutions, heuristically as noted above, and find a solution that provides the best "quality" or accuracy subject to a cost constraint, or vice-versa. In contrast, the model we introduce here provides a mathematically tractable framework that is amenable to a principled approach to algorithm design and analysis through judiciously abstracting the parts of hardware variations that affect cost and quality. In so doing, we claim our model strikes a balance between providing adequate detail to capture the impact of hardware (cost) variations and being simple enough for rigorous mathematical analysis. We demonstrate the value of our model in the context of analyzing the effect of inexactness for a variety of fundamental algorithmic problems. 
To lay the foundation for our work, we start with Boolean functions and basic operations like binary evaluation, XOR, etc. We next show the power of inexactness in the context of machine learning, a popular topic of interest, and of sorting, an important practical application. Using those functions, we provide a glimpse of the spectrum of possible results and build the big picture that demonstrates the usefulness of the model. In the interest of eliciting the principles of inexactness, the model we present is mathematically clean and provides an effective abstraction for theoretical investigation of inexactness. In reality, the error probability of a complex operation can be calculated by breaking down that operation into computational steps and propagating the errors through the computation. Such analyses quickly become mathematically complicated. We have therefore deliberately simplified our model of inexactness wherein we concentrate the effects of inexactness at the point where data is read. We believe that this simplification retains the principles of inexactness while dispensing with details that can be analyzed more naturally through simulation and experiments, which we hope to do in the future. ### Related work There has been significant progress in inexact computing over the past fifteen years. Early foundational models [39, 43, 44] were aimed at modeling the complexity of algorithms akin to random access machines and circuits [3], and are not well-suited to support algorithm analysis and design. Since then, much progress has been made in the context of inexact VLSI circuits and architectures (see [38] for a historical perspective and partial survey). Problem-specific mathematical methods do exist for analyzing the effect of inexactness when specific problems are considered, notably arithmetic [9, 12], along with optimization problems through which cost-accuracy tradeoffs were explored [27]. More recently, there has been quite a surge of interest in studying sorting using approximate operators, but the associated models do not have an explicit associated cost dimension to optimize [7, 30, 2, 21, 20]. ### Roadmap of the paper In sections 2 and 3 of this paper, we respectively describe our inexactness model in its full generality and a way of specifying Boolean functions using this model. We choose Boolean functions since they are at the core of understanding computational complexity and algorithmic behavior. For decision problems based on evaluating Boolean functions, we show in section 4 that an optimal energy allocation always exists. In section 5 we look at the conditions under which it helps to be aware of the importance of a variable in the Boolean function, characterized through its influence. Thus influence becomes the parameter on which an algorithm designer can base decisions about energy investments. This dichotomy is captured by the complementary notions of "influence aware" versus "influence oblivious" approaches to algorithm design. In section 6, we apply these insights in the context of the well-known PAC learning [48] problem. Next, in section 7 we study the difference between the influence aware and influence oblivious approaches in sorting. ## 2 The general inexactness model In inexact computing, a function or algorithm \(f\), which could be Boolean, is computed in a noisy environment (see Figure 1) where the result can be erroneous. 
To formalize this notion, we postulate a _reader_ as a function \(\mathcal{R}:\{0,1\}^{n}\rightarrow\{0,1\}^{n}\) that "scrambles" the data by flipping (changing from \(0\) to \(1\) or vice-versa) some of the input bits. The result of the interference of the reader is that instead of evaluating the function using "correct" values, we end up evaluating \(f\circ\mathcal{R}\). (Figure 1: The relationship between energy consumed and probability of correctness \(q\) of a single inverter built out of CMOS technology, from [32].) The extent to which the reader obfuscates \(\mathbf{x}\) depends on the energy invested. We are given an energy _budget_ \(\mathcal{E}\geq 0\) that can be apportioned into a vector of \(n\) elements \(\mathbf{e}=(e_{1},\cdots,e_{n})\) while ensuring that \(\sum_{i=1}^{n}e_{i}\leq\mathcal{E}\). Each of the \(e_{i}\)'s determines the probability with which our reader provides incorrect values of the corresponding \(\mathbf{x}_{i}\). This effect is characterized by a transformation \(\mathcal{F}:\mathbb{R}\rightarrow[0,1]\) such that for every \(i\), the reader flips bit \(i\) with probability \(p_{i}=\mathcal{F}(e_{i})\); namely, with probability \(q_{i}=1-p_{i}\), the bit \(\mathbf{x}_{i}\) is read correctly. In keeping with the measured behavior of CMOS devices outlined above, \(\mathcal{F}(e_{i})=1/2^{e_{i}}\). Clearly, the bigger \(e_{i}\) is, the _lower_ the chance that bit \(i\) is flipped; we write \(\mathbf{p}=(p_{1},\cdots,p_{n})\). Note that \(\mathbf{p}\) is not a probability but rather, each \(p_{i}\) is. As mentioned in the previous section, this model is inspired by the behavior of physical gates such as the inverter shown in Figure 1, or the NAND gate as mentioned in [9], where an approximately exponential relationship between the error probability and the energy investment (such as the energy invested in switching the value of a bit in a probabilistic CMOS switch) was observed. The probability that an error occurs in a computational gate can be abstracted to be the probability of error of reading the input bits to that gate. In reality, the error probability of a complex operation can be calculated by breaking down that operation into computational steps and aggregating the error probability throughout those steps. However, analyzing at such a level of detail quickly becomes infeasible. Our approach is to abstract away the details and place the error probability at certain key points in the computation. We believe this abstraction strikes the right balance between capturing what is essential on the one hand and, on the other hand, retaining a level of simplicity in the model that allows researchers to analyze algorithmic ideas. ## 3 Modeling inexactness in the context of Boolean functions The previous section proposed the general model for inexactness in a general setting. In this section we want to examine the model further in the context of Boolean functions, a fundamental component of computer science theory and practice. In addition, for the next parts of the paper let us consider a more general version of Boolean functions, \(f:\{0,1\}^{n}\rightarrow\mathbb{N}\), because this version of Boolean functions is remarkably more common in computing. 
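To fix ideas before turning to influence, the following minimal Python sketch (our illustration, not code from the paper) implements the reader \(\mathcal{R}\) of Section 2: each bit of \(\mathbf{x}\) is flipped independently with probability \(p_{i}=\mathcal{F}(e_{i})=2^{-e_{i}}\), and the function only ever sees the scrambled input.

```python
import random
from typing import Callable, List

def reader(x: List[int], e: List[float]) -> List[int]:
    """Inexact reader R: flip bit x[i] independently with probability p_i = 2^{-e[i]}."""
    assert len(x) == len(e)
    return [bit ^ (random.random() < 2.0 ** (-ei)) for bit, ei in zip(x, e)]

def inexact_eval(f: Callable[[List[int]], int], x: List[int], e: List[float]) -> int:
    """Evaluate f o R: the composition of the function with the noisy reader."""
    return f(reader(x, e))

if __name__ == "__main__":
    random.seed(0)
    x = [1, 0, 1, 1, 0, 1, 0, 0]                    # most significant bit first
    n, budget = len(x), 16.0                        # energy budget E
    e_oblivious = [budget / n] * n                  # equal split: p_i = 2^{-E/n} for every bit
    binary_eval = lambda bits: int("".join(map(str, bits)), 2)
    print("exact value    :", binary_eval(x))
    print("inexact samples:", [inexact_eval(binary_eval, x, e_oblivious) for _ in range(5)])
```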
_The overarching theme of this paper is inspired by the following: Given a function \(f:\{0,1\}^{n}\rightarrow\mathbb{N}\), an energy budget \(\mathcal{E}\), and a transformation function \(\mathcal{F}\), what is the optimal way to distribute \(\mathcal{E}\) across the \(n\) bits in order to minimize the overall obfuscation of the reader as \(f\) is computed._ Consider an \(n\)-bit binary vector denoted \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\in\{0,1\}^{n}\). For an index \(i\in[n]\), where \([n]\) denotes \(\{1,2,\ldots,n\}\), we use \(\mathbf{x}^{\oplus i}\) to denote the vector that is identical to \(\mathbf{x}\) apart from the bit \(i\), which is "flipped" to \(1-x_{i}\). Similarly, \(\mathbf{x}^{(i\to 0)}\) and \(\mathbf{x}^{(i\to 1)}\) denote the vectors identical to \(\mathbf{x}\) with only \(x_{i}\) changed to \(0\) or \(1\) respectively, and \(\mathbf{x}\sim\{0,1\}^{n}\) denotes a random value \(\mathbf{x}\) drawn _uniformly_ from \(\{0,1\}^{n}\). The key concept of _influence_ of the \(i\)th bit for a function \(f:\{0,1\}^{n}\rightarrow\mathbb{N}\) is \[\mathsf{Inf}(i)\triangleq|f(\mathbf{x})-f(\mathbf{x}^{\oplus i})|,\] where \(\mathbf{x}\) is drawn uniformly from \(\{0,1\}^{n}\). For convenience, we will refer to \(\mathsf{Inf}(i)\) as the _influence_ of index \(i\) without explicitly referring to \(\mathbf{x}\) when there is no ambiguity. We note that here, we differ from the traditional definition of influence [37], which takes the expectation _with respect to_ \(\mathbf{x}\) over all uniformly drawn vectors \(\mathbf{x}\). However, it is technically more convenient in our case to explicitly express this expectation as \(\mathbb{E}[\mathsf{Inf}(i)]\), averaged over all uniformly drawn vectors \(\mathbf{x}\), and we will adopt this convention in the sequel. Furthermore, for convenience, we arrange the input bits so that \(\mathbb{E}[\mathsf{Inf}(i)]\leq\mathbb{E}[\mathsf{Inf}(i+1)]\) for all \(i<n\). To understand the value of inexactness, influence and its expectation give us the potential impact, on the _quality_ of the answer, of assigning energy to a certain index \(i\) preferentially over another index \(j\). Informally, we wish to assign more energy to variables associated with indices that have greater expected influence. To formalize this idea, let us define the _total impact_ of a function \(f\), given an energy vector \(\mathbf{e}=(e_{i})_{i\leq n}\) with induced error probabilities \(\mathbf{p}=(p_{i})_{i\leq n}=(2^{-e_{i}})_{i\leq n}\), to be \[\mathsf{TIm}_{f}(\mathbf{p})=\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]\cdot p_{i}\] We can then use total impact as the measure of how far from the correct values the function drifts given a particular energy vector \(\mathbf{e}\). Now, given a function \(f\) and an energy budget \(\mathcal{E}\), our goal is to find \(\mathbf{e}=(e_{i})_{i\leq n}\) that gives the best quality and thus _minimizes_ \(\mathsf{TIm}_{f}\). The most obvious and naive approach is to consider an energy vector that is _influence oblivious_, where we allocate the energy equally to all the indices and therefore \(p_{i}=2^{-\mathcal{E}/n}\) for every \(i\); this corresponds to the traditional architectural design that treats all bits equally. 
In this case, the _expected total influence oblivious impact_ is \[\mathsf{TIm}_{f}(\mathbf{p}_{O})=2^{-\mathcal{E}/n}\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)] \tag{1}\] In contrast, an _influence aware_ allocation would be guided by the influence values, so that indices with higher influence are assigned "proportionately" higher energy. Let \(\mathsf{TIm}_{f}(\mathbf{p}_{A})\) be the _expected total influence aware impact_. Then, to understand the value of inexactness in the context of a function \(f\), we define a figure of merit \[\alpha=\frac{\mathsf{TIm}_{f}(\mathbf{p}_{O})}{\mathsf{TIm}_{f}(\mathbf{p}_{A})} \tag{2}\] the ratio of the total impact of the oblivious assignment (numerator) to the aware assignment (denominator). Intuitively, the closer \(\alpha\) is to 1, the less profitable it is to be influence aware, as the naive influence oblivious solution can suffice almost as well. Conversely, \(\alpha\) being large is a strong indication that influence aware solutions are likely to have a much higher impact on the quality of the solution. To understand this point, let us consider a simple example of evaluating a binary string. Due to binary representation, the impact of an error grows as we progress from the least significant bit to the most significant bit. Thus, we should expect an influence oblivious approach to perform poorly when compared to one which is influence aware. To capture this notion of increased "weight", ubiquitous to computer science due to binary numbers, we will compare influences as we step through the indices and define \[\beta_{i}\triangleq\frac{\mathbb{E}[\mathsf{Inf}(i+1)]}{\mathbb{E}[\mathsf{Inf}(i)]} \tag{3}\] where \(\beta_{i}\) is the relative influence of index \(i+1\) compared to \(i\). A straightforward observation is to note that functions with \(\beta_{i}=1\) for all \(1\leq i<n\) are functions where all the indices are equally influential; we will refer to such functions as being _influence symmetric_; classical problems such as _parity_ and the _OR_ function are examples. In contrast, _influence asymmetric_ functions have \(\beta_{i}>1\) for some indices \(i\). We are particularly interested in functions where all \(\beta_{i}\) values are equal, and we denote that common value by \(\beta\). ## 4 Existence of an Optimal Energy Assignment for any Boolean Function To capture the possible benefits of inexactness aware approaches precisely, we formulate the following problem since the model is new and we wish to characterize its properties. **Problem 1**.: _[Basic Inexactness Problem] We define our basic inexactness problem comprising a basic inexact problem instance and an optimization criterion. The basic inexactness problem instance is a tuple \((f,\mathcal{E},\mathcal{F})\) where \(f:\{0,1\}^{n}\rightarrow\mathbb{N}\) is a Boolean function, \(\mathcal{E}\) is the inexactness energy amount, and \(\mathcal{F}\) is the energy translation function. Given the inexactness problem instance \((f,\mathcal{E},\mathcal{F})\), our optimization criterion is to find an energy vector \((e_{1},\cdots,e_{n})\) whose elements sum to at most \(\mathcal{E}\) (i.e., \(\sum_{i}e_{i}\leq\mathcal{E}\)) such that if \(\mathbf{p}=(p_{1},\cdots,p_{n})\) where \(p_{i}=\mathcal{F}(e_{i})\) for every \(i\), then \(\mathsf{TIm}_{f}(\mathbf{p})\) is minimized._ Without loss of generality, we assume that \(\mathcal{F}(e)=1/2^{e}\). We will now show that an optimal solution always exists. 
**Theorem 1**.: _For every inexactness problem with any given \(\mathcal{E}\), a solution that minimizes the total impact exists and can be computed._ Proof.: Since \(p_{i}=1/2^{e_{i}}\), we have that \(e_{i}=-\log_{2}(p_{i})\). Therefore for the constraint \(\sum_{i\leq n}e_{i}=\mathcal{E}\) we have: \[\sum_{i\leq n}e_{i}=\mathcal{E}\iff\sum(\log(p_{i}))=-\mathcal{E}\iff\log(\prod p_{i})=-\mathcal{E}\iff\prod_{i\leq n}p_{i}=1/2^{\mathcal{E}} \tag{4}\] Therefore we can restate the inexactness problem of Problem 1 as follows. **Problem 2**.: _The problem denoted by \(GP(f,\mathcal{E},\mathcal{F})\) is to find \(\mathbf{e}=(e_{1},\cdots,e_{n})\) such that_ * \(\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i}\) _is minimized_ * \(p_{i}=1/2^{e_{i}}\) * \(\prod_{i\leq n}p_{i}=1/2^{\mathcal{E}}\) * \(0<p_{i}\leq 1\) _for every_ \(i\leq n\) _(_\(p_{i}\) _cannot be_ \(0\)_)_ To solve Problem 2 we use the AM-GM inequality, according to which, for all non-negative reals \(a_{1},\cdots,a_{n}\), we have: \[\frac{1}{n}\sum_{i\leq n}a_{i}\geq(\prod_{i\leq n}a_{i})^{\frac{1}{n}} \tag{5}\] Since all the \(p_{i}\) and the \(\mathbb{E}[\mathsf{Inf}(i)]\) are non-negative, we can apply the AM-GM inequality to get: \[\frac{1}{n}\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i}\geq(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i})^{\frac{1}{n}} \tag{6}\] Thus, \[\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i}\geq n(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]\prod_{i\leq n}p_{i})^{\frac{1}{n}} \tag{7}\] Since we have the constraint that \(\prod_{i\leq n}p_{i}=1/2^{\mathcal{E}}\), we therefore have: \[\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i}\geq n(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]2^{-\mathcal{E}})^{\frac{1}{n}} \tag{8}\] Recall that we need to find values for \(p_{i}\) such that \(\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]p_{i}\) is minimized. Since, no matter what values of \(p_{i}\) we choose, the left side of the inequality above is always at least as large as the right side, the minimum is attained when both sides are equal. In AM-GM, equality is reached when \(\mathbb{E}[\mathsf{Inf}(i)]p_{i}\) is identical for all \(i\). Using this we can now establish \(p_{i}\) as follows. Assuming \(\mathbb{E}[\mathsf{Inf}(i)]p_{i}=k\) for every \(i\), we get \[nk=n(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]2^{-\mathcal{E}})^{\frac{1}{n}}. \tag{9}\] Thus, \[k=(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)]2^{-\mathcal{E}})^{\frac{1}{n}}. \tag{10}\] Therefore for every \(i\leq n\) we have \[p_{i}=(\prod_{j\leq n}\mathbb{E}[\mathsf{Inf}(j)]2^{-\mathcal{E}})^{\frac{1}{n}}/\mathbb{E}[\mathsf{Inf}(i)] \tag{11}\] and setting \(e_{i}=-\log(p_{i})\) solves the problem as required. It is not always clear how such an influence aware energy assignment can be efficiently computed. Even the task of determining whether \(\mathbb{E}[\mathsf{Inf}(i)]>0\) for a bit \(i\) is a co-NP-hard problem, as it encompasses asking whether a given CNF formula has no satisfying assignment. ## 5 Where does inexactness help? We have seen that for a Boolean function, there always exists an optimal energy vector that minimizes the total impact. We now ask: when exactly does it pay to be influence aware? We shed some light on this question by examining the ratio \(\alpha\) in two extreme cases: the case where \(\beta_{i}=\beta\) for all \(i\), for a constant \(\beta\) greater than 1, and the case where \(\beta\) is 1. 
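Before the analysis, a minimal Python sketch (our own illustration, using the paper's assumption \(\mathcal{F}(e)=2^{-e}\) and a budget chosen large enough that every \(p_{i}\leq 1\)) applies the allocation derived in Theorem 1 to the binary-evaluation example, where \(\mathbb{E}[\mathsf{Inf}(i)]=2^{i-1}\), and reports the resulting ratio of oblivious to aware total impact:

```python
import math
from typing import List

def aware_allocation(influences: List[float], budget: float) -> List[float]:
    """Theorem 1: p_i = (prod_j E[Inf(j)] * 2^{-E})^{1/n} / E[Inf(i)].
    For very small budgets some p_i may exceed 1 and would need clamping;
    the budget chosen below is large enough to avoid that."""
    n = len(influences)
    log2_gm = (sum(math.log2(v) for v in influences) - budget) / n
    return [2.0 ** (log2_gm - math.log2(v)) for v in influences]

def total_impact(influences: List[float], probs: List[float]) -> float:
    """TIm_f(p) = sum_i E[Inf(i)] * p_i."""
    return sum(v * p for v, p in zip(influences, probs))

if __name__ == "__main__":
    n, budget = 8, 40.0
    influences = [2.0 ** i for i in range(n)]      # binary evaluation: E[Inf(i)] = 2^{i-1}
    p_oblivious = [2.0 ** (-budget / n)] * n       # equal energy E/n per bit
    p_aware = aware_allocation(influences, budget)
    t_obl = total_impact(influences, p_oblivious)
    t_awr = total_impact(influences, p_aware)
    print(f"oblivious impact = {t_obl:.4f}, aware impact = {t_awr:.4f}, ratio = {t_obl / t_awr:.3f}")
    # Matches the closed form (beta^n - 1) / (n * beta^{(n-1)/2} * (beta - 1)) derived below, with beta = 2.
    print(f"closed form      = {(2 ** n - 1) / (n * 2 ** ((n - 1) / 2)):.3f}")
```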
Recall that \(\alpha\) is the ratio between the expected total influence oblivious impact and the expected total influence aware impact, and \(\beta_{i}\) is the ratio between the expected influence of bit \(i+1\) and bit \(i\). For a given inexactness problem \((f,\mathcal{E},\mathcal{F})\), let \(\mathbf{e}=(e_{1},e_{2},\ldots,e_{n})\) be the optimal energy vector (with corresponding error probability vector \(\mathbf{p}=(p_{1},p_{2},\ldots,p_{n})\)) obtained by the influence aware solution. Influence Aware Investments. We now focus on the important case when all the \(\beta_{i}\) values equal a common constant value \(\beta\). This special case is in fact quite common and is exemplified by our previous example of evaluating a binary bit string, where the influence of the bit positions varies exponentially owing to the binary representation. **Theorem 2**.: _Let \(f:\{0,1\}^{n}\to\mathbb{N}\) be a Boolean function with parameter \(\beta>1\). Then, the corresponding \(\alpha\) is at least \(\Omega\left(\frac{\beta^{n/2}}{n}\right)\), thereby implying that an influence aware investment is exponentially better (with respect to \(n\)) than its influence oblivious counterpart._ Proof.: We first have that \[GM=(\prod_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)])^{1/n}=(\mathbb{E}[\mathsf{Inf}(1)]^{n}\prod_{i=0}^{n-1}\beta^{i})^{1/n}=\mathbb{E}[\mathsf{Inf}(1)]\beta^{(n-1)/2}\] and \[AM=1/n(\sum_{i\leq n}\mathbb{E}[\mathsf{Inf}(i)])=1/n(\mathbb{E}[\mathsf{Inf}(1)]\sum_{i=0}^{n-1}\beta^{i})=\frac{\mathbb{E}[\mathsf{Inf}(1)]}{n}\frac{\beta^{n}-1}{\beta-1}\] Therefore we have that \[\alpha=AM/GM=\frac{\beta^{n}-1}{n\beta^{(n-1)/2}(\beta-1)}\] This ratio is \(\Omega(\frac{\beta^{n/2}}{n})\) when \(\beta>1\). To continue the example of evaluating \(n\)-bit binary strings, we present the following. **Corollary 1**.: _The value of \(\alpha\) for the Binary Evaluation (BE) function \(f:\{0,1\}^{n}\to\mathbb{N}\), which takes a binary input and returns its decimal evaluation, is at least \(\Omega\left(\frac{2^{n/2}}{n}\right)\)._ Influence Oblivious Investments. To reiterate, formally, a Boolean function \(f\) is influence-symmetric if all of the bits of \(f\) have the same influence (i.e., \(\mathsf{Inf}(i)=\mathsf{Inf}(j)\) for every \(i\neq j\)). Recall that from the definition of \(\alpha\), we see that if \(f\) is an influence-symmetric function then \(\alpha=1\). An important class of Boolean functions is the _symmetric Boolean functions_, defined as the set of functions \(f\) such that for all \(\mathbf{x}\) and any permutation \(\sigma\), \(f(\mathbf{x})=f(\sigma(\mathbf{x}))\); therefore changing the order of the bits does not change the output of the function. We now have: **Theorem 3**.: _The influence oblivious assignment is an optimal energy distribution for influence-symmetric Boolean functions. Furthermore, every symmetric function is also influence-symmetric, so an influence oblivious investment is optimal._ Proof.: The key observation that we need to make first is that symmetric functions \(f\) can be evaluated just by counting the number of 1's in the input. Let \(d_{j}\triangleq f(x^{j})\), where \(x^{j}\in\{0,1\}^{n}\) has exactly \(j\) 1's. Let us consider \(\mathsf{Inf}(1)\). What is the probability that \(\mathsf{Inf}(1)\) takes the value, say, \(a\)? To answer this, let \(J_{a}\subset[n]\) denote the set of all \(j\) such that \(|d_{j}-d_{j-1}|=a\). Then, clearly, \(\mathsf{Inf}(1)=a\) with probability \(\sum_{j\in J_{a}}\frac{\binom{n-1}{j}}{2^{n}}\). 
This same argument can be repeated for obtaining \(\Pr[\mathsf{Inf}(i)=a]\). Thus, the random variables \(\mathsf{Inf}(i)\) and \(\mathsf{Inf}(i^{\prime})\), \(i\neq i^{\prime}\), have the same distributions, thereby implying that \(f\) is influence-symmetric. Let us consider the parity function \(\mathsf{XOR}(\mathbf{x})\) - a quintessential symmetric function - that outputs 1 when the number of 1's in \(\mathbf{x}\) is odd, and \(0\) otherwise. In this case, \(\mathsf{Inf}(i)=1\) for all \(i\). Thus, \(\mathsf{TIm}(f,\mathcal{E})=n2^{-\mathcal{E}/n}\), and this matches the total impact of the influence oblivious assignment wherein each bit is assigned energy \(\mathcal{E}/n\). Thus, \(\alpha=1\) for XOR. ## 6 The influence ratio and PAC learning Machine learning has been one of the most popular topics in computer science for decades. In this section, we would like to establish a direct relation between the influence ratio \(\beta\) and a widely studied form of theoretical machine learning called Probably Approximately Correct (PAC) learning, where Boolean functions are learned with some margin for error. By exploring the relation between the concepts of fixed \(\beta\) and PAC learning, we show that a function is more PAC-learnable if its influence ratio is greater than 1. _Thus, this establishes the connection between the cases where machine learning performs well and the cases which can benefit from an influence aware approach_. In this section, we use an alternative, more general form of Boolean functions, \(f:\{-1,+1\}^{n}\rightarrow\mathbb{R}\). This form is widely used in the study of PAC learning and analysis of Boolean functions. For a subset \(S\subseteq[n]\) let \(x^{S}=\prod_{i\in S}x_{i}\) where every \(x_{i}\in\{-1,1\}\). For a Boolean function \(f\), another way to describe \(f\) is as a multilinear polynomial called the Fourier expansion of \(f\), \[f(\mathbf{x})=\sum_{S\subseteq[n]}\hat{f}(S)x^{S}\] where the real number \(\hat{f}(S)\) is called the coefficient of \(f\) on \(S\). Then we have from [37] the following, relating to the influence of a bit \(i\). 
By using these assumptions, and an additional bound \(\mathsf{Inf}\) for which \(\mathsf{Inf}(n)<\mathsf{Inf}\) we can say the following 2. Given \(\epsilon>0\) calculate constant \(k>0\) such that Footnote 2: The bounds \(\beta_{1},\beta_{2},\mathsf{Inf}\) can be learned by means such as random sampling. \[\mathsf{Inf}\cdot\frac{\beta_{1}^{-k}}{\beta_{1}-1}<\epsilon/2 \tag{12}\] For simplicity denote \(\mathsf{Inf}^{f}(n)\) the parameter \(\mathsf{Inf}(n)\) for the specific function \(f\). Then we have. **Theorem 5**.: _Let \(C\) be a concept class and \(\beta_{1},\beta_{2}\geq 1\), such that every function \(f:\{-1,1\}^{n}\to\{-1,1\}\) in \(C\) has \(\beta_{f}\) where \(\beta_{1}\leq\beta_{f}\leq\beta_{2}\), and such that \(\mathsf{Inf}^{f}(n)\leq\mathsf{Inf}\) for some given parameter \(\mathsf{Inf}>0\), and let \(\epsilon>0\). Then \(C\) can be learned from random examples with only error \(\epsilon\) in time \(poly(n^{k},1/\epsilon)\)._ The proof of the theorem can be found in Appendix A. ## 7 Modeling inexactness in the context of sorting We have been studying the idea of inexactness applied to Boolean functions. However, in real world applications, not all computational tasks are Boolean functions. Hence in this section, we examine the problem of sorting, an important computing task, to illustrate the benefit of influence aware investments. Here we employ a setting wherein the data is an array \(C\) of \(N\) items stored in the "cloud" and a local computer called the client must compute a sorted ordering of the data. We begin with each data item \(C[j]\), \(1\leq j\leq N\), being \(n\) bits drawn uniformly at random from \(\{0,1\}^{n}\) representing integers in the range \([0,2^{n}-1]\). Since \(C\) is in the cloud, the client can only access the data items indirectly through a predefined functions \(\mathsf{Compare}(a,b,\mathbf{e})\), where \(a\) and \(b\) are two indices of the array \(C\) and \(\mathbf{e}\) is an energy vector. Since comparison seeks to find the most significant bit in which \(C[a]\) and \(C[b]\) differ, it employs bit-wise comparison. Thus, \(\mathbf{e}\) serves the purpose of apportioning energy values across the bits. The client's goal is to compute a permutation of \([N]\triangleq\{1,2,\ldots,N\}\) that matches the sorted ordering of \(C\). Our outcome will be an approximation of the correctly sorted ordering where, in the spirit of inexactness, minor errors that wrongly order numbers close in magnitude are more acceptable than egregious errors that reorder numbers that differ a lot. Thus, we measure sortedness using a measure that we call the _weighted Kendall's \(\tau\) distance_[21] that we now seek to define. We establish some notations first. Let \(C^{*}\) denote the sorted permutation of the arbitrary array \(C\). Consider two indices \(a\) and \(b\), both in the range \([1,N]\). Let \(X(a,b)\) be an indicator random variable that is 1 when \(C[a]\) and \(C[b]\) are ordered differently in \(C\) and \(C^{*}\), and 0 when they are ordered the same way. The classical Kendall's \(\tau\) distance [28] counts the number of inversions and is defined as \(\sum_{a\neq b}X(a,b)\). We are however interested in the weighted Kendall's \(\tau\) distance of \(\pi\) denoted \(\mathsf{wkt}^{C}()\) and it is defined as \[\mathsf{wkt}^{C}\triangleq\sum_{a\neq b}\left[|C[a]-C[b]|\cdot X(a,b)\right] \tag{13}\] The intuition behind this measure is that bigger difference between two numbers having incorrect relative order should result in bigger penalty, and vice versa. 
A reasonable inexact comparison scheme should have a smaller error chance for numbers that are farther apart - this is the case when comparisons use the inexactness aware energy allocation scheme, as we will see in this section. Now, let us abuse the notation and use \(\mathsf{wkt}^{C}(\mathbf{e})\) to denote the expected weighted Kendall's \(\tau\) distance of the permutation that we receive when we perform quicksort on input array \(C\) using energy vector \(\mathbf{e}\). Note that this value is averaged over all the runs of quicksort, with the random factors being the pivot choices of quicksort and the comparison error from inexactness: \[\mathsf{wkt}^{C}(\mathbf{e})\triangleq\mathbb{E}_{C^{\mathbf{e}}}\left[\mathsf{wkt}^{C^{\mathbf{e}}}\right] \tag{14}\] where \(C^{\mathbf{e}}\) denotes a permutation of array \(C\) after quicksort using energy vector \(\mathbf{e}\). In the next part of this section we are interested in analyzing the ratio of the expected weighted Kendall's \(\tau\) distance using inexactness oblivious energy to its inexactness aware energy counterpart (the expectation is taken over all possible input arrays \(C\)): \[\alpha^{*}\triangleq\mathbb{E}_{C}[\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{o}})]/\mathbb{E}_{C}[\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{a}})] \tag{15}\] which is analogous to the ratio \(\alpha\) defined for Boolean functions. Our goal is to show that this ratio grows exponentially (in \(n\)). For the energy aware case, the energy vector is \(\mathbf{e}_{\mathbf{a}}=(1,2,\ldots,n)\), thereby assigning higher energy values to higher order bits. On the other hand, for the energy oblivious case, we use equal energy for all the bits, so the energy vector is \(\mathbf{e}_{\mathbf{o}}=(\frac{n+1}{2},\frac{n+1}{2},\cdots,\frac{n+1}{2})\). Both the inexactness aware and the inexactness oblivious algorithms employ quicksort using the \(\mathsf{Compare}(\cdot,\cdot,\cdot)\) function, but with their respective energy vectors. In the theorems and proofs that follow, we will use \(I(a,b,\mathbf{e})\) to denote the event that the comparison between two numbers \(a\) and \(b\) is incorrect using the energy vector \(\mathbf{e}\). We use \(Q(a,b,\mathbf{e})\) to denote the event that the quicksort algorithm with input \(C\) using energy vector \(\mathbf{e}\) results in two numbers \(a\) and \(b\) having the incorrect relative positions (for simplicity we omit the input array \(C\) from this notation). For simplicity, we will assume that the elements in our input array are distinct. Finally, from its definition in equations 13 and 14 and from linearity of expectation, \(\mathsf{wkt}^{C}(\mathbf{e})\) can be calculated as follows \[\mathsf{wkt}^{C}(\mathbf{e})=\sum_{1\leq a<b\leq N}|C[a]-C[b]|\cdot\Pr[Q(C[a],C[b],\mathbf{e})] \tag{16}\] We state our desired lower bound on \(\alpha^{*}\) as follows. The proof of this theorem can be found in Appendix B. **Theorem 6**.: _The ratio \(\alpha^{*}\) is \(\Omega(\frac{2^{n/2}}{N\log N})\)._ The advantage of the influence-aware approach can be shown not only through the ratio \(\alpha^{*}\), which is based on the difference between the two approaches' average weighted Kendall's \(\tau\) distance over all inputs, but also through the distance difference for the majority of individual inputs. Let us define a good input as one for which the inexactness aware assignment results in an exponentially lower value of the weighted Kendall's \(\tau\) distance; the rest of the inputs are called bad. 
More specifically, a good input \(C\) is such that the ratio between \(\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{o}})\) and \(\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{a}})\), the expected weighted Kendall's \(\tau\) of quicksort with input \(C\) under energy obliviousness and energy awareness respectively, is \(\Omega(\frac{2^{n/6}}{N\log N})\). Let \(g\) and \(b\) denote the number of good and bad inputs, respectively. We will show that \(g/b\) is exponential in \(n\). **Theorem 7**.: _The ratio of the number of good vs. bad inputs is \(\Omega(\frac{2^{n/3}}{N^{2}})\). Therefore, as \(n\to\infty\), \(g/b\to\infty\)._ The proof of this theorem can be found in Appendix C. ## 8 Variable Precision Computation In practice, manufacturers usually lack the resources to assign a different level of energy to every bit in a chip. A more practical approach is that only \(\gamma\) different levels of energy are assigned to the bits, usually with the lowest energy level being \(0\). This approach is usually referred to as _variable precision computation_, and has been studied in some works such as [24], owing to its simplicity and effectiveness. In this section, we will focus on the scenario where \(\gamma=2\), i.e., a large proportion of the energy is spread equally over the most significant \(\frac{n}{k}\) bits, while a small proportion, if not none, of the energy is assigned to the remaining \(n(1-1/k)\) bits. We denote this energy vector \(\mathbf{e}_{\mathsf{t}}\). Since the total energy is \(\approx n^{2}/2\), \(\mathbf{e}_{\mathsf{t}}=\{0,0,\ldots,0,\frac{nk}{2},\ldots,\frac{nk}{2}\}\). The goal of this section is to study the effect of using this energy vector compared to the inexactness oblivious approach for basic functionalities, for which we again use sorting and the weighted Kendall's \(\tau\) metric as an example. We are interested in bounding the value of the ratio \(\mathbb{E}_{C}[\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{o}})]/\mathbb{E}_{C}[\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{t}})]\), which is analogous to \(\alpha^{*}\) in Section 7, and the ratio between good and bad inputs, where, following the convention in Section 7, good inputs are generally the ones that make \(\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{o}})/\mathsf{wkt}^{C}(\mathbf{e}_{\mathsf{t}})\) exponential in \(n\) and bad inputs are the rest. Toward that goal, we prove the following two theorems. The combined result of the two theorems gives us an estimate of a 'good' truncation ratio \(k\), which is inside the interval \((\frac{5}{3},4)\). **Theorem 8**.: _Let \(k\) be a parameter and assume we use an energy allocation scheme \(\mathbf{e}_{\mathsf{t}}\) where energy is divided equally over the \(\frac{n}{k}\) most significant bits. Then, for an arbitrary input array \(C\) drawn from the uniform random distribution,_ \[\Pr\left[\frac{\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{o}})}{\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{t}})}=O(\frac{2^{n(k-5/3)/6}}{N\log N})\right]=O(\frac{N^{2}}{2^{\frac{n}{\max(3,k)}}})\] _Consequently, for constant \(k>5/3\), if we define bad inputs to be the ones that make the ratio \(\frac{\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{o}})}{\mathsf{wkt}^{C}(\mathbf{e}_{\mathbf{t}})}\) \(O(\frac{2^{n(k-5/3)/6}}{N\log N})\) and good inputs to be the remaining, then the ratio between good and bad inputs is at least \(\Omega(\frac{2^{n/\max(3,k)}}{N^{2}})\)._ **Theorem 9**.: _Let \(k\) be a parameter and assume we divide the energy equally over the \(\frac{n}{k}\) most significant bits. 
**Remarks** The variable precision energy allocation scheme is a more practical approach to inexactness where the energy is focused only on the most significant \(n/k\) bits. In this section, we have shown that for a value of \(k\) in the interval \((\frac{5}{3},4)\), sorting using the variable precision energy allocation scheme is exponentially better than using the inexactness oblivious energy allocation in the weighted Kendall's \(\tau\) metric. This is true both for the average case (Theorem 9) and for most of the possible inputs, with only an exponentially small number of exceptions3 (Theorem 8). Note that the specific range \(\frac{5}{3}<k<4\) resulted from the analysis with \(\approx\frac{n^{2}}{2}\) total energy. For analyses using different levels of total energy and different restrictions (such as the number \(\gamma\) of distinct energy levels), we might arrive at different schemes of energy distribution. Nevertheless, the core principle of inexactness should remain applicable. Footnote 3: Of course, input data will often depend on the particular application at hand and may not be immediately suitable for variable precision in the manner we have presented. However, we believe that the principle can be adapted to work nevertheless. ## 9 Concluding remarks The algorithmic end of computing has a rich history of examples such as _randomization_ [45, 46] and approximation algorithms [49], and combined approaches such as _fully polynomial randomized approximation schemes (FPRAS)_ [25], which departed radically from the traditional computing philosophy of guaranteeing correctness. Specifically, they embraced the possibility that computations can yield results that are not entirely correct while offering (potentially) significant savings in resources consumed, typically running time. Despite this relaxed expectation on the quality of their solutions, randomized and approximation algorithms were always deployed on reliable computing systems. In contrast, inexact computing crucially differs by advocating the use of "unreliable" computing architectures and systems directly, and thus blends the behavior of the platform on which an algorithm executes into the algorithm itself. One can therefore view the inexactness in our model as a way of extending the principles of randomization and approximation down to the hardware level, thereby improving the overall gains that we can garner. The ability to lower cost by lowering energy, and the allocation of that energy to different parts of the computation guided by influence, are made explicit and can be managed by the algorithm designer. By demonstrating the value of this idea in canonical and illustrative settings, namely the theory of Boolean functions, PAC learning, and inexact sorting, we aimed to show that it applies across a range of problems. In principle, the model we have introduced and whose value we demonstrated through several foundational building blocks is truly general in the following sense: _given any computing engine and hence an instance of our model, an algorithm can be designed and evaluated. 
Additionally, due to its theoretical generality, the model's parameters allow us to assert the cost and quality of algorithms as functions of parameter values, and these assertions can thus, in the spirit of the foundations of computer science, be stated as theorems that hold asymptotically._ In addition to extending the notion of randomized and approximate computation to the hardware level, we believe that the framework of inexactness that we have introduced can seamlessly extend beyond its immediate motivation from CMOS technology. At its core, the potential for inexactness stems from the notion of influence, which is orthogonal to the computing technology that is employed. In our work, we have framed the model using CMOS principles and the concomitant error function that decays exponentially with energy. Alternative technologies like quantum computing may offer slightly different modeling parameters, but we believe that the core principles based on the notion of influence will remain intact and effective. Thus, we hope that our work will enable future work that injects inexactness in a principled way into a wide range of contexts.
2305.17436
Creating Personalized Synthetic Voices from Post-Glossectomy Speech with Guided Diffusion Models
This paper is about developing personalized speech synthesis systems with recordings of mildly impaired speech. In particular, we consider consonant and vowel alterations resulted from partial glossectomy, the surgical removal of part of the tongue. The aim is to restore articulation in the synthesized speech and maximally preserve the target speaker's individuality. We propose to tackle the problem with guided diffusion models. Specifically, a diffusion-based speech synthesis model is trained on original recordings, to capture and preserve the target speaker's original articulation style. When using the model for inference, a separately trained phone classifier will guide the synthesis process towards proper articulation. Objective and subjective evaluation results show that the proposed method substantially improves articulation in the synthesized speech over original recordings, and preserves more of the target speaker's individuality than a voice conversion baseline.
Yusheng Tian, Guangyan Zhang, Tan Lee
2023-05-27T10:17:42Z
http://arxiv.org/abs/2305.17436v1
# Creating Personalized Synthetic Voices from Post-Glossectomy Speech with Guided Diffusion Models ###### Abstract This paper is about developing personalized speech synthesis systems with recordings of mildly impaired speech. In particular, we consider consonant and vowel alterations resulting from partial glossectomy, the surgical removal of part of the tongue. The aim is to restore articulation in the synthesized speech and maximally preserve the target speaker's individuality. We propose to tackle the problem with guided diffusion models. Specifically, a diffusion-based speech synthesis model is trained on original recordings, to capture and preserve the target speaker's original articulation style. When using the model for inference, a separately trained phone classifier will guide the synthesis process towards proper articulation. Objective and subjective evaluation results show that the proposed method substantially improves articulation in the synthesized speech over original recordings, and preserves more of the target speaker's individuality than a voice conversion baseline. Yusheng Tian, Guangyan Zhang, Tan Lee Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR {ytstian0617, gyzhang}@link.cuhk.edu.hk, [email protected] **Index Terms**: personalized speech synthesis, post-glossectomy speech, articulation disorder, guided diffusion models ## 1 Introduction People diagnosed with oral cancer may need to have the entire voice box removed by surgery, losing the ability to speak thereafter. One way to save their voices is to collect audio recordings of their speech before the scheduled operation date and create personalized text-to-speech (TTS) models from those speech data. The TTS systems would allow these individuals to communicate with other people using their own voices. Ideally the speech used to create TTS models should carry accurate and clear pronunciation such that highly intelligible speech can be generated. In reality, some of the patients already suffer speech impairment at the time of recording. For example, tongue cancer, one of the most common sites of oral cancer, is often treated with surgery. People who underwent tongue surgery typically show consonant and vowel alterations [1, 2]. We encountered one such case recently. A young female Cantonese speaker approached us, expressing the hope of saving her voice through speech synthesis technology. She received partial glossectomy six years ago, and about 3/4 of her tongue was removed surgically. This has resulted in significant difficulties in articulating accurately due to the defect in the tongue. Consonant and vowel alterations are reflected noticeably in this young lady's speech recordings. This application context leads to the task undertaken in the present study: developing a personalized TTS system from post-glossectomy speech. Our goal is to restore articulation in the synthesized speech and maximally preserve the target speaker's individuality. The problem of voice reconstruction from impaired speech was tackled in a few previous studies. They share the same idea of substituting impaired speech segments with normal ones. The Voicebank project [3, 4, 5] applied HMM-based speech synthesis techniques to create personalized synthetic voices for people with speech disorders. Synthetic voices were repaired by substituting selected acoustic feature parameters with those of an average healthy voice. In [6, 7], neural speech synthesis systems were developed for individuals with dysarthria. 
A two-step process was developed: first train a neural TTS model on recordings from a healthy speaker, then perform voice conversion on the synthesized speech from the healthy speaker to the impaired speaker. The above substitution-based approaches are considered suboptimal for the task we are considering, where the major symptom of the speech disorder is the alteration of certain phones. On the one hand, the articulation style is part of an individual's personality. Preserving only the voice timbre as in [6, 7] would completely discard the target speaker's articulation style, including both bad and good aspects, leading to undesirable loss of individuality. On the other hand, although HMM-based systems can allow acoustic feature substitution on selected phones [4], the average voice model may not provide a replacement close enough to the target speaker's voice. We propose to restore articulation in the synthesized speech with guided diffusion models. Specifically, a diffusion-based TTS model [8] is trained on the target speaker's original recordings (with impaired articulation on certain speech sounds), and hence captures and preserves the original articulation style. When using the trained model for inference, the synthesis process is guided by the gradient from a separately trained phone classifier. The output of the phone classifier indicates how precise the articulation is on a continuous scale, which facilitates finer adjustment of articulation than simple substitution. The proposed system design is inspired by GuidedTTS [9], which employs a phone classifier to guide an unconditional diffusion model for speech generation. However, the motivation of using classifier guidance in GuidedTTS is to develop TTS systems with untranscribed speech data. The phone classifier guidance in the present study is, in spirit, more similar to shallow-fusion biasing in end-to-end automatic speech recognition (ASR) [10, 11], in which an external language model steers the recognition process towards a particular context at each decoding step. In the proposed system, the external phone classifier guides the synthesis process towards correct pronunciation at each reverse diffusion step. In the next section, we outline key concepts in diffusion models that are related to diffusion-based speech synthesis. Section 3 describes the proposed system of guided speech synthesis. Sections 4 and 5 present the experimental results on the aforementioned real patient case. Section 6 concludes and discusses the limitations of the proposed approach. ## 2 Background Diffusion models are a family of probabilistic generative models. The modelling involves a forward process that progressively contaminates the data with random noise, and a reverse process that generates data from random noise. The forward and backward processes were originally formulated as Markov chains [12, 13, 14]. In [15], the discrete-time Markov process is generalized to a continuous-time stochastic differential equation (SDE). Specifically, the forward process is defined by the following equation: \[\mathrm{d}\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}_{t}\;, \tag{1}\] where \(t\sim\mathcal{U}(0,T)\); \(\mathbf{w}_{t}\) is a standard Brownian motion; \(\mathbf{f}(\cdot)\) and \(g(\cdot)\) are the so-called drift and diffusion coefficients. 
These two coefficients are constructed such that as \(t\) grows from \(0\) to \(T\), the probability distribution of \(\mathbf{x}_{t}\) would evolve from the original data distribution to a tractable prior, typically a Gaussian distribution with fixed mean and variance. The corresponding reverse process also forms an SDE as stated in [16]: \[\mathrm{d}\mathbf{x}_{t}=\left[\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\nabla_{\mathbf{x}_{t}} \log P(\mathbf{x}_{t})\right]\mathrm{d}t+g(t)\mathrm{d}\bar{\mathbf{w}}_{t} \tag{2}\] where \(\bar{\mathbf{w}}_{t}\) is a standard Brownian motion running backward in time. The core part of a diffusion model is to train a neural network \(S_{\mathbf{\theta}}\) to estimate the value of \(\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t})\), a.k.a., the score. Once the score is known for all \(t\), we can sample data by solving the reverse SDE using numerical solvers. It was proved in [15] that the reverse SDE is associated with an ordinary differential equation (ODE) as follows: \[\mathrm{d}\mathbf{x}_{t}=\left[\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g^{2}(t)\nabla_{ \mathbf{x}_{t}}\log P(\mathbf{x}_{t})\right]\mathrm{d}t\;, \tag{3}\] which shares the same marginal distribution for all \(t\). It is empirically shown in [8] that inference with the ODE formulation requires fewer sampling steps. In the remainder of this paper, we use the ODE formulation to model the reverse process. ## 3 Approach ### Diffusion-based TTS We follow GradTTS [8] and use diffusion models to generate Mel-Spectrograms conditioned on the input text and speaker labels. The forward process SDE is defined as: \[\mathrm{d}\mathbf{x}_{t}=\frac{1}{2}\left(\mathbf{\mu}-\mathbf{x}_{t}\right)\beta_{t} \mathrm{d}t+\sqrt{\beta_{t}}\mathrm{d}\mathbf{w}_{t}\;, \tag{4}\] where \(t\sim\mathcal{U}(0,1)\), \(\beta_{t}=\beta_{0}+\left(\beta_{1}-\beta_{0}\right)t\) is a predefined linear noise scale, and \(\mathbf{\mu}\) is an average Mel-spectrogram corresponding to the input phone sequence. One important result derived from (4) is the conditional distribution of \(\mathbf{x}_{t}\) given \(\mathbf{x}_{0}\): \[P(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{\rho}(\mathbf{x}_{0},t),\sigma_{t}^{2} \mathbf{I})\;, \tag{5}\] where \(\mathbf{\rho}(\mathbf{x}_{0},t)=(1-e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}\mathrm{d}s}) \mathbf{\mu}+e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}\mathrm{d}s}\mathbf{x}_{0}\), and \(\sigma_{t}^{2}=1-e^{-\int_{0}^{t}\beta_{s}\mathrm{d}s}\). If \(\mathbf{x}_{0}\) is known, we can then draw samples of \(\mathbf{x}_{t}\) using the reparameterization trick: \[\mathbf{x}_{t}=\mathbf{\rho}(\mathbf{x}_{0},t)+\sigma_{t}\mathbf{\epsilon}_{t}\;,\mathbf{ \epsilon}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\;. \tag{6}\] The reverse time ODE is given by: \[\mathrm{d}\mathbf{x}_{t}=\frac{1}{2}\beta_{t}\left[\mathbf{\mu}-\mathbf{x}_{t}-\nabla_{ \mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{\mu},s)\right]\mathrm{d}t\;, \tag{7}\] where \(s\) stands for the speaker label. Note that unlike the unconditional reverse process given by (3), the reverse process in the context of TTS is conditioned on the input text and the speaker label. 
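For readers who prefer code, a minimal PyTorch-style sketch of Eqs. (5)-(7) is given below. The linear schedule constants are illustrative placeholders rather than the exact GradTTS settings, the function names are our own, and `score` stands in for the output of the trained network \(S_{\mathbf{\theta}}\).

```python
import math
import torch

BETA0, BETA1 = 0.05, 20.0                         # illustrative linear noise schedule

def beta(t):
    return BETA0 + (BETA1 - BETA0) * t

def cum_beta(t):
    # \int_0^t beta_s ds for the linear schedule
    return BETA0 * t + 0.5 * (BETA1 - BETA0) * t ** 2

def forward_sample(x0, mu, t):
    """Draw x_t given x_0 via Eqs. (5)-(6): the mean drifts from x0 toward mu."""
    decay = math.exp(-0.5 * cum_beta(t))
    sigma = math.sqrt(1.0 - math.exp(-cum_beta(t)))
    return (1.0 - decay) * mu + decay * x0 + sigma * torch.randn_like(x0)

def reverse_ode_step(xt, mu, score, t, dt):
    """One explicit Euler step of the reverse-time ODE in Eq. (7), moving from t toward 0;
    `score` approximates grad_x log P(x_t | mu, s), i.e. S_theta(x_t, t, mu, s)."""
    return xt - 0.5 * beta(t) * (mu - xt - score) * dt

# Toy usage with an 80-bin "Mel-spectrogram" of 100 frames:
x0 = torch.randn(80, 100)
mu = torch.zeros_like(x0)
xt = forward_sample(x0, mu, t=0.5)
```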
The neural network \(S_{\mathbf{\theta}}\) is trained to predict the conditional score function \(\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{\mu},s)\) using a weighted L2 loss: \[\mathcal{L}(\mathbf{\theta})=\mathbb{E}_{t}\sigma_{t}^{2}\mathbb{E}_{\mathbf{x}_{0}} \mathbb{E}_{\mathbf{x}_{t}}\|S_{\mathbf{\theta}}(\mathbf{x}_{t},t,\mathbf{\mu},s)+\sigma_{t} ^{-1}\mathbf{\epsilon}_{t}\|_{2}^{2}\;, \tag{8}\] where we have made use of the following results: \[P\left(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{\mu},s\right)=P\left(\mathbf{x}_{t}|\mathbf{x}_{0} \right)=\mathcal{N}(\mathbf{\rho}(\mathbf{x}_{0},t),\sigma_{t}^{2}\mathbf{I})\;, \tag{9}\] \[\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{x}_{0})=-\sigma_{t}^{-1}\mathbf{\epsilon }_{t}\;. \tag{10}\] ### Guided synthesis process Once the diffusion-based acoustic model is trained, the target speaker's articulation style will be embedded into the generated Mel-Spectrogram through the speaker condition \(s\). The synthetic voice will inevitably preserve the problematic articulation pattern. In order to improve articulation in the synthesized speech, another condition \(\mathbf{y}_{t}\) is provided to the score estimator. \(\mathbf{y}_{t}\) stands for the phone label sequence of \(\mathbf{x}_{t}\) predicted by an external frame-level phone classifier, i.e., what the external phone classifier thinks the synthetic voice is saying. Now the reverse time ODE is augmented as \[\mathrm{d}\mathbf{x}_{t} =\frac{1}{2}\beta_{t}(\mathbf{\mu}-\mathbf{x}_{t})\mathrm{d}t \tag{11}\] \[-\frac{1}{2}\beta_{t}\left[\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}| \mathbf{\mu},s)+\nabla_{\mathbf{x}_{t}}\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\right]\mathrm{d }t\;,\] where we have assumed that the phone classifier is speaker-independent, i.e. \(\log P(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\mu},s)=\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\). To see why incorporating this external phone classifier can improve the articulation in the synthetic voice, consider the following scenarios. First, suppose the articulation is not good, i.e. a low value of \(P(\mathbf{y}_{t}|\mathbf{x}_{t})\). Thus the additional term \(\nabla_{\mathbf{x}_{t}}\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\) will bias the synthesis process towards a direction that increases \(\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\). If the phone classifier is trained on speech from healthy speakers, then the articulation would be improved as \(\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\) grows. Second, suppose the articulation is good, i.e., a high log probability of \(P(\mathbf{y}_{t}|\mathbf{x}_{t})\). In such case the additional term is close to zero and will have little impact on the synthesized speech. Consequently, the good part of the target speaker's articulation style will be preserved. In practice, we usually scale \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{y}_{t}|\mathbf{x}_{t})\) with a positive coefficient \(\gamma\) in order to obtain better sample quality [9, 17]. We adopt the norm-based scale [9], which is computed as \(\gamma=\alpha\cdot\|\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{\mu},s)\|_{2}/\| \nabla_{\mathbf{x}_{t}}\log P(\mathbf{y}_{t}|\mathbf{x}_{t})\|_{2}\), where \(\alpha\) is a hyperparameter to be tuned. 
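Putting the pieces together, one guided update of Eq. (11) with the norm-based scale can be sketched as a single sampling step. The `phone_classifier(xt, t).log_prob(y)` interface is an assumption made for the sake of the sketch, not the actual API used in the paper.

```python
import torch

beta = lambda t: 0.05 + (20.0 - 0.05) * t          # same illustrative schedule as above

def guided_reverse_step(xt, mu, y, score_net, phone_classifier, t, dt, alpha=0.3):
    """One Euler step of Eq. (11): the conditional score from S_theta is augmented with
    the phone classifier's gradient, rescaled by the norm-based coefficient gamma."""
    xt = xt.detach().requires_grad_(True)
    log_p = phone_classifier(xt, t).log_prob(y).sum()     # sum of frame-wise log P(y_t | x_t)
    grad_cls = torch.autograd.grad(log_p, xt)[0]

    score = score_net(xt, t, mu)                          # approximates grad log P(x_t | mu, s)
    gamma = alpha * score.norm() / (grad_cls.norm() + 1e-8)

    drift = 0.5 * beta(t) * (mu - xt - (score + gamma * grad_cls))
    return (xt - drift * dt).detach()
```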
Assume that the phone label predictions for different frames are independent of each other; then we have \[\nabla_{\mathbf{x}_{t}}\log P(\mathbf{y}_{t}|\mathbf{x}_{t})=\sum_{i=1}^{L}\nabla_{\mathbf{x}_{t} }\log P(y_{t}^{(i)}|\mathbf{x}_{t})\;, \tag{12}\] where \(L\) refers to the number of acoustic frames. This property enables flexible guidance weights on different frames. For example, we can put extra weights on frames corresponding to phones on which the target speaker tends to make mistakes. ### System design Figure 1 gives an overview of the proposed speech synthesis system. The input phone sequence \(\mathbf{p}\) is fed into a text encoder. The resulting phone embeddings, as well as the speaker label \(s\), are sent to a duration predictor to estimate the frame length for each phone. The input phone sequence \(\mathbf{p}\) is then expanded to \(\mathbf{\tilde{p}}\) according to the predicted duration \(\mathbf{d}\), similar to the length regulator mechanism in [18, 19, 20]. In GradTTS \(\mathbf{\mu}\) is designed to be dependent on both the input phone sequence and the speaker label, but we restrict it to be conditioned only on the phone sequence, with the hope that a shared phone embedding across speakers will transfer some articulation knowledge from healthy speakers to the target speaker, and therefore alleviate the articulation disorder in the synthetic voice. Specifically, \(\mathbf{\mu}\) is obtained by looking up a predefined phone-to-Mel-spectrum dictionary computed over training data, similar to that in [21]. The score estimator \(S_{\mathbf{\theta}}\) is trained on speech data from a multi-speaker speech corpus, plus recordings from the target speaker. In contrast, the phone classifier is trained only on recordings from healthy speakers, to ensure that it is sensitive to abnormal articulation style. ## 4 Experimental Setup ### Baseline systems for comparison We compare the proposed system with two TTS baselines. The first baseline, _DuriTaco_, is adapted from Tacotron [22], with the attention module replaced by a duration-informed aligner as in DurIAN [19]. The second baseline, _DuriTaco+VC_, has exactly the same architecture as DuriTaco, but is trained on voice-converted speech: unimpaired speech from a healthy source speaker converted into the target speaker's voice via a voice conversion (VC) model. We use the recently proposed NANSY [23] for voice conversion, as it shows strong performance in cross-lingual settings. The underlying assumption is that impaired speech can be viewed as a new language; therefore, a strong cross-lingual VC model is expected to perform well on voice conversion between normal and impaired speech. ### Datasets We use the following four datasets for different purposes. * **CUSENT** [24] for multi-speaker TTS pre-training, as well as for training the external phone classifier and many-to-many VC model. It is a Cantonese speech corpus of around 20 hours of clean read speech from 80 speakers, sampled at 16 kHz. * **Recording_T** for personalized TTS fine-tuning. It contains 377 utterances, giving a total of approximately 24 minutes of speech from our target speaker, a young Cantonese-speaking female. The recording script is adapted from CUSENT and all Cantonese initials, finals and tones are included. The audio was recorded with a TASCAM DR-44WL at 44.1 kHz at her home, under reasonably quiet conditions. * **Recording_S** as the source speaker's data for voice conversion. 
It contains 377 utterances that share exactly the same content as Recording_T, recorded by another female Cantonese speaker with no articulation disorder. The recording was carried out in a sound-proof studio. * **KingASR086** for training a CTC-based ASR model for objective evaluation. It is a commercial Cantonese speech corpus purchased from SpeechOcean1, which contains 80 hours of read speech from 136 speakers, sampled at 44.1 kHz. Footnote 1: [https://en.speechocean.com/datacenter/recognition.html](https://en.speechocean.com/datacenter/recognition.html) ### Implementation details We adopted the official implementation of GradTTS 2 for the proposed model, but modified the text encoder to be conditioned only on the input phone sequence, as mentioned earlier. For simplicity, we also used forced alignment [25] instead of monotonic alignment search to obtain the duration labels. We used the Jasper architecture [26] for the frame-level phone classifier, but removed the convolution stride so that it could predict phone labels for each frame. The DuriTaco baseline was similar to a public implementation of a baseline DurIAN3, except that the duration predictor and the acoustic model were separately trained. The VC model in the DuriTaco+VC baseline followed a public implementation of NANSY4. The CTC-based ASR model for objective evaluation was implemented following the recipe from SpeechBrain5. Footnote 2: [https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS](https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS) Footnote 3: [https://github.com/ivanovk/durian-pytorch](https://github.com/ivanovk/durian-pytorch) All TTS models generate 80-dimensional log Mel-Spectrograms to drive a pre-trained HiFi-GAN vocoder [27]. TTS audio data were resampled to 22.05 kHz for Mel-Spectrogram computation, to be consistent with the settings of HiFi-GAN. The one-hot speaker embedding dimension is 16 in all TTS models. The coefficient \(\alpha\) for classifier guidance in the proposed model is set to 0.3. We also assigned an extra guidance weight of 5.0 to several selected phones that the target speaker had difficulty articulating. All TTS models were pre-trained on CUSENT for 700 epochs at batch size 32, then fine-tuned on the target speaker's data (original or voice-converted) for 5000 steps at batch size 16. A total of 30 sentences from the target speaker's data were held out for evaluation. The number of reverse steps for the proposed diffusion-based TTS model is 25 using the DPM solver [28].
Figure 1: Schematic diagram of the proposed guided speech synthesis process.
## 5 Results ### Objective evaluation We resynthesized all 377 sentences of the target speaker's recordings with the proposed system and the two baselines, and used the separately trained CTC-based ASR to evaluate the quality of articulation. The Phone Error Rate (PER%) results in Table 1 show that the proposed system GuidedDiff produces significantly better articulation than the voice in the original recordings and the DuriTaco baseline, though not as good as the VC baseline. By comparing the last two rows we can conclude that the improvement comes from the classifier guidance, rather than the diffusion model. ### Subjective evaluation The subjective evaluation was conducted through a web-based listening test. It consists of three parts. The first two parts were in MUSHRA-style format (no anchor, but a reference was given) [29]. 
Specifically, the same 30 sentences from the evaluation set were synthesized by the proposed and baseline voices. For each sentence, the three synthetic voices plus the original recordings from the target and source speaker formed five stimuli in total. In Part 1, recordings from the source speaker were provided as the reference, and listeners were asked to rate the _overall impression_ of each stimulus on a scale from 0 to 100. We avoided the term _naturalness_ as it might be confusing whether disordered speech from a natural person counts as natural. Part 2 was similar to Part 1, except that this time recordings from the target speaker were provided as the reference to evaluate speaker similarity. Part 3 used an A/B test format to examine whether, perceptually, the proposed GuidedDiff improves articulation over the DuriTaco baseline. We skipped the comparison between the proposed GuidedDiff and the VC baseline because results from the objective test clearly show that the VC baseline produces articulation nearly as good as natural speech. For the A/B test 15 sentences were selected from a script written by the target speaker. These sentences contain at least one word that the target speaker found difficult to articulate. The listening test was unpaid. In order to constrain the test duration to be within 15 minutes, we chose to randomly expose 10 out of 30 questions to the listeners in the first two parts. We received 87 effective responses after filtering out listeners who failed to spot the hidden reference. For the MUSHRA-style tests we performed a two-sided Wilcoxon signed-rank test on all pairs of stimuli, and corrected for multiple comparisons with the Holm-Bonferroni method. Results in Figure 2 reveal a similar pattern to the objective test: GuidedDiff improves the overall quality over the DuriTaco baseline and the target speaker's original recordings (\(p<0.001\)), though it is not as good as the VC baseline. One interesting finding is that even the DuriTaco baseline produces perceptually better speech than the original recordings, indicating that a shared phone embedding across speakers does help to alleviate the articulation disorder in the synthetic voice. Results in Figure 3 demonstrate the advantage of GuidedDiff over the VC baseline in terms of preserving the target speaker's individuality, with the voices in GuidedDiff and DuriTaco not significantly different from that in the original recordings (\(p>0.09\)), while the same conclusion does not apply to DuriTaco+VC (\(p<0.001\)). The preference test result given in Figure 4 further demonstrates that GuidedDiff synthesizes better articulation than the DuriTaco baseline. Readers are encouraged to visit [https://diffcorrect.github.io/](https://diffcorrect.github.io/) to listen to audio samples. ## 6 Conclusion and Discussion We introduced the use of classifier-guided diffusion models for the creation of personalized synthetic voices from post-glossectomy speech. Experimental results on a real patient case show that the synthetic voice can restore articulation for phones that the target speaker had difficulty articulating, and at the same time maximally preserve the target speaker's individuality. The proposed model takes around 1.4 seconds to synthesize 30 Cantonese characters on a single 2080Ti GPU. In the future we will extend the current model to take speech as an additional condition, aiming at correcting improper articulations in the recordings directly. 
In this way the modified recordings with correct articulation can be used to train other TTS models to enable faster inference speed. ## 7 Acknowledgements We thank all the participants in the listening test, for their kindness and contributions. Our deepest gratitude belongs to the young lady, who generously agreed to reveal her synthetic voice on the demo page. The first author is supported by the Hong Kong Ph.D. Fellowship Scheme of the Hong Kong Research Grants Council. \begin{table} \begin{tabular}{l c c} \hline \hline Case & Audio Source & PER\% \\ \hline \hline Recording\_S & Real & 11.2 \\ Recording\_T & Real & 43.1 \\ \hline DuriTaco & TTS & 37.6 \\ DuriTaco + VC & TTS & **14.9** \\ Diffusion & TTS & 36.4 \\ GuidedDiff & TTS & **22.1** \\ \hline \hline \end{tabular} \end{table} Table 1: PER(%) on resynthesized speech and the original recordings with the separately trained ASR model. Figure 4: Preference test demonstrating that GuidedDiff produces better articulation than the DuriTaco baseline. Figure 3: MUSHRA scores for speaker similarity. Mean value and 95% confidence interval are reported at the bottom. Figure 2: MUSHRA scores for overall impression. Mean value and 95% confidence interval are reported at the bottom.
2310.14389
Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models
The emotions we experience involve complex processes; besides physiological aspects, research in psychology has studied cognitive appraisals where people assess their situations subjectively, according to their own values (Scherer, 2005). Thus, the same situation can often result in different emotional experiences. While the detection of emotion is a well-established task, there is very limited work so far on the automatic prediction of cognitive appraisals. This work fills the gap by presenting CovidET-Appraisals, the most comprehensive dataset to-date that assesses 24 appraisal dimensions, each with a natural language rationale, across 241 Reddit posts. CovidET-Appraisals presents an ideal testbed to evaluate the ability of large language models -- excelling at a wide range of NLP tasks -- to automatically assess and explain cognitive appraisals. We found that while the best models are performant, open-sourced LLMs fall short at this task, presenting a new challenge in the future development of emotionally intelligent models. We release our dataset at https://github.com/honglizhan/CovidET-Appraisals-Public.
Hongli Zhan, Desmond C. Ong, Junyi Jessy Li
2023-10-22T19:12:17Z
http://arxiv.org/abs/2310.14389v1
# Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models ###### Abstract The emotions we experience involve complex processes; besides physiological aspects, research in psychology has studied _cognitive appraisals_ where people assess their situations subjectively, according to their own values (Scherer, 2005). Thus, the same situation can often result in different emotional experiences. While the _detection_ of emotion is a well-established task, there is very limited work so far on the automatic prediction of cognitive appraisals. This work fills the gap by presenting CovidET-Appraisals, the most comprehensive dataset to date that assesses 24 appraisal dimensions, each with a natural language rationale, across 241 Reddit posts. CovidET-Appraisals presents an ideal testbed to evaluate the ability of large language models -- excelling at a wide range of NLP tasks -- to automatically assess and explain cognitive appraisals. We found that while the best models are performant, open-sourced LLMs fall short at this task, presenting a new challenge in the future development of emotionally intelligent models. _We release our dataset at_ [https://github.com/honglizhan/CovidET-Appraisals-Public](https://github.com/honglizhan/CovidET-Appraisals-Public). ## 1 Introduction Emotions constitute a crucial aspect of people's lives, and understanding them has a profound impact on addressing public mental health problems as well as on policy-making (Choudhury and De, 2014; Gjurkovic and Snajder, 2018; Arora et al., 2021; Uban et al., 2021). The emotions we experience involve complex processes: the same situation can often result in different emotional experiences, based on an individual's subjective evaluations. These are called _cognitive appraisals_, and have been extensively studied in psychology through theoretical, behavioral, and hand-coded studies (Arnold, 1960; Lazarus, 1966; Lazarus et al., 1980; Roseman, 1984; Scherer et al., 1984; Smith and Ellsworth, 1985; Weiner, 1985; Clore and Ortony, 2000; Roseman and Smith, 2001; Scherer et al., 2001; Ellsworth and Scherer, 2003; Sander et al., 2005; Ong et al., 2015, 2019; Ortony et al., 2022; Yeo and Ong, 2023). For instance, being fired from a job, if judged to be due to one's own controllable mistakes, could result in regret; if evaluated to be unfair and due to someone else's intentional actions, would make one feel angry; and if appraised as an escape from a toxic work environment, could instead result in relief and even happiness. **The different dimensions along which people subjectively interpret or _appraise_ the situation characterize the specific emotions they feel** (Moors et al., 2013).
Figure 1: An example from CovidET-Appraisals. The fact that the narrator is blaming nobody but circumstances beyond anyone's control for causing long-COVID contributes to their feeling of _sadness_. We showcase an annotation together with LLMs' responses in Appendix §A.
Although emotion _detection_ is a well-established NLP task (Strapparava and Mihalcea, 2007; Mihalcea and Strapparava, 2012; Wang et al., 2012; Lei et al., 2014; Abdul-Mageed and Ungar, 2017; Khanpour and Caragea, 2018; Liu et al., 2019; Sosea and Caragea, 2020; Demszky et al., 2020; Desai et al., 2020; Sosea et al., 2022), it mostly involves classification from text to emotion labels directly, skipping the appraisal step that is necessary to interpret why the emotion is experienced by an individual in a particular event. 
Hence, we do not yet have a data-driven understanding of these cognitive appraisals in textual data. Yet recent work has started to show their necessity: Hofmann et al. (2020) showed that appraisals are informative for an emotion detection model; Zhan et al. (2022) further recognized appraisals to be an integral part of emotion triggers, though appraisals were not explicit in their work. This work aims at constructing an empirical, explicit understanding of _perceived_ cognitive appraisals in human readers and large language models (LLMs) alike, via a comprehensive set of \(24\) dimensions, along with their corresponding natural language rationales. A language model's capability of assessing cognitive appraisals reflects a more nuanced understanding of emotions, where it could contextualize individual subjectivity in responses to the same situation, while offering explanations ("they are feeling [_emotion_] because of [_appraisal_]"). This could be groundwork for emotional support agents, e.g., one capable of positive reframing (Ziems et al., 2022) or producing empathetic responses. We first introduce CovidET-Appraisals, a dataset of \(24\) appraisal dimensions annotated across \(241\) Reddit posts sourced from Zhan et al. (2022) about COVID-19. Each post was manually annotated with \(24\) appraisal dimensions from a recent meta-analysis covering all appraisal dimensions proposed and studied in the literature (Yeo and Ong, 2023). For each appraisal dimension, annotators not only rated the extent to which they perceived the narrator to be experiencing the said dimension, but also provided a _rationale_ in their own language to justify their rating selection. An example from CovidET-Appraisals is shown in Figure 1. CovidET-Appraisals serves as an ideal testbed to evaluate the capability of a model to uncover implicit information for emotion understanding. Benchmarking on CovidET-Appraisals, we evaluate the ability of LLMs to (1) provide Likert-scale ratings for the appraisal dimensions; and (2) generate natural language rationales for their ratings. The elicitation of the rationales can be seen as a way of probing (Le Scao and Rush, 2021; Gu et al., 2022), where we prefix a question with an elaborated situation. We evaluate a range of LLMs, including ChatGPT, Flan-T5 (Chung et al., 2022), Alpaca (Taori et al., 2023), and Dolly (Conover et al., 2023). With an extensive human evaluation of the natural language rationales from LLMs as well as our annotators, we find that ChatGPT performs on par with (and in some cases better than) human-annotated data; this opens a new avenue of investigation to improve its performance on emotion-related tasks (Kocon et al., 2023). In comparison, other open-sourced LLMs fall short on this task, presenting a new challenge in the future development of emotionally intelligent open models. _We publicly release our annotated dataset CovidET-Appraisals, model outputs, and our human evaluation data at_ [https://github.com/honglizhan/CovidET-Appraisals-Public](https://github.com/honglizhan/CovidET-Appraisals-Public). 
## 2 Background and Related Work Cognitive Appraisal Theories.The cognitive appraisal theories of emotion state that emotions arise from an individual's subjective understanding and interpretation of situations that hold personal importance for their overall well-being (Arnold, 1960; Lazarus, 1966; Lazarus et al., 1980; Roseman, 1984; Scherer et al., 1984; Smith and Ellsworth, 1985; Weiner, 1985; Clore and Ortony, 2000; Roseman and Smith, 2001; Scherer et al., 2001; Sander et al., 2005; Ortony et al., 2022). In practical terms, people interpret and appraise situations along a range of different dimensions, and it is the specific manner in which they appraise their situations that gives rise to the distinct emotions they experience. The primary focus of cognitive appraisal theories of emotions revolves around the identification of these appraisal dimensions that are associated with specific emotional experiences and how these dimensions contribute to distinguishing between different emotional states (Lazarus, 1993; Roseman, 1996; Scherer et al., 2001; Moors, 2010; Scherer and Moors, 2019). While appraisal theorists agree on the importance of motivationally-relevant appraisals in triggering emotions, they have not reached a consensus on the specific appraisal dimensions that play a significant role in this process (Yeo and Ong, 2023). Various theories have put forth distinct sets of appraisal dimensions that are considered crucial in triggering and distinguishing emotions. From prior literature, Yeo and Ong (2023) identified and assembled a taxonomy of all appraisal dimensions that have been studied, and produced a condensed list of \(24\) cognitive appraisal dimensions, which we focus on in this paper. Cognitive Appraisals in NLP.Appraisals provide the necessary computational structure allowing for the distillation of real-life situations that depend on a multitude of factors into a (large but) finite set of appraisal dimensions (Ong et al., 2015). Despite their importance, however, few works have explored the implications of cognitive appraisals on emotions in NLP. Hofmann et al. (2020) experimented with a small set of cognitive appraisal dimensions (including _attention_, _certainty_, _effort_, _pleasantness_, _responsibility_, _control_, and _circumstance_) to assist the automatic detection of emotions in text, and found that accurate predictions of appraisal dimensions boost emotion classification performance. They introduced a dataset of \(1,001\) sentences following the template "I feel [_emotion_], when..." (average sentence length: 27 tokens). In comparison, our work covers a much wider range of \(24\) appraisal dimensions found in prior literature, over lengthy (176 tokens on average) Reddit posts that were natural and emotionally charged. We also collect natural language rationales as a key contribution to reveal humans' in-depth understanding of such cognitive appraisals in context. Recent studies (Zhan et al., 2022; Sosea et al., 2023) acknowledged both _what happened and how one appraised the situation_ as inherent components of emotion triggers, although the appraisal of events was not explicit in their work. Instead, we provide datasets and perform evaluation on appraisals explicitly, such that language models can build on this work to achieve a comprehensive and explicit understanding of cognitive appraisals from written text. 
LLMs on Emotion-Related Tasks.Autoregressive LLMs have been explored extensively in emotion-related tasks such as sentiment analysis (Zhong et al., 2023; Qin et al., 2023; Susnjak, 2023), emotion recognition (Kocon et al., 2023), disclosing the representation of human emotions encapsulated in LLMs (Li et al., 2023), and interpreting mental health analysis (Yang et al., 2023). However, few have tapped into the understanding of cognitive appraisals of emotions innate in LLMs. In this work, we dive into the extent to which LLMs comprehend the profound cognitive appraisals underlying emotions in situations, and further elicit natural language rationales from the language models to disclose the reason behind such predictions from the otherwise baffling black-box LLMs (Gilpin et al., 2018). Aligning with Marasovic et al. (2020) who performed human evaluation on rationales generated by GPT, we additionally perform an in-depth human evaluation of the rationales from human annotators and LLMs alike on the novel task of providing natural language explanations for cognitive appraisals of situations that underlie narrators' emotional experiences. ## 3 The CovidET-Appraisals Dataset CovidET-Appraisals contains \(241\) Reddit posts sampled from the CovidET dataset (Zhan et al., 2022), where the Reddit posts are sourced from r/COVID19_support. Each post is manually annotated with one or more of the \(7\) emotions: _anger_, _anticipation_, _joy_, _trust_, _fear_, _sadness_, and _disgust_. The \(241\) posts in CovidET-Appraisals have an average of \(175.82\) tokens and \(2.67\) emotions per post. From Yeo and Ong (2023)'s work, we identify \(24\) cognitive emotion appraisal dimensions (Table 1). We provide the instructions given to the annotators (including the full questions for each of these 24 dimensions) in Appendix SSB. Annotators.We recruited \(2\) linguistics students at a university to work on our annotation task; both of them are native speakers of English. Both annotators underwent training using a set of posts already annotated by our group. Throughout the annotation, we monitored the inter-annotator agreement and provided feedback on their work. Instructions.Given a Reddit post from CovidET, annotators are asked to judge \(24\) emotion appraisal dimensions pertaining to how the narrator feels about and views the situation that they are going through (e.g., whether the narrator feels the situation they are in is something they could control). For each appraisal dimension, annotators need to select a Likert rating on the scales of \(1\) to \(9\). A _"not mentioned"_ (NA) option is provided in case the dimension being asked is absent in the given post. In addition, we also ask the annotators to provide rationales for their ratings in the form of _natural language explanations_. On average, our trained annotators spent around \(30\) minutes to complete the annotation of one post. Owing to the immense effort involved, we doubly annotate 40 posts to measure inter-annotator agreement while leaving the rest annotated by one annotator. Post-Processing and Aggregation.Given a fixed topic (COVID-19 in our case), it is highly likely that certain dimensions frequently don't apply [23]. This can be seen in Figure 2 which plots the percentage of NA labels: dimensions such as _civ_ (consistency with internal values), _fair_ (fairness), and _csn_ (consistency with social norms) contain mostly NA labels (around \(80\%\)). 
Therefore, we remove these dimensions from subsequent analyses and evaluations of the dataset. **This results in a total of 21 _applicable_ appraisal dimensions in CovidET-Appraisals.** We collected \(241\) posts in total. For the subset of \(40\) posts that are doubly annotated, we aggregate the Likert-scale ratings by taking the mean of each post's ratings for each appraisal dimension (if an annotator labels a dimension as NA, we exclude that annotator's rating for that dimension of the post). In terms of the rationales, we consider both rationales as ground truth references and use multi-reference metrics in our experiments. Inter-Annotator Agreement.We report inter-annotator agreement on the Likert-scale ratings. Since there is no reliable, automatic way to evaluate natural language rationales (as discussed in §4), we evaluate them with human validation in §7.2. To measure the agreement for selecting the NA label, we average the Fleiss' Kappa values [17, 16] across _all_ \(24\) appraisal dimensions, yielding a value of \(0.769\), indicating substantial agreement [1]. For the \(1\)-\(9\) Likert-scale ratings, we report on the \(21\) applicable dimensions: (1) Spearman's \(\rho\) between our two annotators, calculated per dimension then averaged across all dimensions; (2) Krippendorff's alpha (using interval distance) [18]; and (3) mean absolute difference (_abs. delta_). Here the agreement is calculated only if neither annotator gave an NA judgment. Krippendorff's alpha yields a value of \(0.647\), indicating substantial agreement [1]. The average Spearman's correlation is \(0.497\) with significance, and the absolute delta values also have a small mean of \(1.734\). These measures indicate that while the task is subjective, annotators do align with each other with only a small difference compared to the scale of ratings (\(1\)-\(9\)). Agreement values differ by dimension, which we showcase in Appendix C. ## 4 Dataset Analysis How do the scales distribute across dimensions and emotions? The distribution of the Likert-scale ratings is shown in Figure 3. \begin{table} \begin{tabular}{r l l} \hline \hline ID & Abbrv. & Reader-Friendly Labels \\ \hline 1 & _ssrp_ & _Self-responsibility_ \\ 2 & _orsp_ & _Other-responsibility_ \\ 3 & _crsp_ & _Circumstances-responsibility_ \\ 4 & _pfc_ & _Problem-focused coping_ \\ 5 & _griv_ & _Goal Relevance_ \\ 6 & _attn_ & _Attentional activity_ \\ 7 & _efc_ & _Emotion-focused coping_ \\ 8 & _scrl_ & _Self-Controllable_ \\ 9 & _ocrl_ & _Other-Controllable_ \\ 10 & _ccrl_ & _Circumstances-Controllable_ \\ 11 & _pnd_ & _Predictability_ \\ 12 & _thr_ & _Threat_ \\ 13 & _pls_ & _Pleasantness_ \\ 14 & _crt_ & _Certainty_ \\ 15 & _gcnd_ & _Goal Conduciveness_ \\ 16 & _fair_ & _Fairness_ \\ 17 & _fex_ & _Future expectancy_ \\ 18 & _csn_ & _Consistency with social norms_ \\ 19 & _loss_ & _Loss_ \\ 20 & _fml_ & _Familiarity_ \\ 21 & _eff_ & _Effort_ \\ 22 & _chl_ & _Challenge_ \\ 23 & _civ_ & _Consistency with internal values_ \\ 24 & _exp_ & _Expectedness_ \\ \hline \hline \end{tabular} \end{table} Table 1: The 24 appraisal dimensions and their abbreviations we used throughout this paper. See Appendix §B for full questions for each dimension, and Figure 1 for an example of how the items for 1: _self-responsibility_, 2: _other-responsibility_, 3: _circumstances-responsibility_, and 24: _expectedness_ were framed. Figure 2: Percentage of _“not mentioned”_ labels in each dimension in CovidET-Appraisals. 
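Before turning to the analysis, note that the inter-annotator agreement statistics reported above could be computed roughly as follows. The data layout (one list of 1-9 ratings per dimension and annotator, with NaN for NA) and the function name are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def agreement_stats(ratings_a, ratings_b):
    """ratings_a, ratings_b: dicts mapping dimension -> list of 1-9 ratings from each
    annotator, with NaN where that annotator chose 'not mentioned' (an assumed layout)."""
    rhos, deltas = [], []
    for dim in ratings_a:
        a = np.asarray(ratings_a[dim], dtype=float)
        b = np.asarray(ratings_b[dim], dtype=float)
        keep = ~np.isnan(a) & ~np.isnan(b)          # agreement only where neither gave NA
        if keep.sum() > 1:
            rho, _ = spearmanr(a[keep], b[keep])
            rhos.append(rho)
            deltas.append(np.mean(np.abs(a[keep] - b[keep])))
    return np.mean(rhos), np.mean(deltas)           # per-dimension averages

# Krippendorff's alpha with interval distance can be obtained with, e.g., the
# `krippendorff` package: krippendorff.alpha(reliability_data=[flat_a, flat_b],
#                                            level_of_measurement="interval")
```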
The ratings for some dimensions are consistent (e.g., dimensions _crsp_ (circumstances-responsibility), _ccrl_ (circumstances-controllable), and _chl_ (challenge)), whereas for some other dimensions, the ratings have higher variance (e.g., dimensions _ocrl_ (other-controllable) and _loss_). We analyze the connections between our Likert-scale annotations and CovidET's emotion annotations. Figure 4 shows the mean Likert-scale rating for each dimension within each post with respect to the perceived emotion. While it is evident that most dimensions show consistency (the posts are all related to COVID-19), some emotions stand out distinctly in particular dimensions. For example, _trust_ and _joy_ have higher Likert-scale ratings on dimensions _pfc_ (problem-focused coping) and _gcnd_ (goal conduciveness) compared to other emotions, suggesting a correlation between these appraisal dimensions and positive emotions. We further explore whether appraisal dimensions alone are indicative of perceived emotions already annotated in CovidET in Appendix §D.1. What are the characteristics of the natural language rationales? On average, each rationale is \(1.2\) sentences (std.dev \(=0.4\)) and \(28.9\) tokens (std.dev \(=10.0\)) long. Following Marfurt and Henderson (2021), we also measure the abstractiveness of the rationales from our human annotators by calculating the percentage of novel bigrams in the rationales with respect to the Reddit posts and instructions (i.e., evaluating a specific appraisal dimension) that the annotators were given. As shown in Table 4, our human annotators attain \(86.7\%\) novel bigrams, indicating high abstractiveness. We showcase the most prominent topics extracted from the annotated rationales using Latent Dirichlet Allocation (LDA) (Blei et al., 2003) in Appendix §D.2. Are rationales repetitive? We also look into automatic measures of similarity to assess how much rationales from different annotators, or from different dimensions/posts, differ from one another. Specifically, we calculate BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), and re-scaled BERTScore (Zhang et al., 2019) between our two annotators' rationales. We establish \(2\) random baselines for comparison: (1) rationales of the same dimension from different posts; (2) rationales from different dimensions within the same post. In each case we report similarity between \(3\) randomly sampled rationales and the annotated ones. Table 2 shows that the textual similarity in all conditions is somewhat low; the BLEU and ROUGE scores show that there is very little lexical overlap, although BERTScore shows higher semantic similarity between two annotators for the same dimension within the same post. Upon closer inspection, we observe that these commonly used automatic measures do not adequately capture semantic similarity in our dataset (see Appendix §D.3 for an example). This adds to the challenge of evaluating rationales; as a result, we resort to the human evaluation in §7.2. ## 5 Can LLMs understand emotional appraisals? 
CovidET-Appraisals provides an ideal testbed that evaluates models' performance on predicting both the Likert ratings and their natural language explanations. Using CovidET-Appraisals, we evaluate the zero-shot performance of LLMs in an attempt to evaluate their innate ability to comprehend emotional appraisals from social media text without in-context learning. \begin{table} \begin{tabular}{l||c|c|c} \hline \hline & \multicolumn{3}{c}{Rationale} \\ & BLEU-4 & ROUGE-L & BERTSc \\ \hline Annotators & \(0.042\) & \(0.253\) & **0.357** \\ Baseline-P & **0.060** & **0.261** & \(0.336\) \\ Baseline-D & \(0.059\) & \(0.247\) & \(0.332\) \\ \hline \hline \end{tabular} \end{table} Table 2: Automatic measures of similarity on the natural language rationales of CovidET-Appraisals. Baseline-P denotes “baseline (same dimension, _different posts_)”, and Baseline-D denotes “baseline (same post, _different dimensions_)”. Figure 4: Mean Likert-scale ratings for each dimension in each emotion. Figure 3: Distribution of the ratings for each dimension. Models.We evaluate the following instruction-tuned LLMs1: **1) ChatGPT**, i.e., GPT-3.5-Turbo; **2) FLAN-T5-XXL (11B)** (Chung et al., 2022), which is the instruction fine-tuned version of T5 (Raffel et al., 2020); **3) Alpaca (7B, 13B)** (Taori et al., 2023), which is fine-tuned from LLaMA (7B and 13B) (Touvron et al., 2023) on \(52\)K instruction-following examples created with GPT text-davinci-003 in the manner of self-instruct (Wang et al., 2022); **4) Dolly-V2 (7B, 12B)** (Conover et al., 2023), an instruction-tuned LLM trained on ~\(15\)k demonstrations consisting of both instructions and responses. Footnote 1: While we have also experimented with non-instruction-tuned LLMs (including GPT-3 davinci and LLaMA (7B and 13B)), they largely fail to generate sensible outputs for this task. We showcase examples of responses from non-instruction-tuned models in Appendix §A. For these reasons, we do not include their results in this paper. Prompts and Setup.The templates for prompting the LLMs are shown in Appendix Figure 17. After extensive experimentation, we found that only ChatGPT is able to generate both a rating and a rationale with a single prompt; this type of "1-step" prompting leads to ill-formed responses for other models. Thus, for models other than ChatGPT, we instead use a pipeline or "2-step" prompting similar to the strategy used in Press et al. (2022): we first elicit the rating for the appraisal dimension, then, conditioned on the response for the rating, we further elicit the rationale for the selection. We carry out all our experiments on \(4\) Nvidia A\(40\) GPUs. We use the HuggingFace Transformers (Wolf et al., 2020) library for model inference. We set the temperature value of all models to \(0.1\).2 To enable a fair comparison of models, we sample from the LLMs five times with different model initializations and report average values for both scales and rationales. Footnote 2: We experimented with higher temperatures on a validation set consisting of 10 Reddit posts annotated by our group which are not included in CovidET-Appraisals, and the models yielded worse and more unstable performance. ## 6 Evaluation: Likert-Scale Ratings We report model performance for Likert-scale ratings on the \(21\) _applicable_ dimensions using two standard regression metrics: Mean Absolute Error (MAE) and Spearman's correlation. We treat the selection of the NA labels as a binary classification task and report F1 measures across _all_ \(24\) dimensions. 
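A sketch of how these Likert-scale and NA metrics can be computed against the gold annotations is given below; the list-of-ratings layout (with `None` marking an NA judgment) and the function name are our own assumptions rather than the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import f1_score

def score_model(gold, pred):
    """gold, pred: aligned lists of per-(post, dimension) ratings; None marks NA."""
    na_f1 = f1_score([g is None for g in gold], [p is None for p in pred])

    pairs = [(g, p) for g, p in zip(gold, pred) if g is not None and p is not None]
    g, p = (np.array(v, dtype=float) for v in zip(*pairs))
    mae = float(np.mean(np.abs(g - p)))
    rho, pval = spearmanr(g, p)
    return {"MAE": mae, "Spearman": rho, "p-value": pval, "NA-F1": na_f1}
```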
For the \(40\) gold examples that were doubly annotated by human annotators, we consider a dimension as NA when both annotators select the label. Results.To evaluate the performance, we clean the responses elicited from the LLMs. Specifically, we use regular expressions to extract the first numeric value ranging from \(1\)-\(9\) from the scale responses3. The results of the models' performance are shown in Table 3. We showcase examples of the models' responses in Appendix §A. Additional analyses of the LLMs' responses are shown in Appendix §G. Footnote 3: For example, one of Alpaca-7B’s scale responses is “_The narrator thought that Circumstances Beyond Anyone’s Control were responsible for causing the situation to a moderate extent (4 on a scale of 1-9)._”. After cleaning, the response is formatted to “_4_”. For the NA labels (Table 3, right), ChatGPT and Alpaca-7B score the highest with an F1 of \(0.918\). In general, the average performance across the language models we evaluate is \(0.774\) for F1, indicating these models are performant at predicting whether a dimension applies. For the Likert-rating predictions, results show that ChatGPT-3.5 consistently yields the highest performance compared to the other language models, with a significant Spearman's correlation of \(0.388\) and an MAE of \(1.694\). We note that FLAN-T5-XXL is the second best-performing model. Alpaca and Dolly perform poorly on our task, with negative correlations with the gold labels4. \begin{table} \begin{tabular}{l||c|c||c} \hline \hline & \multicolumn{2}{c||}{Scale} & NA \\ & MAE & Spearman’s \(\rho\) & F1 \\ \hline ChatGPT & **1.694** & **0.388\({}^{\dagger\dagger}\)** & **0.918** \\ Flan-T5 & \(3.266\) & \(0.225^{\dagger}\) & \(0.852\) \\ Alpaca-7b & \(2.353\) & \(0.081\) & **0.918** \\ Alpaca-13b & \(3.872\) & \(-0.035\) & \(0.602\) \\ Dolly-7b & \(2.812\) & \(-0.013\) & \(0.645\) \\ Dolly-12b & \(2.747\) & \(0.022\) & \(0.711\) \\ \hline \hline \end{tabular} \end{table} Table 3: Experiment results from LLMs. \({}^{\dagger}\) indicates \(p<0.1\) for Spearman correlation, and \({}^{\dagger\dagger}\) indicates \(p<0.05\). In addition, we also provide the results of the F1 score on measuring the agreement between the models’ ratings and the gold ratings for selecting the “_not mentioned_” label across _all_ \(24\) dimensions. Interestingly, we notice a drop in performance when the size of the model parameters increases for Alpaca. The results highlight the challenging nature of our task, and the gap between open-sourced LLMs and ChatGPT [1]. Footnote 1: [https://github.com/google-learning/](https://github.com/google-learning/) Additionally, we also measure the systems' performance on all \(24\) appraisal dimensions, including the \(3\) appraisal dimensions where the NA rates are around \(80\%\). Results revealed marginal change in performance across all LLMs. For most LLMs the performance dropped as expected: measured with Spearman's \(\rho\), ChatGPT-3.5 (\(\downarrow 0.018\)), Alpaca-7B (\(\downarrow 0.008\)), and Dolly-12B (\(\downarrow 0.007\)). On the other hand, the performance of FLAN-T5 (\(\uparrow 0.005\)), Alpaca-13B (\(\uparrow 0.027\)), and Dolly-7B (\(\uparrow 0.020\)) increased. ## 7 Evaluation: Rationales As rationalizing emotional appraisals with natural language is a novel task, we perform both automatic (§7.1) and human evaluation (§7.2). 
## 7 Evaluation: Rationales

As rationalizing emotional appraisals with natural language is a novel task, we perform both automatic (§7.1) and human evaluation (§7.2).

### Automatic Evaluation

We use commonly used automatic reference-based metrics including BLEU [11], ROUGE [15], and BERTScore [13], comparing generated rationales vs. annotated ones (in a multi-reference fashion).

**Results.** Similar to the performance in selecting Likert-scale ratings, ChatGPT remains the best-performing language model in providing natural language rationales (Table 4). The values ChatGPT achieves are lower than, though comparable to, those between different rationales from our two annotators. Alpaca-7B also achieves comparable performance in these automatic measures, despite its relatively poor capability in terms of selecting Likert-scale ratings. We note that FLAN-T5 lags behind considerably compared to ChatGPT and Alpaca-7B. We provide the additional auto-evaluation statistics for other LLMs including Dolly-7B, Dolly-12B, and Alpaca-13B in Appendix Table 11.

**How long and how abstractive are the rationales generated by LLMs?** In addition, we also measure the length and abstractiveness of the rationales generated by LLMs. Following the setup in §4, we evaluate abstractiveness using the % of novel bigrams, comparing LLMs' generated rationales against the Reddit posts as well as the prompts (i.e., evaluating a specific appraisal dimension) they were given. As shown in Table 4, rationales generated by LLMs are at least \(1.5\)x longer than those provided by our annotators, with ChatGPT being the most verbose. The LLMs also provide rationales that are more extractive compared to our annotators, with FLAN-T5 being the most extractive.

\begin{table} \begin{tabular}{l||c|c||c|c|c||c|c|c|c} \hline \hline & Length & Abstractiveness & \multicolumn{3}{c||}{Auto Eval} & \multicolumn{4}{c}{Human Eval} \\ & \# Tokens & \%Novel Bigrams & BLEU-4 & ROUGE-L & BERTSc & FAC & REL & JUS & USE \\ \hline Annotators & **28.9** & **86.7\%** & \multicolumn{3}{c||}{——} & \(0.73\) & **0.88** & **0.95** & 0.72 \\ \hline ChatGPT & \(58.0\) & \(81.8\%\) & **0.044** & \(0.224\) & **0.347** & **0.84** & **0.88** & \(0.93\) & **0.85** \\ Flan-T5 & \(45.3\) & \(16.0\%\) & \(0.008\) & \(0.066\) & \(0.053\) & \(0.40\) & \(0.29\) & \(0.24\) & \(0.13\) \\ Alpaca-7B & \(48.6\) & \(71.9\%\) & \(0.040\) & **0.230** & \(0.297\) & \(0.55\) & \(0.82\) & \(0.82\) & \(0.51\) \\ \hline \hline \end{tabular} \end{table} Table 4: Experiment results from LLMs. Additional evaluations of _all_ language models (including Alpaca-13B, Dolly-7B, and Dolly-12B) are provided in Table 11. A more comprehensive report of the automatic metrics BLEU-4, ROUGE-L, and BERTScore is provided in Table 9, Appendix §F.

### Human Evaluation

**Data.** Because the natural language rationales are explanations for a particular rating, we only evaluate and analyze LLM-generated rationales when the model made a near-correct prediction of the Likert-scale rating for that particular dimension compared against the gold human ratings. Specifically, we sample the _intersection_ of (post, dimension) tuples where the ratings of the \(3\)_best-performing_ LLMs (i.e., ChatGPT, FLAN-T5, and Alpaca-7B) fall within an absolute difference of \(1\) from one of the annotated scale ratings (a selection sketch is given below). In cases where there are \(2\) gold annotations for a particular dimension, both are evaluated. In Appendix §F we also show the human evaluation of rationales for such an intersection of _all_ LLMs. We additionally evaluate **human-written rationales** as well, and we mix those (in random order) with LLMs' responses.
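The selection sketch referenced above might look as follows; the data layout (dictionaries keyed by post and dimension) is hypothetical and is only meant to make the filtering criterion explicit.

```python
# Sketch of selecting (post, dimension) tuples for human evaluation:
# keep tuples where the three best-performing LLMs are all "near-correct"
# (within +/-1 of some gold rating). Data structures are illustrative.
BEST_LLMS = ("chatgpt", "flan-t5-xxl", "alpaca-7b")

def near_correct(pred, gold_ratings, tol=1):
    return any(g is not None and abs(pred - g) <= tol for g in gold_ratings)

def select_eval_tuples(examples):
    selected = []
    for ex in examples:                                   # one annotated Reddit post
        for dim, gold in ex["gold_ratings"].items():      # gold: list of 1-2 ratings
            preds = [ex["llm_ratings"][m].get(dim) for m in BEST_LLMS]
            if all(p is not None and near_correct(p, gold) for p in preds):
                selected.append((ex["post_id"], dim))
    return selected
```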
The above desiderata result in an evaluation set of \(108\) rationales annotated by human annotators and \(65\) natural language rationales from each LLM. The evaluation covers \(19\) out of the \(21\) applicable dimensions (no such overlap is found for dimensions _crsp_ (_circumstances-responsibility_) and _pls_ (_pleasantness_)). Moreover, we make sure that none of the included gold annotations has a rating of NA.

**Instructions.** Given a Reddit post and the scale provided by the human annotators or the LLM (blinded to the annotators), annotators are asked to judge the rationales pertaining to the emotion appraisal dimension regarding the post as well as the stated scale. The rationales are distributed to annotators at random. We evaluate the natural language rationales based on the following criteria. In Appendix §H, we provide the detailed instructions and examples given to the annotators, together with the layout of the human evaluation task.

_1) Factuality_: The model may generate rationales that are not factual: sometimes it generates rationales for the sole purpose of justifying its answer [22]. Therefore, we include the aspect of _hallucination and factuality_ as one of our evaluation criteria, and ask evaluators whether the rationale faithfully reflects what's stated in the post. Options of "_Yes_", "_Minor Error_", and "_No_" are provided.

_2) Relevance_: We evaluate whether the rationale directly addresses the specific appraisal dimension question that is being asked about the post. We ask evaluators, on a Likert scale of \(1\) to \(5\) with \(1\) being "_least relevant_" and \(5\) being "_most relevant_", whether the rationale focuses on the specific aspect of the post that is being appraised, and whether it strays off-topic or provides irrelevant information.

_3) Justification_: We ask human evaluators whether the rationale justifies the selected scale by adequately explaining why the selected rating scale is the most appropriate or relevant one to use for the aspect being evaluated. Annotators need to select either "_Yes_" or "_No_".

_4) Usefulness_: Finally, we evaluate whether the rationale provides useful or informative insights or explanations of useful information pertaining to the appraisal dimension being judged. Options of "_Yes_", "_Maybe_", and "_No_" can be selected.

**Annotators.** We recruit annotators from Amazon Mechanical Turk (MTurk) to work on our human evaluation task. The crowd workers were involved in a pre-annotation _qualification as well as training_ process before commencing the evaluation of the natural language rationales. We assign \(2\) crowd workers per natural language rationale evaluation. We ensure that the crowd workers earn a minimum wage of \(\$10\) per hour. We report the inter-evaluator agreement using Krippendorff's Alpha with interval distance in Table 5, showing substantial agreement [1] across all criteria.

**Label Transformation.** For the convenience of measuring inter-annotator agreement as well as interpreting the results, we convert the labels of each criterion to numeric values within the range of \(0\) to \(1\). Specifically, for the criteria _Factuality_, _Justification_, and _Usefulness_, "_Yes_" is converted to \(1\), "_Minor Error_"/"_Maybe_" to \(0.5\), and "_No_" to \(0\). As for the criterion _Relevance_, which is judged on a 5-point Likert rating, we map the Likert scale of \(1\) into \(0\), \(2\) into \(0.25\), \(3\) into \(0.5\), \(4\) into \(0.75\), and \(5\) into \(1\).
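The label transformation is simple enough to state directly in code; the mapping below follows the text exactly, while the function signature itself is illustrative.

```python
# Map human-evaluation labels onto [0, 1], as described above.
CATEGORICAL = {"Yes": 1.0, "Minor Error": 0.5, "Maybe": 0.5, "No": 0.0}
RELEVANCE = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def to_numeric(criterion: str, label):
    if criterion == "Relevance":        # judged on a 5-point Likert scale
        return RELEVANCE[int(label)]
    return CATEGORICAL[label]           # Factuality / Justification / Usefulness
```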
**Results.** The mean ratings for each criterion from the human evaluation task are provided in Table 4. We provide box plots of the ratings as well as the human evaluation results for the rationales from all \(6\) LLMs in Appendix §F. From Table 4 we observe that, according to human evaluators, our human annotators and ChatGPT provide natural language rationales of the highest quality among all models. Surprisingly, we find that ChatGPT performs on par with our human annotators, with (slightly) better performance in terms of _factuality_ and _usefulness_. This can be attributed to the verbosity and extractiveness of ChatGPT (as shown in Table 4), especially in dimensions where the scale rating is low. We showcase an example in Appendix §I. Alpaca-7B attains lower results compared to the other LLMs, especially in terms of the criteria _factuality_ and _usefulness_. FLAN-T5, on the other hand, ranks the worst on all criteria among the LLMs. Further analysis reveals that FLAN-T5 occasionally generates responses for natural language rationales that are the same as its scale answers, resulting in irrelevant and useless rationales.

## 8 Conclusion

To achieve a more accurate and holistic understanding of emotions from written text, NLP models need to work towards understanding the subjective cognitive appraisals of emotions underlying situations. In this work, we construe an empirical and explicit understanding of _perceived_ cognitive appraisals in human readers and LLMs alike. We present CovidET-Appraisals, a dataset of \(241\) Reddit posts annotated with a comprehensive range of \(24\) subjective cognitive appraisals that follow a situation, along with their corresponding natural language rationales. Experiments reveal that CovidET-Appraisals is a vital resource to evaluate the capability of a language model to uncover implicit information for emotional understanding. Our thorough evaluation of LLMs' performance on assessing emotion appraisal dimensions emphasizes that CovidET-Appraisals is a challenging benchmark, and our in-depth human evaluation of the natural language rationales indicates potential areas of improvement (e.g., improving the _factuality_ and _usefulness_ of the rationales) for open-source LLMs.

## Limitations

This work presents a new dataset entitled CovidET-Appraisals to evaluate LLMs' capability in cognitive emotion appraisals. Due to the highly demanding nature of our task (e.g., the same situation can result in different subjective evaluations), CovidET-Appraisals is annotated by 2 annotators. Future work can explore a larger pool of annotators. Furthermore, it should be acknowledged that CovidET-Appraisals is restricted to social media posts during the COVID-19 pandemic, and the posts are written solely in English. This makes it challenging to evaluate LLMs' ability in other domains as well as languages. Also, we note that the appraisals we collect are from the _perceived_ end, and are not subjective appraisals from the narrators and authors themselves. We note that the size of CovidET-Appraisals is relatively small. We have not intended this dataset to be one for supervised model training but rather a very high-quality dataset for evaluation (since this is the first dataset of its kind). A key reason is that the collection of appraisal annotations is both challenging and time-consuming: we have \(24\) dimensions to analyze per post, and the annotation for one post for one trained annotator takes half an hour.
Future work may establish the validity of training data obtained from LLMs, and explore approaches such as distillation. In addition, we experiment with LLMs under a zero-shot setup only, while we highlight that this is the first work towards the assessment of cognitive appraisals of emotions in language models, and it lays the foundation for future research on deciphering the intrinsic emotional dynamics that remain unexplored in current state-of-the-art models. We believe that this warrants a careful construction of the dataset with thorough analysis; and we leave these interesting engineering questions to future work. ## Acknowledgements This research was partially supported by National Science Foundation (NSF) grant IIS-2107524. We thank Kathryn Kazanas and Keziah Kaylyn Reina for their dedication and hard work on the annotation of CovidET-Appraisals. We also thank our reviewers for their insightful feedback and comments.
2307.06342
ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image Compression
Over the last few years, neural image compression has gained wide attention from research and industry, yielding promising end-to-end deep neural codecs outperforming their conventional counterparts in rate-distortion performance. Despite significant advancement, current methods, including attention-based transform coding, still need to be improved in reducing the coding rate while preserving the reconstruction fidelity, especially in non-homogeneous textured image areas. Those models also require more parameters and a higher decoding time. To tackle the above challenges, we propose ConvNeXt-ChARM, an efficient ConvNeXt-based transform coding framework, paired with a compute-efficient channel-wise auto-regressive prior to capturing both global and local contexts from the hyper and quantized latent representations. The proposed architecture can be optimized end-to-end to fully exploit the context information and extract compact latent representation while reconstructing higher-quality images. Experimental results on four widely-used datasets showed that ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions estimated on average to 5.24% and 1.22% over the versatile video coding (VVC) reference encoder (VTM-18.0) and the state-of-the-art learned image compression method SwinT-ChARM, respectively. Moreover, we provide model scaling studies to verify the computational efficiency of our approach and conduct several objective and subjective analyses to bring to the fore the performance gap between the next generation ConvNet, namely ConvNeXt, and Swin Transformer.
Ahmed Ghorbel, Wassim Hamidouche, Luce Morin
2023-07-12T11:45:54Z
http://arxiv.org/abs/2307.06342v1
# ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image Compression ###### Abstract Over the last few years, neural image compression has gained wide attention from research and industry, yielding promising end-to-end deep neural codecs outperforming their conventional counterparts in rate-distortion performance. Despite significant advancement, current methods, including attention-based transform coding, still need to be improved in reducing the coding rate while preserving the reconstruction fidelity, especially in non-homogeneous textured image areas. Those models also require more parameters and a higher decoding time. To tackle the above challenges, we propose ConvNeXt-ChARM, an efficient ConvNeXt-based transform coding framework, paired with a compute-efficient channel-wise auto-regressive prior to capture both global and local contexts from the hyper and quantized latent representations. The proposed architecture can be optimized end-to-end to fully exploit the context information and extract a compact latent representation while reconstructing higher-quality images. Experimental results on four widely-used datasets showed that ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions estimated on average at \(5.24\%\) and \(1.22\%\) over the versatile video coding (VVC) reference encoder (VTM-18.0) and the state-of-the-art learned image compression method SwinT-ChARM, respectively. Moreover, we provide model scaling studies to verify the computational efficiency of our approach and conduct several objective and subjective analyses to bring to the fore the performance gap between the next-generation ConvNet, namely ConvNeXt, and Swin Transformer. All materials, including the source code of SwinT-ChARM, will be made publicly accessible upon acceptance for reproducible research. ## I Introduction Visual information is crucial in human development, communication, and engagement, and its compression is necessary for effective storage and transmission over constrained wireless/wireline channels. Thus, thinking about new lossy image compression approaches is a goldmine for scientific research. The goal is to reduce an image file size by permanently removing less critical information, particularly redundant data and high frequencies, to obtain the most compact bit-stream representation while preserving a certain level of visual fidelity. Nevertheless, a high compression rate and low distortion are fundamentally opposing objectives, involving optimizing the rate-distortion tradeoff. Conventional image and video compression standards, including JPEG [1], JPEG2000 [2], H.265/high-efficiency video coding (HEVC) [3], and H.266/VVC [4], rely on hand-crafted creativity to present a module-based encoder/decoder block diagram. In addition, these codecs employ intra-prediction, fixed transform matrices, quantization, context-adaptive arithmetic coders, and various in-loop filters to reduce spatial and statistical redundancies and alleviate coding artifacts. However, it has taken several years to standardize a conventional codec. Moreover, existing image compression standards are not anticipated to be an ideal and global solution for all types of image content due to the rapid development of new image formats and the growth of high-resolution mobile devices. Lossy image compression consists of three modular parts: transform, quantization, and entropy coding.
Each of these components can be represented as follows: i) autoencoders as flexible nonlinear transforms, where the encoder (i.e., analysis transform) extracts a latent representation from an input image and the decoder (i.e., synthesis transform) reconstructs the image from the decoded latent, ii) various differentiable quantization approaches which encode the latent into a bitstream through arithmetic coding algorithms, iii) deep generative models as potent learnable entropy models estimating the conditional probability distribution of the latent to reduce the rate. Moreover, these three components can be optimized with end-to-end training by reducing the joint loss of the distortion between the original image and its reconstruction and the rate needed to transmit the bitstream of the latent representation. Thanks to recent advances in deep learning, we have seen many works exploring the potential of artificial neural networks (ANNs) to form various learned image and video compression frameworks.

Fig. 1: BD-rate (%) versus decoding time (ms) on the Kodak dataset. Left-top is better. Star and diamond markers refer to decoding on GPU and CPU, respectively.

Over the past two years, the performance of neural compression has steadily improved thanks to the prior line of study, reaching or outperforming state-of-the-art conventional codecs. Some previous works use local context [5, 6, 7] or additional side information [8, 9, 10] to capture short-range spatial dependencies, and others use non-local mechanisms [11, 12, 13, 14] to capture long-range spatial dependencies. Recently, Toderici _et al_. [15] proposed a generative compression method achieving high-quality reconstructions; Minnen _et al_. [16] introduced channel-conditioning and latent residual prediction, taking advantage of an entropy-constrained model that uses both forward and backward adaptations; Zhu _et al_. [17] replaced all convolutions in the channel-wise auto-regressive model (ChARM) prior approach [16] with Swin Transformer [18] blocks; Zou _et al_. [19] combined the local-aware attention mechanism with global-related feature learning and proposed a window-based attention module; Koyuncu _et al_. [20] proposed a Transformer-based context model, which generalizes the standard attention mechanism to spatio-channel attention; Zhu _et al_. [21] proposed a probabilistic vector quantization with cascaded estimation under a multi-codebook structure; Kim _et al_. [22] exploited the joint global and local hyperprior information in a content-dependent manner using an attention mechanism; and He _et al_. [23] adopted stacked residual blocks as the nonlinear transform and a multi-dimension entropy estimation model. One of the main challenges of learned transform coding is the ability to identify the crucial information necessary for the reconstruction, knowing that information overlooked during encoding is usually lost and unrecoverable for decoding. Another main challenge is the tradeoff between performance and decoding speed. While the existing approaches improve the transform and entropy coding accuracy, they remain limited by higher decoding runtime and excessive model complexity, leading to ineffective real-world use. Finally, we found that attention-based networks taking advantage of attention mechanisms to capture global dependencies, such as Swin Transformer [18], produce over-smoothed reconstructions that contain undesirable artifacts at low bitrates.
Furthermore, the global semantic information in image compression is less effective than in other computer vision tasks [19]. In this paper, we propose a nonlinear transform built on ConvNeXt blocks with additional down and up sampling layers and paired with a ChARM prior, namely ConvNeXt-ChARM. Recently proposed in [24], ConvNeXt is defined as a modernized ResNet architecture toward the design of a vision Transformer, which competes favorably with Transformers in terms of efficiency, achieving state-of-the-art on ImageNet classification task [25] and outperforming Swin Transformer on COCO detection [26] and ADE20K segmentation [27] challenges while maintaining the maturity and simplicity of convolutional neural networks (ConvNets) [24]. The contributions of this paper are summarized as follows: * We propose a learned image compression model that leverages a stack of ConvNeXt blocks with down and up-sampling layers for extracting contextualized and non-linear information for effective latent decorrelation. We maintain the convolution strengths like sliding window strategy for computations sharing, translation equivariance as a built-in inductive bias, and the local nature of features, which are intrinsic to providing a better spatial representation. * We apply ConvNeXt-based transform coding layers for generating and decoding both latent and hyper-latent to consciously and subtly balance the importance of feature compression through the end-to-end learning framework. * We conduct experiments on four widely-used evaluation datasets to explore possible coding gain sources and demonstrate the effectiveness of ConvNeXt-ChARM. In addition, we carried out a model scaling analysis to compare the complexity of ConvNeXt and Swin Transformer. Extensive experiments validate that the proposed ConvNeXt-ChARM achieves state-of-the-art compression performance, as illustrated in Figure 1, outperforming conventional and learned image compression methods in the tradeoff between coding efficiency and decoder complexity. The rest of this paper is organized as follows. Section II presents our overall framework along with a detailed description of the proposed architecture. Next, we dedicate Section III to describe and analyze the experimental results. Finally, Section IV concludes the paper. ## II Proposed ConvNeXt-ChARM Model ### _Problem Formulation_ The objective of learned image compression is to minimize the distortion between the original image and its reconstruction under a specific distortion-controlling hyper-parameter. Assuming an input image \(\mathbf{x}\), the analysis transform \(g_{a}\), with parameter \(\phi_{g}\), removes the image spatial redundancies and generates the latent representation \(\mathbf{y}\). Then, this latent is quantized to the discrete code \(\hat{\mathbf{y}}\) using the quantization operator \(\lceil.\rfloor\), from which a synthesis transform \(g_{s}\), with parameter \(\theta_{g}\), reconstructs the image denoted by \(\hat{\mathbf{x}}\). The overall process can be formulated as follows: \[\mathbf{y} =g_{a}(\mathbf{x}\mid\phi_{g}), \tag{1}\] \[\hat{\mathbf{y}} =\lceil\mathbf{y}\rfloor,\] \[\hat{\mathbf{x}} =g_{s}(\hat{\mathbf{y}}\mid\theta_{g}).\] A hyperprior model composed of a hyper-analysis and hyper-synthesis transforms \((h_{a},h_{s})\) with parameters \((\phi_{h},\theta_{h})\) is usually used to reduce the statistical redundancy among latent variables. 
In particular, this hyperprior model assigns a few extra bits as side information to transmit some spatial structure information and helps to learn an accurate entropy model. The hyperprior generation can be summarized as follows: \[\mathbf{z} =h_{a}(\mathbf{y}\mid\phi_{h}), \tag{2}\] \[\hat{\mathbf{z}} =\lceil\mathbf{z}\rfloor,\] \[p_{\hat{\mathbf{y}}|\hat{\mathbf{z}}}(\hat{\mathbf{y}}\mid\hat{\mathbf{z}}) \gets h_{s}(\hat{\mathbf{z}}\mid\theta_{h}).\] Transform and quantization introduce a distortion \(D=MSE(\mathbf{x},\hat{\mathbf{x}})\) for mean squared error (MSE) optimization, which measures the reconstruction quality, together with an estimated bitrate \(R\) corresponding to the expected rate of the quantized latents and hyper-latents, as described below: \[R=\mathbb{E}\left[-\log_{2}(p_{\hat{\mathbf{y}}|\hat{\mathbf{z}}}(\hat{\mathbf{y}}\mid\hat{\mathbf{z}}))-\log_{2}(p_{\hat{\mathbf{z}}}(\hat{\mathbf{z}}))\right]. \tag{3}\] Representing \((g_{a},g_{s})\), \((h_{a},h_{s})\), and the entropy model by deep neural networks (DNNs) enables jointly optimizing the end-to-end model by minimizing the rate-distortion tradeoff \(\mathcal{L}\), given a rate-controlling hyper-parameter \(\lambda\). This optimization problem can be presented as follows: \[\mathcal{L}=R+\lambda D=\underbrace{\mathbb{H}(\hat{\mathbf{y}})+\mathbb{H}(\hat{\mathbf{z}})}_{R}+\lambda\,MSE(\mathbf{x},\hat{\mathbf{x}}), \tag{4}\] where \(\mathbb{H}\) stands for the entropy. ### _ConvNeXt-ChARM network architecture_ To better parameterize the distributions of the quantized latent features with a more accurate and flexible entropy model, we adopted the ChARM prior approach proposed in [16] to build an efficient ConvNeXt-based learned image compression model with strong compression performance. As shown in Figure 2, the analysis/synthesis transforms \((g_{a},g_{s})\) of our design consist of a combination of down- and up-sampling blocks and ConvNeXt encoding/decoding blocks [24], respectively. Down- and up-sampling blocks are performed using Conv2D and Normalization layers sequentially. The architectures of the hyper-transforms \((h_{a},h_{s})\) are similar to \((g_{a},g_{s})\) with different stages and configurations. ### _ConvNeXt design description_ Globally, ConvNeXt incorporates a series of architectural choices from a Swin Transformer while maintaining the network's simplicity as a standard ConvNet without introducing any attention-based modules. These design decisions can be summarized as follows: macro design, ResNeXt's grouped convolution, inverted bottleneck, large kernel size, and various layer-wise micro designs. In Figure 2, we illustrate the ConvNeXt block, where DConv2D(.) refers to a depthwise 2D convolution, LayerNorm to layer normalization, Dense(.) to a densely-connected NN layer, and GELU to the activation function. **Macro design**: The stage compute ratio is adjusted from (3, 4, 6, 3) in ResNet-50 to (3, 3, 9, 3), which also aligns the FLOPs with Swin-T. In addition, the ResNet-style stem cell is replaced with a patchify layer implemented using a 2\(\times\)2, stride-2 non-overlapping convolutional layer with an additional normalization layer to help stabilize the training. In the ConvNeXt-ChARM design, we adopted (3, 3, 9, 3) and (5, 1) as the stage compute ratios for transforms and hyper-transforms, respectively.
Fig. 2: Overall ConvNeXt-ChARM Framework. We illustrate the image compression diagram of our ConvNeXt-ChARM with hyperprior and channel-wise auto-regressive context model. We also present the ConvNeXt block used in both transform and hyper-transform coding for an end-to-end feature aggregation.

**Depthwise convolution**: The ConvNeXt block uses a depthwise convolution, a special case of grouped convolution used in ResNeXt [28], where the number of groups is equal to the number of channels. This is similar to the weighted sum operation in self-attention, which operates by mixing information only in the spatial dimension. **Inverted bottleneck**: Similar to Transformers, ConvNeXt is designed with an inverted bottleneck block, where the hidden dimension of the residual block is four times wider than the input dimension. As illustrated in the ConvNeXt block of Figure 2, the first dense layer is 4 times wider than the second one. **Large kernel**: One of the most distinguishing aspects of Swin Transformers is their local window in the self-attention block. The information is propagated across windows, which enables each layer to have a global receptive field. The local window is at least 7\(\times\)7, which is still larger than the 3\(\times\)3 ResNeXt kernel size. Therefore, ConvNeXt adopted large kernel-sized convolutions by using a 7\(\times\)7 depthwise 2D convolution layer in each block. This allows our ConvNeXt-ChARM model to capture global contexts in both latents and hyper-latents, which is intrinsic to providing a better spatial representation. **Micro design**: In ConvNeXt's micro design, several per-layer enhancements are applied in each block: a single Gaussian error linear unit (GELU) activation function (instead of numerous ReLU activations), a single LayerNorm as the normalization choice (instead of numerous BatchNorm layers), and separate down-sampling layers between stages.
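Putting these design choices together, a rough TensorFlow/Keras sketch of one such block is given below. It follows the textual description (7\(\times\)7 depthwise convolution, LayerNorm, 4\(\times\) inverted bottleneck with GELU, residual connection) and is a reconstruction rather than the released implementation; initialization and any layer-scale or stochastic-depth details are omitted.

```python
# Rough sketch of a ConvNeXt block as described in Section II-C (TensorFlow/Keras).
import tensorflow as tf

class ConvNeXtBlock(tf.keras.layers.Layer):
    def __init__(self, dim, **kwargs):
        super().__init__(**kwargs)
        # large-kernel depthwise 2D convolution (7x7)
        self.dwconv = tf.keras.layers.DepthwiseConv2D(kernel_size=7, padding="same")
        self.norm = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        # inverted bottleneck: the hidden width is 4x the block dimension
        self.pw1 = tf.keras.layers.Dense(4 * dim)
        self.act = tf.keras.layers.Activation("gelu")
        self.pw2 = tf.keras.layers.Dense(dim)

    def call(self, x):
        shortcut = x
        x = self.dwconv(x)
        x = self.norm(x)
        x = self.pw1(x)
        x = self.act(x)
        x = self.pw2(x)
        return shortcut + x   # residual connection
```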
## III Results First, we briefly describe the datasets used along with the implementation details. Then, we assess the compression efficiency of our method with a rate-distortion comparison and compute the average bitrate savings on four commonly-used evaluation datasets. We further elaborate on a model scaling and complexity study to consistently examine the effectiveness of our proposed method against pioneering ones. ### _Experimental Setup_ **Datasets.** The training set of the CLIC2020 dataset is used to train the proposed ConvNeXt-ChARM model. This dataset contains a mix of professional and user-generated content images in RGB color and grayscale formats. We evaluate image compression models on four datasets, including Kodak [29], Tecnick [29], JPEG-AI [29], and the testing set of CLIC21 [29]. For a fair comparison, all images are cropped to the highest possible multiples of 256 to avoid padding for neural codecs. **Implementation details.** We implemented all models in TensorFlow using the TensorFlow Compression (TFC) library [30], and the experimental study was carried out on an RTX 5000 Ti GPU. All models were trained on the same CLIC2020 training set for 3.5M steps using the ADAM optimizer with parameters \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The initial learning rate is set to \(10^{-4}\) and drops to \(10^{-5}\) for another 100k iterations, with \(L=R+\lambda D\) as the loss function. The MSE is used as the distortion metric in RGB color space. Each batch contains eight random 256 \(\times\) 256 crops from training images. To cover a wide range of rate and distortion, we trained five models of our proposed method with \(\lambda\in\{0.006,0.009,0.020,0.050,0.150\}\). Regarding the evaluation on CPU, we used an Intel(R) Xeon(R) W-2145 @ 3.70GHz. **Baselines.**1 We compare our approach with the state-of-the-art neural compression method SwinT-ChARM proposed by Zhu _et al_. [17], and with non-neural compression methods, including better portable graphics (BPG) (4:4:4) and the most up-to-date VVC official Test Model VTM-18.0 in the All-Intra profile configuration. Footnote 1: For a fair comparison, we only considered SwinT-ChARM [17] from the state-of-the-art models [17, 19, 20, 21, 22, 23], due to the technical feasibility of training and evaluating the models under the same conditions within a reasonable time. ### _Rate-Distortion coding performance_ To demonstrate the compression efficiency of our proposed approach, we visualize the rate-distortion curves of our model and the baselines on each of the considered datasets. Considering the Kodak dataset, Figure 3 shows that our ConvNeXt-ChARM outperforms the state-of-the-art learned approach SwinT-ChARM, as well as the BPG(4:4:4) and VTM-18.0 traditional codecs, in terms of PSNR. Regarding rate savings over VTM-18.0, SwinT-ChARM achieves higher compression gains only at low PSNR values. Our model generalizes to high-resolution image datasets (Tecnick, JPEG-AI, and CLIC21), where it still outperforms the traditional codecs and the learned image compression method SwinT-ChARM in terms of PSNR. Besides the rate-distortion curves, we also evaluate the different models using Bjontegaard's metric [31], which computes the average bitrate savings (%) between two rate-distortion curves. In Table I, we summarize the BD-rate of the image codecs across all four datasets, with VTM-18.0 as the anchor. On average, ConvNeXt-ChARM achieves a 5.24% rate reduction compared to VTM-18.0 and a 1.22% relative gain over SwinT-ChARM. Figure 1 shows the BD-rate (with VTM-18.0 as an anchor) versus the decoding time of various approaches on the Kodak dataset. It can be seen from the figure that our ConvNeXt-ChARM achieves a good tradeoff between BD-rate performance and decoding time.

Fig. 3: Rate-distortion comparison on the Kodak dataset.
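The BD-rate figures above follow Bjontegaard's procedure [31]. For reference, a rough sketch of one common way to compute it (piecewise-cubic interpolation of log-rate over the overlapping PSNR range) is shown below; this is a generic reconstruction, not necessarily the exact implementation used for the reported numbers.

```python
# Generic sketch of the Bjontegaard delta-rate (BD-rate, %) between two RD curves.
# Assumes each curve is given as numpy arrays sorted by increasing PSNR.
import numpy as np
from scipy import interpolate, integrate

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    log_ra, log_rt = np.log10(rate_anchor), np.log10(rate_test)
    f_a = interpolate.PchipInterpolator(psnr_anchor, log_ra)   # log-rate vs. quality
    f_t = interpolate.PchipInterpolator(psnr_test, log_rt)
    lo = max(psnr_anchor.min(), psnr_test.min())               # overlapping PSNR range
    hi = min(psnr_anchor.max(), psnr_test.max())
    avg_diff = (integrate.quad(f_t, lo, hi)[0] -
                integrate.quad(f_a, lo, hi)[0]) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0   # negative values = bitrate savings vs. anchor
```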
### _Models Scaling Study_ We evaluated the decoding complexity of the three considered image codecs by averaging decoding time across 7000 images at 256\(\times\)256 resolution, encoded at 0.6 bpp. We present the image codec complexity in Table II, including decoding time on GPU and CPU, the number of floating point operations (GFLOPs), the memory required by the model weights, and the total number of model parameters. The models run with TensorFlow 2.8 on a workstation with one RTX 5000 Ti GPU. The Conv-ChARM model refers to the Minnen _et al_. [16] architecture with a latent depth of 320 and a hyperprior depth of 192, and can be considered as an ablation of our model without ConvNeXt blocks. We maintained the same slice transform configuration of the ChARM for the three considered models. The total decoding time of the SwinT-ChARM decoder is lower than that of the ConvNet-based decoders on GPU but is the highest on CPU. Our ConvNeXt-ChARM is lighter than Conv-ChARM in terms of the number of parameters, which demonstrates the ConvNeXt block's well-engineered design. Compared with SwinT-ChARM, our ConvNeXt-ChARM shows lower complexity, requiring lower training time with less memory consumption. In addition, Figure 4 shows that our method sits in an interesting region, achieving a good tradeoff between BD-rate score on Kodak, total model parameters, and MFLOPs per pixel, highlighting an efficient and hardware-friendly compression model.

Fig. 4: Model size scaling. BD-Rate versus MFLOPs per pixel for our model ConvNeXt-ChARM compared to Conv-ChARM and SwinT-ChARM (for both encoding and decoding).

### _Comparison with SwinT-ChARM_ ConvNeXt-ChARM achieves good rate-distortion performance while significantly reducing the latency, which is potentially helpful for conducting, with further optimizations, high-quality real-time visual data transmission, as recently proposed in the first software-based neural video decoder running HD resolution video in real-time on a commercial smartphone [32]. Since fewer works attempt to explicitly compare Swin Transformer and ConvNet-based blocks, here we compare our ConvNeXt-ChARM with SwinT-ChARM under the same conditions and configurations. We found that a well-designed ConvNet, without any additional attention modules, can outperform the highly coveted Swin Transformer in learned transform coding in terms of BD-rate, with more visually pleasing reconstructions and comparable decoding latency. In addition, ConvNeXt-ChARM maintains the efficiency and maturity of standard ConvNets and their fully-convolutional nature for both training and inference. There is no doubt that Transformers are excellent architectures with enormous potential for the future of various computer vision applications. However, their vast hunger for data and computational resources [33] poses a big challenge for the computer vision community. Taking SwinT-ChARM as an example, it needs, on average, 1.33\(\times\) more time than ConvNeXt-ChARM to train for the same number of epochs. ## IV Conclusion In this work, we reconcile compression efficiency with ConvNeXt-based transform coding paired with a ChARM prior and propose an up-and-coming learned image compression model, ConvNeXt-ChARM. Furthermore, we inherit the advantages of pure ConvNets in the proposed method to improve both efficiency and effectiveness. The experimental results, conducted on four datasets, showed that our approach outperforms previous learned and conventional image compression methods, establishing a new state-of-the-art rate-distortion performance with a significant decrease in decoding runtime. Future work will further investigate efficient low-complexity entropy coding approaches to further enhance decoding latency. With the development of GPU chip technology and further engineering optimization, learning-based codecs will be the future of coding, achieving better compression efficiency when compared with traditional codecs and aiming to bridge the gap to real-time operation. We hope our study will challenge certain accepted notions and prompt people to reconsider the significance of convolutions in computer vision.
2306.09358
Modular Controllers Facilitate the Co-Optimization of Morphology and Control in Soft Robots
Soft robotics is a rapidly growing area of robotics research that would benefit greatly from design automation, given the challenges of manually engineering complex, compliant, and generally non-intuitive robot body plans and behaviors. It has been suggested that a major hurdle currently limiting soft robot brain-body co-optimization is the fragile specialization between a robot's controller and the particular body plan it controls, resulting in premature convergence. Here we posit that modular controllers are more robust to changes to a robot's body plan. We demonstrate a decreased reduction in locomotion performance after morphological mutations to soft robots with modular controllers, relative to those with similar global controllers - leading to fitter offspring. Moreover, we show that the increased transferability of modular controllers to similar body plans enables more effective brain-body co-optimization of soft robots, resulting in an increased rate of positive morphological mutations and higher overall performance of evolved robots. We hope that this work helps provide specific methods to improve soft robot design automation in this particular setting, while also providing evidence to support our understanding of the challenges of brain-body co-optimization more generally.
Alican Mertan, Nick Cheney
2023-06-12T16:36:46Z
http://arxiv.org/abs/2306.09358v1
# Modular Controllers Facilitate the Co-Optimization of Morphology and Control in Soft Robots ###### Abstract. Soft robotics is a rapidly growing area of robotics research that would benefit greatly from design automation, given the challenges of manually engineering complex, compliant, and generally non-intuitive robot body plans and behaviors. It has been suggested that a major hurdle currently limiting soft robot brain-body co-optimization is the fragile specialization between a robot's controller and the particular body plan it controls, resulting in premature convergence. Here we posit that modular controllers are more robust to changes to a robot's body plan. We demonstrate a decreased reduction in locomotion performance after morphological mutations to soft robots with modular controllers, relative to those with similar global controllers - leading to fitter offspring. Moreover, we show that the increased transferability of modular controllers to similar body plans enables more effective brain-body co-optimization of soft robots, resulting in an increased rate of positive morphological mutations and higher overall performance of evolved robots. We hope that this work helps provide specific methods to improve soft robot design automation in this particular setting, while also providing evidence to support our understanding of the challenges of brain-body co-optimization more generally. 1 Footnote 1: Code repository: [https://github.com/mertan-a/jececo-23](https://github.com/mertan-a/jececo-23)

Figure 1. Main hypothesis: In the brain-body co-optimization of soft robots, the locomotion performance of modular controllers is significantly more robust than global controllers to perturbations of their robot's morphology. This allows better transfer of that controller to a robot's offspring with morphological mutations. This increases the rate of successful morphological mutations during the co-optimization process and leads to a more efficient and effective search over robot designs.

## 1. Introduction A highly touted feature in the evolution of biological creatures is the adaptive advantage of complex biological forms and the environmental and behavioral niches which these unique morphologies satisfy (Han et al., 2017; Cheney et al., 2018; Cheney et al., 2019; Cheney et al., 2020). Artificial creatures and engineered robots also benefit from highly effective and efficient body plans (Kang et al., 2019). This is especially true in soft robots, where flexible and compliant materials enable a variety of complex robots and lifelike behaviors that are inaccessible to their rigid counterparts (Sheney et al., 2019; Cheney et al., 2020; Cheney et al., 2020). These materials also hold an increased potential for morphological computation (Steintein and Steintein, 2002; Steintein and Steintein, 2002), making the design of robot body plans, and the tight integration of that body plan with its control strategy, especially critical to the robot's performance. However, these advantages also come with a significant challenge: the compliance, complexity, and abundance of non-linear interactions across the form and dynamics of a soft robot lead to a particularly non-intuitive design space, suggesting the increased potential for automated design over manual engineering.
Despite the interest and potential in brain-body design automation, co-optimization of morphology and control through evolutionary algorithms remains an open and challenging problem. It has been suggested that this is due, in part, to the specialization of robot controllers and behaviors to the particular morphologies that they inhabit - as tightly coupled and specialized controllers or body plans are not amenable to change in either component without a coordinated change in the other (Beng et al., 2016). While prior work attempted to create genetic representations that increase the likelihood of more coordinated changes (Steintein and Steintein, 2002; Steintein and Steinteintein, 2002) or rely on diversity maintenance to reduce selection pressure during the re-adaption of brain and body post-mutation (Steintein and Steintein, 2002; Steintein and Steinteintein, 2002), all of these works accept the fragile co-adaptation at the heart of the problem as a given. Here, we take a slightly different perspective and ask how we can reduce the amount of fragility in our co-optimization by making the components of our brain-body system more robust to changes in the other - which, we hypothesize, will help to reduce the challenges of making successful mutations to only one component of the system and enable more effective brain-body co-optimization. In effect, we are suggesting that certain implementation decisions about our morphologies and controllers may smooth the fitness landscape by reducing the coupling between the controller and morphological genes/parameters. This philosophy could be studied in the context of producing more robust morphologies, though we first start in this work focusing on an investigation of more robust controllers to morphological changes. In particular, we note the robustness attributed to modular controllers (Steintein and Steintein, 2002) and firstly hypothesize that soft robots undergoing brain-body co-optimization with modular controllers will more effectively transfer to offspring with morphological mutations, then secondarily hypothesize that an increased rate of positive morphological mutations will lead to an overall increase in the effectiveness of brain-body co-optimization in soft robots. ## 2. Related Work **Modular Control** Modularity is considered an important feature and is under investigation in both natural (Steintein and Steintein, 2002) and artificial systems (Beng et al., 2016). Especially for robotics, modular robots are considered versatile, robust, and adaptive (Steintein and Steintein, 2002). Yet it is challenging to design and control such systems, and it is an active area of research in both rigid (Steintein and Steintein, 2002; Steintein and Steinteintein, 2002; Steintein and Steinteintein, 2002) and soft robotics (Stein and Steinteintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002); Steintein and Steintein, 2002). **Soft Robotics** The field of soft robots with volumetric actuation started with (Steintein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002) and with the availability of simulators such as (Beng et al., 2016; Stein and Steintein, 2002; Stein and Steinteintein, 2002), many others have followed. 
Soft robots are evolved for locomotion tasks in different environments (Beng et al., 2016; Stein and Steintein, 2002; Stein and Steinteintein, 2002), their ability to change their shape volumetrically are investigated (Beng et al., 2016; Stein and Steintein, 2002; Stein and Steintein, 2002), different types of control strategies are developed (Stein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002). Lifetime development in a co-optimization setting is studied in (Stein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002). The difficulty of co-optimization due to fragile co-dependence of brain and body is explored (Beng et al., 2016), and algorithmic solutions that combat the resulting premature convergence through increased diversity are proposed (Stein and Steintein, 2002). Different representations and their effects on the evolutionary optimization process are studied in (Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002). Closer to our work are the works of (Stein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteinteintein, 2002). Huang et al. (Huang et al., 2002) train modular controllers with a message-passing scheme to control rigid robots for a locomotion task. Medvet et al. (Medvet et al., 2002) evolve modular controllers with message-passing for various fixed morphologies. In follow-up work, Medvet et al. (Medvet et al., 2002) co-optimizes morphology and control for soft robots but focuses on the effects of evolutionary algorithm and representation on biodiversity and performance. Pigozzi et al. (Pigozzi et al., 2002) evolves modular controllers that use indexing and self-attention mechanism for soft robots with fixed morphologies. Kvalsund et al. (Kvalsund et al., 2002) explore centralized and decentralized control in modular rigid robots and demonstrate the trade-off between them. As opposed to (Stein and Steintein, 2002; Stein and Steinteintein, 2002), our modular controllers don't use any message-passing scheme or indexing and can work with arbitrary morphology without any change. Instead of experimenting with fixed morphologies as in (Stein and Stein, 2002; Stein and Steintein, 2002), we focus on the more challenging problem of co-optimization of morphology and control and investigate the dynamics of co-optimization with modular controllers. ## 3. Methods ### Simulation Our work uses the open-source Evolution GYM (EvoGym) benchmark (Beng et al., 2016). It consists of a mass-spring system-based soft-body simulation engine and various task environments. Similar to the simulation engines in (Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steinteintein, 2002; Stein and Steinteintein, 2002), EvoGym works in 2D. The simulation engine and the provided environments are open-source and provide Python API for fast prototyping and experimenting. Please see Section 4 for the details of the environment used in our experiments. The soft robots are represented as a mass-spring system in a grid-like layout. Each voxel is initialized as a cross-braced square with masses in four corners and ideal springs in the edges. These springs can have different spring constants depending on the voxel material type. 
A voxel can be initialized from rigid or soft passive material or horizontally or vertically actuating active material. Figure 4 shows example robots with all four materials. The black voxels are rigid, the grey voxels are soft, and the orange and blue voxels are horizontal and vertical actuating voxels, respectively. The active materials' color shade represents their volume and gets darker as the voxel contracts and gets lighter as the voxel expands. Following the standard practice of using a bounding box in the literature (Beng et al., 2016; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002; Stein and Steintein, 2002), we limit the robot design to a 5x5 bounding box in our experiments to keep the design space tractable. The simulation provides a number of observations coming from the robot and the environment. A controller can observe a voxel's velocity \(V\in\mathbb{R}^{2}\), its volume \(v\in\mathbb{R}\), and its material (or its absence) as a one-hot encoded vector \(M\in[0,1]^{5}\). A periodic time signal \(t\in[0..24]\) (simulation time step mod 25) is also available to help controllers to create a periodic behavior. ### Control paradigm The soft robots are controlled by specifying the actuation \((a\in[0,1])\) of each voxel with active material in the robot at each time step of the simulation. The controllers are modeled with a function \(f\) that maps observations \(O\) to actions \(A\) as \(f(O)=A\). **Global Control** Similar to (Han et al., 2017; Li et al., 2018; Li et al., 2019), we devise a straightforward control for the robots where a centralized, global controller processes the complete information of the soft robot to assign actions to each voxel, as illustrated in Figure 1(a). Observations coming from each voxel are concatenated into vector \(O\), and the global controller \(f_{G}\) processes all the available information to output vector \(A\) that contains actions for each voxel as in \(f_{G}(O)=A\). Given this formulation of global controllers and the brain-body co-optimization setting, compatibility problem could arise. A controller \(f_{G}\) designed for a particular morphology could be incompatible with another morphology that comes up during optimization, which causes issues for simultaneous optimization of control and morphology. To overcome this issue and create a fair comparison with modular control, we use a simple caching trick similar to the one used in (Li et al., 2018), where the global controllers always expect observations from and output actions for the biggest robot possible (a 5x5 robot). To make it compatible with any given morphology in our design space, we zero-pad the observations for missing voxels and mask out the unnecessary actions for the morphology at hand. **Modular Control** We investigate the modular control of soft robots due to voxel-based soft robots' aptness for modular control and the advantages of modularity (Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). In modular control, each active voxel of the soft robot contains a copy of the same controller \(f_{M}\) with the same parameters \(\theta_{M}\). As shown in Figure 1(b), these controllers make observations from a local neighborhood of voxels \(N^{d}\) with distance \(d\) and output an action \(a\) for the voxel they belong to. 
Therefore, the modular controller \(f_{M}\) takes the form \(f_{M}(O_{i})=a_{i}\), where \(O_{i}\) is the observation for the \(i^{\text{th}}\) voxel and consists of the concatenation of observations from each voxel belonging to the Moore neighborhood \(N^{d}_{i}\) of voxel \(i\) with distance \(d\), \(a_{i}\) is the action for the \(i^{\text{th}}\) voxel, and \(i\in\textit{active voxels}\). If a voxel in the neighborhood \(N^{d}_{i}\) is missing, its velocity and volume are observed as a zero-vector and zero, respectively. This allows the controller to make sense of its local neighborhood's structure. We also note that this formulation of modular control is analogous to neural cellular automata (Les and Sorn, 2017). By design, the modular controller is agnostic to the robot's morphology in the sense that it can work with any robot morphology without needing any change. Importantly, since each voxel shares the same controller, the difference in behavior arises from the different observations each controller makes. ### Controller model Following the common practice of utilizing neural networks as powerful function approximators (Han et al., 2017; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), the controllers \(f_{M}\) and \(f_{G}\) are modeled by a single-hidden-layer MLP with learnable parameters \(\theta_{M}\) and \(\theta_{G}\), respectively. The hidden layer consists of 32 units with ReLU activations for both controllers and maps the observations into a feature vector. The MLP for modular controllers has a single output unit with a sigmoid activation, outputting an action based on the feature vector. The MLP for global controllers has 25 output units with sigmoid activations that map the feature vector to actions for each voxel separately. In all of our experiments, we use a Moore neighborhood of distance \(d=2\) for modular controllers, chosen empirically based on our initial experiments and so that both controllers have a similar number of parameters. Additionally, we assume a 5x5 bounding box for the design space for global controllers. With these models and hyperparameters, both controllers have 201 inputs, modular controllers have 6497 parameters, and global controllers have 7289 parameters to optimize. While global controllers have slightly more parameters to optimize, they have the advantage of separately tuning the behavior of each voxel. On the other hand, modular controllers have fewer parameters to optimize, but changes in the controller could potentially affect the behavior of all voxels.

Figure 2. Illustration of controller variants: (a) The global controller utilizes all the available information and outputs an action matrix that contains actions for all possible voxels. Actions for active voxels are assigned, and the rest is discarded. (b) Each active voxel shares the same modular controller. Observations from a local neighborhood are used to predict actions. We use an observation range of \(1\) in the figure for ease of illustration, and a Moore neighborhood of size \(2\) in our experiments.

### Training algorithm Following (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), we use an evolutionary algorithm to optimize the morphology and control of the soft robots simultaneously. In particular, we use \((\mu+\lambda)\)-Evolution Strategies where \(\mu=16\) and \(\lambda=16\).
Similar to (Li et al., 2019; Li et al., 2019), we add a random individual to the population at each generation to increase diversity. The selection is based on a multi-objective Pareto ranking on an individual's age and fitness as in (Kolmogorov, 1966), where the individual's age is increased at every generation and set to 0 for newly created individuals. Unlike (Kolmogorov, 1966), we also set age to 0 after each mutation, incentivizing high levels of diversity and turnover in the population, loosely related to a less extreme version of Real et al. (Real et al., 2017)'s method of giving all children a selection advantage over their parents. Recombination was not considered in this work. The morphology is represented directly in the genome as a 2D matrix consisting of the materials of voxels ([1..4]) or 0 if no voxel exists in that location. New morphologies are created through mutating existing morphologies or mutating an empty morphology. Similar to (Bartos et al., 2017), the mutation operator for morphology works by going through each possible voxel location and changing it randomly to one of [0..4] with 10% probability. We also ensure that each morphology has at least two voxels made of active material and at least 20% of its voxels filled, by rejecting mutations that violate these constraints. The controller genome consists of a vector of parameters (\(\theta_{M}\) or \(\theta_{G}\)). New controllers are created from scratch by PyTorch's default initialization (Kolmogorov, 1966) or created through mutating existing controllers by adding a noise vector of the same size sampled from \(\mathcal{N}(0,0.1)\). During evolution, offspring are created either by mutating an existing individual's controller or its morphology. Following (Bartos et al., 2017), we heuristically use a 50% probability to decide which part of the individual is mutated.
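To make the morphology mutation operator above concrete, a minimal sketch is given below. The voxel material codes (0 = empty, 1 = rigid, 2 = soft, 3/4 = horizontally/vertically actuating) and the rejection-sampling loop are assumptions based on the description and EvoGym's conventions, not the exact implementation used here.

```python
# Sketch of the morphology mutation operator described in Section 3.4.
import numpy as np

ACTIVE = (3, 4)   # assumed codes for horizontally / vertically actuating voxels

def is_valid(body: np.ndarray) -> bool:
    # constraints from the text: >= 2 active voxels and >= 20% of voxels filled
    return np.isin(body, ACTIVE).sum() >= 2 and np.count_nonzero(body) >= 0.2 * body.size

def mutate_morphology(body: np.ndarray, p: float = 0.1, rng=None) -> np.ndarray:
    """Re-sample each voxel location to a value in [0..4] with probability p,
    rejecting offspring that violate the constraints."""
    rng = rng or np.random.default_rng()
    while True:
        child = body.copy()
        mask = rng.random(child.shape) < p
        child[mask] = rng.integers(0, 5, size=int(mask.sum()))
        if is_valid(child):
            return child
```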
## 4. Experiments

We use EvoGym's _Walker-v0_ environment (Bartos et al., 2017) as the locomotion task for the evolved robots. In this task, the robot must locomote on flat terrain as far as possible. We use a modified reward function \[R(r,T)=\Delta p_{x}^{r}+\mathbf{I}(r)+\sum_{t=0}^{T}(-0.01)+5, \tag{1}\] to encourage the robot \(r\) to move as fast as possible, where \(\Delta p_{x}^{r}\) is the change in robot \(r\)'s position in the positive x direction, and \(\mathbf{I}(r)\) is the indicator function that takes the value of 1 if the robot \(r\) has reached the end of the terrain and 0 otherwise. The summation term applies a small penalty at each time step to encourage the robot to reach the end of the terrain faster. The last term, a positive constant equal to the maximum time penalty (\(500\times 0.01=5\)), is used to shift the rewards to be positive for ease of analysis. The environment runs until the robot reaches the end of the terrain or for 500 time steps, whichever happens first. Similar to (Srivastava et al., 2017), we apply action repetition to speed up the simulations and prevent controllers from exploiting high-frequency dynamics. The controllers are queried every fourth time step, and the last actions are repeated in the remaining time steps. We apply the Wilcoxon rank-sum test to report the statistical significance of our results wherever possible.

### Brain-Body co-optimization

We compare the global controller and the modular controller in the problem of simultaneously optimizing morphology and control. We run the evolutionary algorithm described in Section 3.4 for 5000 generations for each controller and repeat the same experiments 100 times with different random seeds. Figure 3 illustrates the results of the brain-body co-optimization experiments. In the main plot, we plot the performance of the best individuals that evolved in each generation; the solid lines show the average fitness values of the best-performing individuals over 100 repetitions, and the shaded regions show the 95% bootstrapped confidence intervals. Both experimental treatments have converged by the 5000-generation mark, as more than 90% of the runs in each treatment show no fitness improvement larger than 0.1 over the last 500 generations. We find that the best solution found ("run champion") across each of the 100 trials is, on average, more fit for modular controllers (fitness of 10.42) than for global controllers (9.67; p=0.0016). It is also the case that the champions evolved in runs featuring modular controllers displayed less variation than trials employing a global controller (range: [6.51, 12.5] vs. [4.73, 12.23]; IQR: [9.1, 11.5] vs. [7.9, 11.2]) (inset plot in Figure 3). If robots with modular controllers were simply a scaled-up, faster version of their globally controlled counterparts, we might expect the modular control treatment to have both higher average fitness and higher variability. This reduction in absolute variability across runs, despite higher overall values, may be suggestive of fundamentally differing fitness landscapes in the two settings, or perhaps of differing abilities of evolution in these two settings to escape local optima in similarly rugged fitness landscapes. In addition to reaching a higher level of fitness, robots evolved with modular controllers converge to their max fitness value (the fitness level found at generation 5000) significantly (all p<0.05) faster than those with global controllers, with modular controllers reaching 99% of their final performance by generation 2710.06 on average compared to 3059.37 for global controllers (and 1379.71 generations for modular vs. 1786.89 for global controllers to reach 95% of their respective max fitness levels, 1001.74 vs. 1466.28 to reach 90%, and 439.84 vs. 709.61 to reach 80%).

Figure 3. Comparison of modular and global controllers on the problem of brain-body co-optimization for the experimented locomotion task. Modular controllers adapt more rapidly in the early generations and significantly outperform global controllers after 5000 generations of brain-body co-optimization. Lines show the mean, and shaded regions show the 95% confidence intervals calculated over 100 experiments. The boxplot shows the distribution of the performance of champions over the 100 trials, demonstrating higher mean, median, max, and min performance for modular controllers.

Figure 4 shows a few of the champions that evolved with modular controllers. The gaits are shown as a time series of snapshots. Similar to (Kumar et al., 2017; Wang et al., 2018), we see diverse shapes and behaviors evolve. While the robot at the top uses two leg-like limbs to gallop forward, the middle robot starts its movement by falling forward and then balances itself on a single leg that is used to throw itself forward. In the case of the bottom robot, we see more vertical movement where the robot jumps forward with the help of its forward limb and spends much of its time in the air.
This exemplifies the ability to evolve diverse shapes and behaviors, even in a limited 2-D design space. In the rest of this section, we investigate where the performance difference arises during the co-optimization with different controllers. In particular, we investigate whether modular control has a better inductive bias or transfers better to similar morphologies.

### Optimization on fixed morphologies

Similar to (Kumar et al., 2017; Wang et al., 2018), we optimize both controllers for heuristically chosen fixed morphologies to see whether modular control has a better inductive bias. Figure 5 shows the experimented fixed morphologies. Biped (Fig. 5a) and worm (Fig. 5b) are commonly used morphologies in the literature (Kumar et al., 2017; Wang et al., 2018). We also experiment with less commonly used morphologies such as triped (Fig. 5c) and block (Fig. 5d), as we intuitively think that they require different locomotion strategies. For these experiments, we only considered morphologies made out of a single material for ease of design. We optimize global and modular controllers for each morphology for 1500 generations, repeat the experiments with different random seeds 10 times, and report 95% confidence intervals. The results are illustrated in Figure 6, where we see the fitness of the best individual at each generation. Interestingly, modular controllers' advantages in the co-optimization setting are not observed when controllers are directly optimized for a fixed morphology. Both controllers achieve comparable performances for Worm, Triped, and Block (\(p>0.05\)), and the global controller performs better on Biped (\(p<0.01\)). Moreover, we don't observe faster convergence of modular controllers. While both types of controllers are evolved to locomote the robots successfully, the performance on certain morphologies, such as Biped, is better than on the other morphologies considered. Since both controllers have similar performances when optimized for a single morphology, we hypothesize that the performance difference between the two control paradigms during the co-optimization arises from their effects on the search over the morphology space. This agrees with earlier findings showing that the key to successful brain-body co-optimization is preventing the premature convergence of the body plan that results from fragile co-adaptation between a robot's controller and morphology (Kumar et al., 2017; Wang et al., 2018). So the question is this: how can modular controllers prevent prematurely eliminating underperforming yet promising body plans from the population? Ideally, we would like to assess the fitness potential of a new body plan correctly, which can be approximately achieved by reducing the detrimental effects of morphological changes on immediate fitness. To test whether modular controllers have an advantage in this sense, we investigate their ability to control multiple morphologies and their transferability to other morphologies.

Figure 4. Time series of gaits, as robots move from left to right, exemplifying robots evolved with modular control.

Figure 5. Experimented fixed morphologies.

Figure 6. Fitness of the best individual over evolutionary time for fixed morphology experiments. Modular controllers do not perform better than global controllers when trained in isolation on any of these single morphologies.
### Optimization on multiple fixed morphologies

If a controller can successfully control multiple morphologies, it can be transferred from one morphology to another without drastic performance drops. Even though the controllers are not explicitly optimized for multiple morphologies during the co-optimization of the brain and body, we experiment with joint training on multiple fixed morphologies to determine their potential for this task. Similar to (Krishnan et al., 2017), we optimize both controllers for all experimented fixed morphologies (see Figure 5) jointly. Inspired by (Krishnan et al., 2017), we use the minimum performance among all morphologies as the fitness value for the controllers. Ten trials for each type of controller, initialized with different random seeds, are evolved for 5000 generations. As shown in Figure 7, both controllers achieve comparable performances (\(p>0.05\)), yet the populations with modular controllers converge more quickly. This demonstrates an advantage of modular control over global control; modular controllers can be optimized for multiple morphologies significantly faster. While the co-optimization setting doesn't explicitly optimize controllers for multiple morphologies, adapting quickly to multiple morphologies is important for recovering quickly from the detrimental effects of morphological changes. Combined with our evolutionary algorithm that optimizes for age and fitness, where new individuals face less selection pressure, individuals with modular controllers may have a better chance of demonstrating their potential. Additionally, we note that the performances on multiple morphologies are lower than on single morphologies, suggesting that performance and generalization are competing objectives. Nonetheless, successful locomotion behaviors are evolved. This is in contrast to (Krishnan et al., 2017), where joint training on multiple rigid fixed robots oftentimes performs very poorly. We conjecture that this may be due to soft robots' compliance. While rigid robots require distinct strategies to locomote, soft robots' compliance allows for less effective yet functional strategies that work on multiple different soft robots.

### Transferability to other morphologies

To further support our claims, we check controllers' transfer performances to similar morphologies. To create similar morphologies, we mutate the original morphologies and create _neighboring morphologies in the mutation space_. To see how the controller's transfer performance is affected by the (dis)similarity of the new body plan to the original morphology, we also sample neighbors at different distances, where distance is the number of mutations applied. Given this way of creating similar morphologies, we start with controllers optimized for single fixed morphologies. The champion of each run from Section 4.2 is transferred to morphologies some number of mutations away from the body plan it was optimized for. 20 distinct neighboring morphologies are sampled per distance, and the controllers' zero-shot (no additional training) and one-shot transfer (one generation of controller evolution) performances, as the relative change in fitness, are measured.
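The measurement protocol just described can be sketched as follows (illustrative Python; `evaluate`, `mutate_morphology`, and `evolve_one_generation` are placeholders for the simulator rollout and the operators of Section 3.4, passed in as arguments rather than taken from the authors' code):

```python
def transfer_performance(body, theta, distance, evaluate, mutate_morphology,
                         evolve_one_generation, n_neighbors=20):
    """Relative change in fitness when a controller is moved to nearby bodies."""
    base = evaluate(body, theta)                 # fitness on the original morphology
    zero_shot, one_shot = [], []
    for _ in range(n_neighbors):
        neighbor = body
        for _ in range(distance):                # a neighbor is `distance` mutations away
            neighbor = mutate_morphology(neighbor)
        f0 = evaluate(neighbor, theta)           # zero-shot: no additional training
        f1 = evaluate(neighbor, evolve_one_generation(neighbor, theta))  # one-shot
        zero_shot.append((f0 - base) / base)
        one_shot.append((f1 - base) / base)
    return zero_shot, one_shot
```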
Figure 8 illustrates the results of transferring controllers optimized for a single morphology to the neighbors of that morphology. The performance of robots with both controllers drops as they are transferred to more dissimilar morphologies. We also see that both controllers regain performance when as little as one generation of controller finetuning is performed on the new morphology (one-shot transfer), suggesting the ability to rapidly re-adapt controllers to nearby morphologies. The finding that modular controllers' transfer performance is never worse and oftentimes better than that of global controllers (for both zero- and one-shot transfer), even though both perform similarly when optimized for a single morphology, suggests that modular controllers are not inherently better at finding effective locomotion strategies for soft robots, but that they are significantly more robust and adaptable to morphological changes.

The above findings apply to transferred controllers that are optimized for a particular fixed morphology for 1500 generations. This doesn't necessarily represent the issues of fragile co-adaptation that may occur during a robot's brain-body co-optimization process. To convincingly demonstrate the transferability of modular controllers during co-optimization, we take the best individuals from different generations during the co-optimization process and measure their zero-shot and one-shot transfer performances. There is one difference from the previous transferability experiment: each individual sampled from the co-optimization runs potentially has a different morphology. Therefore, in this experiment, each sampled controller is transferred to a potentially different set of neighbors obtained by mutating its original body. Figure 9 demonstrates the transferability of controllers sampled from co-optimization runs. The transfer performance of modular control is never worse and oftentimes significantly better compared to global control. The performances drop as the distance to the original morphology increases. Both trends are consistent in zero- and one-shot settings and across all sampled generations. Modular controllers' ability to transfer better helps during co-optimization by increasing the chances of survival for individuals with newly modified morphologies. If a controller transfers better to a similar morphology, the time required to adapt the controller fully will be shorter, and the probability of eliminating a promising morphology from the population will be lower. This results in a better search over the morphology space, even without methods aimed at protecting new morphologies during search [7]. Figure 10 illustrates the advantage that modular controllers confer to morphological search: changes to the morphology make up a significantly larger percentage of the successful mutations that enable brain-body co-optimization (\(p<0.05\)). This is true both for the successful mutations that eventually lead to the champions of each run (10a) and for all successful mutations to any individual throughout the search process (10b).

Figure 7. Fitness over evolutionary time plot for joint training on all four of the fixed morphologies. Even though end performances are comparable, modular control converges quicker than global control, suggesting ease of optimization for different morphologies simultaneously. We omit the performance trajectories on individual morphologies as they follow the joint fitness very closely.

Figure 8. Zero-shot transfer (performance immediately after a morphological mutation; dotted lines on the left side) and one-shot transfer (after one epoch of controller evolution; solid lines on the right side) performance of controllers optimized for fixed morphologies.
The champion of each run is transferred to neighboring morphologies with increasing dissimilarity (number of mutations away on the x-axis). All settings show an average decrease in performance upon morphological transfer (negative relative change on the y-axis), but the modular control is never worse and often significantly outperforms (top axis) the global controller at transfer to morphological neighbors. Figure 9. Performance of controllers sampled from various points in evolutionary time for brain-body co-optimization when transferred to their morphological neighbors of various distances (See Fig. 8 for walk-through of figure components). Despite consistently negative morphological mutations, modular control is never worse and often more robust/adaptable, with significantly smaller drops in fitness compared to global control for both zero and one-shot morphological transfer. ## 5. Discussion The findings above, that robots undergoing brain-body co-optimization with modular controllers lead to a greater number of successful morphological mutations and higher overall fitness values than those employing global controllers, suggest that enabling successful morphological mutations is a key to enabling successful brain-body co-optimization in evolving soft robots. Our demonstration that modular controllers more effectively transfer to children produced by morphological mutations helps to provide evidence for a mechanistic understanding of how these controllers convey that advantage within the evolutionary co-optimization process. The adaptability and robustness of modular controllers, relative to their similarly sized but globally oriented counterparts, may not be entirely surprising and is supported by general intuition and prior findings (Steintein et al., 2016; Steintein et al., 2016). But we are not aware of previous work that has demonstrated the connection between these more adaptable controllers and the importance of being adaptable/robust to morphological mutations during the evolutionary co-optimization process. This work investigates the effect of controller robustness to morphological perturbations. Inspired by the positive findings here, future work will similarly investigate the effect of morphologies which are more robust to controller perturbations. Their robustness may be due to the implementation choices of various types or encodings of morphologies (as done here for controllers), but the optimization of bodies and brains which are evolved or trained specifically to be robust, rapidly adaptable (Krishnan et al., 2017), or to recover in few shots (Steintein et al., 2016) will be of great interest and value. Thus, this work ties into the broader study of the evolution of robustness (Bahdan et al., 2016; Steintein et al., 2016; Steintein et al., 2016) and how robustness affects the evolution of evolvability (Bahdan et al., 2016; Steintein et al., 2016; Steintein et al., 2016). The potential for rapid re-adaptation of controllers to similar morphologies demonstrated in Figure 8 also calls into question the overarching perspective and impetus of this work given in the introduction (that we are focusing here on exploring methods for avoiding fragile co-adaptation), as controllers can gain back a significant portion of the performance lost during the morphological mutation with just one generation of retraining (one-shot performance on both controllers is significantly higher than zero-shot performance with no re-adaptation). 
However, the finding that, even with a one-shot update for re-adaptation, the robot's performance is still less than half of what it was prior to the morphological mutation supports the idea that fragile co-adaptation is a serious issue, and that investigating methods to avoid it will likely be of value even in the presence of more advanced rapid re-adaptation strategies. This presence of negative morphological mutations was touted as a major hurdle for brain-body co-optimization in (Bahdan et al., 2016), and methods to sidestep it were presented explicitly in (Bahdan et al., 2016) and implicitly in (Steintein et al., 2016). While we do not use such methods that specifically look for diversity of new morphologies here, our use of an evolutionary algorithm highly incentivizing age-based diversity (Sec. 3.4) may be an important part of maintaining search despite these negative mutations. Additional work is ongoing to investigate this.

In this work, we explored an elementary version of modular control in a relatively small 2D design space and utilized a simple locomotion task for performance evaluation. In future work, complex control strategies in more complicated 3D design spaces and harder tasks involving more environmental observations and closed-loop information processing should be investigated, as we believe these settings present greater challenges for co-optimization and will inform the general applicability of this approach. Moreover, the advantages presented in the paper are significant but small. As the implementation decision to employ modular control is agnostic to the evolutionary algorithm it is paired with, combining this approach with algorithms specifically designed to aid brain-body co-optimization, such as (Bahdan et al., 2016), can be easily explored and may synergistically enhance co-optimization further.

## 6. Conclusion

In this paper, we investigate the potential of modular control of soft robots in the challenging co-optimization setting where both the morphology and the control are optimized together. We show that modular control enjoys better co-optimization performance in this setting: it converges faster and finds better solutions. Moreover, our work suggests that the performance gain arises from the better transferability of modular controllers to similar morphologies, enabling efficient search over the morphology space. This is in line with previous findings (Bahdan et al., 2016) and the theory of embodied cognition.

###### Acknowledgements.

This material is based upon work supported by the National Science Foundation under Grant No. 2008413. Computations were performed on the Vermont Advanced Computing Core supported in part by NSF Award No. OAC-1827314.
2306.11517
Non-locally discrete actions on the circle with at most $N$ fixed points
A subgroup of $\mathrm{Homeo}_+(\mathbb{S}^1)$ is M\"obius-like if every element is conjugate to an element of $\mathrm{PSL}(2,\mathbb{R})$. In general, a M\"obius-like subgroup of $\mathrm{Homeo}_+(\mathbb{S}^1)$ is not necessarily (semi-)conjugate to a subgroup of $\mathrm{PSL}(2,\mathbb{R})$, as discovered by N. Kova\v{c}evi\'{c} [Trans. Amer. Math. Soc. 351 (1999), 4823-4835]. Here we determine simple dynamical criteria for the existence of such a (semi-)conjugacy. We show that M\"obius-like subgroups of $\mathrm{Homeo}_+(\mathbb{S}^1)$ which are elementary (namely, preserving a Borel probability measure), are semi-conjugate to subgroups of $\mathrm{PSL}(2,\mathbb{R})$. On the other hand, we provide an example of elementary subgroup of $\mathrm{Diff}^\infty_+(\mathbb{S}^1)$ satisfying that every non-trivial element fixes at most 2 points, which is not isomorphic to any subgroup of $\mathrm{PSL}(2,\mathbb{R})$. Finally, we show that non-elementary, non-locally discrete subgroups acting with at most $N$ fixed points are conjugate to a dense subgroup of some finite central extension of $\mathrm{PSL}(2,\mathbb{R})$.
Christian Bonatti, João Carnevale, Michele Triestino
2023-06-20T13:12:57Z
http://arxiv.org/abs/2306.11517v3
# Non-locally discrete actions on the circle with at most \(N\) fixed points

###### Abstract.

A subgroup of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is Mobius-like if every element is conjugate to an element of \(\operatorname{PSL}(2,\mathbb{R})\). In general, a Mobius-like subgroup of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is not necessarily (semi-)conjugate to a subgroup of \(\operatorname{PSL}(2,\mathbb{R})\), as discovered by N. Kovacevic [Trans. Amer. Math. Soc. **351** (1999), 4823-4835]. Here we determine simple dynamical criteria for the existence of such a (semi-)conjugacy. We show that Mobius-like subgroups of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) which are elementary (namely, preserving a Borel probability measure), are semi-conjugate to subgroups of \(\operatorname{PSL}(2,\mathbb{R})\). On the other hand, we provide an example of elementary subgroup of \(\operatorname{Diff}_{+}^{\infty}(\mathbb{S}^{1})\) satisfying that every non-trivial element fixes at most \(2\) points, which is not isomorphic to any subgroup of \(\operatorname{PSL}(2,\mathbb{R})\). Finally, we show that non-elementary, non-locally discrete subgroups acting with at most \(N\) fixed points are conjugate to a dense subgroup of some finite central extension of \(\operatorname{PSL}(2,\mathbb{R})\).

**MSC 2020:** Primary 37C85, 57M60. Secondary 37B05, 37E05.

Key words and phrases: group actions on the circle, Mobius group, maps with at most \(N\) fixed points, non-locally discrete groups.

## 1. Introduction
It is clear that if a subgroup is locally discrete, then it is discrete in the usual sense. In fact, we will see that in the \(C^{0}\) topology, the two properties are equivalent (Lemma 4.3), whereas in the real-analytic setting of Rebelo [17], the equivalence is an open problem (see the discussion in Alvarez _et al._ [1, Remark 1.9]). The condition of considering non-wandering intervals only in the definition sounds technical, but it is actually the appropriate one (see Proposition 4.1). For non-locally discrete groups with at most \(N\) fixed points, we prove the following.

**Theorem C**.: _Let \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary, non-locally discrete subgroup with at most \(N\) fixed points. Then, there exists \(k\geq 1\) such that \(G\) is conjugate to a dense subgroup of \(\operatorname{PSL}^{(k)}(2,\mathbb{R})\)._

_Remark 1.5_.: The degree of the extension \(k\) is given by half of the largest number of fixed points for non-trivial elements in \(G\) (which is thus necessarily even).

**Corollary D**.: _Let \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary, non-locally discrete subgroup with at most \(2\) fixed points. Then, \(G\) is conjugate to a dense subgroup of \(\operatorname{PSL}(2,\mathbb{R})\)._

_Remark 1.6_.: Another consequence of Theorem C is that, if moreover the group \(G\) is finitely generated, it necessarily contains elements with irrational rotation number (see Kim, Koberda, and Mj [12, Lemma 2.24]). Actually, a key step of the proof (see Lemma 4.8) is to show that the _closure_ \(\overline{G}\) contains an element with irrational rotation number. Let us also comment that when considering finitely generated subgroups \(G\leq\operatorname{Diff}^{1}_{+}(\mathbb{S}^{1})\) which are non-locally discrete with respect to the \(C^{1}\) _topology_, the analogous conclusion is still open, even in real-analytic regularity (see Matsuda [15]).

Let us describe the structure of the paper. After introducing the main terminology in Section 2, we discuss Theorems A and B in Section 3. The last section is devoted to the proof of Theorem C. Let us give a quick outline of the proof. First of all, we remark that the action of \(G\) is minimal (Lemma 4.3). The next step is to understand which elements belong to the \(C^{0}\) closure \(\overline{G}\): we prove that \(\overline{G}\) also has at most \(N\) fixed points (Lemma 4.5) and, up to conjugacy, it contains the group of rotations \(\operatorname{SO}(2)\) (Lemma 4.8). We then conclude by using the classification by Giblin and Markovic [10] (see Theorem 4.9) of closed subgroups of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) acting transitively and containing a non-constant continuous path.

## 2. Preliminaries

### Topology of the circle and of the group of circle homeomorphisms

In what follows, we let \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be the group of orientation-preserving homeomorphisms of the circle. Here, the circle \(\mathbb{S}^{1}\) will be considered as the one-dimensional torus \(\mathbb{R}/\mathbb{Z}\), with its normalized Lebesgue measure. Given a Borel subset \(A\subset\mathbb{S}^{1}\), we denote by \(|A|\) its Lebesgue measure.
Any \(f\in\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) lifts to a homeomorphism \(F:\mathbb{R}\to\mathbb{R}\) commuting with integer translations, and this lift is defined uniquely, up to integer translations. This allows us to identify the universal cover of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) with the group \(\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\) of homeomorphisms of the line commuting with integer translations. If \(x,y\in\mathbb{S}^{1}\) are two distinct points, we denote by \((x,y)\) the (open) interval of points \(z\in\mathbb{S}^{1}\) such that the triple \((x,z,y)\) is positively ordered. When \(x=y\), \((x,x)\) is, by convention, just the empty set. Similarly we denote the other kinds of intervals \([x,y]\), \([x,y)\), \((x,y]\). We can then define the distance \(d(x,y)=\min\{|(x,y)|,|(y,x)|\}\). When \(d(x,y)<1/2\), this is the usual distance between points in any Euclidean chart containing \(x,y\), so that we can write (with slight abuse of notation) \(|x-y|\) instead of \(d(x,y)\). We want to consider the \(C^{0}\) topology on \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\), and in order to quantify this, we consider the distance \(d_{\infty}(f,g):=\sup_{x\in\mathbb{S}^{1}}d(f(x),g(x))\). It is well-known that the distance \(d_{\infty}\) on \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is not complete, and for this reason we will also consider the additional distance \(d_{C^{0}}(f,g):=d_{\infty}(f,g)+d_{\infty}(f^{-1},g^{-1})\), which makes \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) complete. When \(d_{\infty}(f,g)<1/2\), we can take any lifts \(\widetilde{f},\widetilde{g}\in\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\) of \(f\) and \(g\), respectively, such that \(|\widetilde{f}(0)-\widetilde{g}(0)|<1\), and we have \(d_{\infty}(f,g)=\|\widetilde{f}-\widetilde{g}\|\), where \(\|\cdot\|\) is the usual uniform norm on continuous functions. Because of this, when \(d_{\infty}(f,g)<1/2\), we will also write \(\|f-g\|\) for the distance \(d_{\infty}(f,g)\), and tacitly make computations in a local Euclidean chart. This also justifies the following definition.

**Definition 2.1**.: Let \(f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) be a circle homeomorphism such that \(d_{\infty}(f,\mathrm{id})<1/2\). We say that \(f\) is _positive_ if for every \(x\in\mathbb{S}^{1}\), we have \(f(x)\in(x,x+\frac{1}{2})\). When \(f^{-1}\) is positive, we say that \(f\) is _negative_. Given an interval \(I\subset\mathbb{S}^{1}\), we say that \(f\) is positive (respectively, negative) on \(I\) if the previous conditions hold simply for any \(x\in I\).

_Remark 2.2_.: Observe that the definition above implies that a positive (respectively, negative) homeomorphism does not have fixed points. Moreover, when \(f,g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) are positive, the composition \(fg\) is also positive, provided that \(d_{\infty}(fg,\mathrm{id})<1/2\). As we work with homeomorphisms having a prescribed bound on the number of fixed points, it is fundamental to have in mind that if \(f,g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) and \(x\in\mathbb{S}^{1}\) are such that \(f(x)=g(x)\), then \(g^{-1}f(x)=x\). When this occurs, we say that \(f\) and \(g\) _cross_. More specifically, given \(I\subset\mathbb{S}^{1}\), we say that \(f\) and \(g\) cross _in \(I\)_ if there exists \(x\in I\) such that \(f(x)=g(x)\).
Finally, we say that \(f\) and \(g\) cross _hyperbolically_ (respectively, parabolically) at \(x\), if \((f(x-\eta)-g(x-\eta))(f(x+\eta)-g(x+\eta))<0\) (respectively, \(>0\)) for any sufficiently small \(\eta>0\). We will be also interested in counting the number of points for which \(f(x)=g(x)\), so that we will say that \(f\) and \(g\) cross once, twice, \(3\) times, and so on. ### Semi-conjugacy of group actions on the circle The classical notion of semi-conjugacy in dynamical systems can be slightly extended to get an equivalence relation on group actions on the line and the circle. For a detailed discussion, we refer to the monograph by Kim, Koberda, and Mj [12], from which we borrow some terminology. Let \(G\) be a group, and \(\varphi,\psi:G\to\mathrm{Homeo}_{+}(\mathbb{R})\) two homomorphisms. We say that \(\varphi\) and \(\psi\) are _conjugate_ if there exists \(h\in\mathrm{Homeo}_{+}(\mathbb{R})\) which is \((\varphi,\psi)\)_-equivariant_: \(h(\varphi(g)(x))=\psi(g)(h(x))\) for any \(g\in G\) and \(x\in\mathbb{R}\). We say that \(\varphi\) and \(\psi\) are _semi-conjugate_ if there exists a non-decreasing map \(h:\mathbb{R}\to\mathbb{R}\) such that \(|h(x)|\to\infty\) as \(|x|\to\infty\), and which is \((\varphi,\psi)\)-equivariant. Similarly, when \(\varphi,\psi:G\to\mathrm{Homeo}_{+}(\mathbb{S}^{1})\), we say that \(\varphi\) and \(\psi\) are conjugate if there exists a \((\varphi,\psi)\)-equivariant circle homeomorphism. However, for extending the notion of semi-conjugacy to this situation, we have to pass through actions on the line: using the cyclic central extension \[1\to\mathbb{Z}\to\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\to\mathrm{Homeo}_{+}( \mathbb{S}^{1})\to 1,\] we say that \(\varphi,\psi:G\to\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) are semi-conjugate if we can find a cyclic central extension \[1\to\mathbb{Z}\to\widetilde{G}\to G\to 1,\] and semi-conjugate homomorphisms \(\widetilde{\varphi},\widetilde{\psi}:\widetilde{G}\to\mathrm{Homeo}_{\mathbb{ Z}}(\mathbb{R})\) (in the previous sense), which send any integer \(n\in\mathbb{Z}\subset\widetilde{G}\) to the translation by \(n\), and lift the homomorphisms \(\varphi\) and \(\psi\), respectively: in short, the following diagram must commute With abuse of terminology, we will also say that the subgroups \(\varphi(G)\), \(\psi(G)\) are (semi-)conjugate, and more generally, we say that two subgroups of homeomorphisms \(G_{1}\) and \(G_{2}\) are (semi-)conjugate, if there exist a group \(G\) and surjective homomorphisms \(\varphi_{i}:G\to G_{i}\) (for \(i\in\{1,2\}\)) which are (semi-)conjugate. ## 3. Elementary groups ### Basic results Recall from the introduction that a subgroup \(G\) of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is _elementary_ if its action preserves a Borel probability measure. Recall from the introduction that an elementary subgroup \(G\leq\operatorname{Homeo}_{+}(\mathbb{S}^{1})\), either admits a finite orbit or is semi-conjugate to a subgroup of rotations. Indeed, given any point \(x\in\mathbb{S}^{1}\), we can compute the rotation number \(\operatorname{rot}(g)\) for every element \(g\) in an elementary subgroup \(G\leq\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) using the invariant measure \(\mu\), and the result does not depend on the choices of the point \(x\) and the invariant measure \(\mu\): the map \(\operatorname{rot}:G\to\mathbb{S}^{1}\) defined by \(\operatorname{rot}(g)=\mu[x,g(x))\), is in fact a homomorphism. 
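Indeed, for any \(g,h\in G\) and any \(x\in\mathbb{S}^{1}\), additivity of arc measure modulo \(1\) gives \[\operatorname{rot}(gh)=\mu[x,gh(x))\equiv\mu[x,h(x))+\mu[h(x),gh(x))\equiv\operatorname{rot}(h)+\operatorname{rot}(g)\pmod{1},\] where the last congruence uses that \(\mu[y,g(y))\) does not depend on the base point \(y\), by invariance of \(\mu\).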
As a consequence we have a homomorphism \(G\to\operatorname{SO}(2)\) defined by \(g\mapsto R_{\operatorname{rot}(g)}\), which semi-conjugates \(G\) to a group of rotations. We have the following basic result. **Lemma 3.1**.: _Let \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a subgroup with invariant Borel probability measure \(\mu\) on \(\mathbb{S}^{1}\). Then, the kernel of \(\operatorname{rot}\) fixes \(\operatorname{supp}(\mu)\) pointwise._ Proof.: Take \(x\in\operatorname{supp}(\mu)\). If \(g\in G\) is such that \(g(x)\neq x\), then \(\mu[x,g(x))\neq 0\) and thus \(\operatorname{rot}(g)\neq 0\). Let us give a more precise statement in the case when the elementary subgroup preserves an atomless Borel probability measure, and it has at most \(N\) fixed points. **Lemma 3.2**.: _If a subgroup \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) with at most \(N\) fixed points, preserves an atomless Borel probability measure \(\mu\), then \(G\) is semi-conjugate to a group of rotations and, moreover, the corresponding morphism \(G\to\operatorname{SO}(2)\) is injective._ _In particular, \(G\) is isomorphic to a subgroup of \(\operatorname{SO}(2)\)._ Proof.: As explained before, the morphism \(g\mapsto R_{\operatorname{rot}(g)}\) gives a semi-conjugacy to an action by rotations. If there was an element in the kernel, by Lemma 3.1, it would fix the support of \(\mu\), which is infinite. As we are assuming that \(G\) acts with at most \(N\) fixed points, this gives that the kernel is trivial, as desired. ### Mobius-like elementary groups Here we discuss the first result, namely Theorem A, which states that elementary, Mobius-like subgroups of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) are semi-conjugate to subgroups of \(\operatorname{PSL}(2,\mathbb{R})\). Proof of Theorem A.: Let \(\nu\) be a Borel probability measure preserved by the action of \(G\) on the circle. Consider the homomorphism \(\operatorname{rot}:G\to\operatorname{SO}(2)\) defined by \(g\mapsto R_{\operatorname{rot}(g)}\). Now, since each non-trivial element of \(G\) fixes at most \(2\) points, by Lemmas 3.1 and 3.2, one can notice that if the support \(\operatorname{supp}(\nu)\) has more than \(2\) points then the kernel of \(\operatorname{rot}\) is trivial and \(\operatorname{rot}:G\to\operatorname{SO}(2)\) defines an isomorphism between \(G\) and a subgroup of \(\operatorname{PSL}(2,\mathbb{R})\). Thus, from now on, we can assume that the action of \(G\) has a finite orbit, which is either a fixed point or a pair of points. Assume first that there is a unique global fixed point for \(G\). Then, by Solodov's theorem (Theorem 1.2), \(G\) is semi-conjugate to a subgroup of affine transformations, and the corresponding morphism \(G\to\operatorname{Aff}_{+}(\mathbb{R})<\operatorname{PSL}(2,\mathbb{R})\) is injective. Assume next that \(G\) has two global fixed points \(p,q\in\mathbb{S}^{1}\), and let \(I_{\lambda}\) and \(I_{\rho}\) be the connected components of \(\mathbb{S}^{1}\smallsetminus\{p,q\}\). Denote by \(\lambda:G\to\operatorname{Homeo}_{+}(I_{\lambda})\) and \(\rho:G\to\operatorname{Homeo}_{+}(I_{\rho})\) the actions of \(G\) restricted to \(I_{\lambda}\) and \(I_{\rho}\), respectively. After Theorem 1.2, they are both semi-conjugate to actions by translations, and moreover the corresponding induced homomorphisms \(G\to\mathbb{R}\) and \(G\to\mathbb{R}\) are injective. In particular, \(G\) is free abelian. If \(G\simeq\mathbb{Z}\), choose a generator \(g\in G\). 
As \(G\) is Mobius-like, \(g\) is conjugate to a hyperbolic element in \(\operatorname{PSL}(2,\mathbb{R})\), giving the desired conclusion. For arbitrary \(G\), fix a non-trivial element \(f\in G\), and replacing \(f\) by \(f^{-1}\) if necessary, we assume that \(f(x)>x\) for every \(x\in I_{\rho}\) and \(f(x)<x\) for every \(x\in I_{\lambda}\). Since both \(\lambda(G)\) and \(\rho(G)\) are semi-conjugate to actions by translations, we can choose a semi-conjugacy such that \(\lambda(f)\) is sent to the translation by \(x\mapsto x-1\) and \(\rho(f)\) is sent to the translation by \(x\mapsto x+1\). Now, for every other element \(g\in G\), we have \(\lambda(g)\) and \(\rho(g)\) are sent to translations \(x\mapsto x+\alpha\) and \(x\mapsto x+\beta\). We claim that \(\alpha+\beta=0\). Indeed, if \(\alpha+\beta\neq 0\) then there exists an integer \(N\in\mathbb{Z}\) such that \(N(\alpha+\beta)>1\) and so, there exists a second integer \(M\in\mathbb{Z}\) such that \(N\alpha>M>N(-\beta)\). Therefore, \(N\alpha-M>0\) and \(N\beta+M>0\) and so the element \(g^{N}f^{M}\in G\) is sent, by the semi-conjugacy, to positive translations on both sides of the circle. We conclude that the element \(g^{N}f^{M}\) is sent by the semi-conjugacy to a circle homeomorphism with two parabolic fixed points which is an absurd, because \(G\) is Mobius-like. Therefore, for any element \(g\in G\), \(\lambda(g)\) and \(\rho(g)\) are sent to translations \(x\mapsto x+\alpha\) and \(x\mapsto x-\alpha\), hence defining an element of \(\operatorname{PSL}(2,\mathbb{R})\). We conclude that the group \(G\) is sent, by semi-conjugacy, to a subgroup of \(\operatorname{PSL}(2,\mathbb{R})\), and the corresponding homomorphism is injective. For the last case, we will assume that \(G\) has a finite orbit of order \(2\), and denote by \(\nu\) the corresponding invariant probability measure. After the previous discussion, we have a short exact sequence \[1\to G_{0}\to G\to\mathbb{Z}_{2}\to 1,\] where we write \(G_{0}=\ker(\operatorname{rot}).\), By the previous case (two global fixed points), \(G_{0}\) is isomorphic to a subgroup of \(\mathbb{R}\). Let \(a\in G\) be an element of the group with \(\operatorname{rot}(a)=1/2\). Observe that if \(a^{2}\neq\operatorname{id}\) then \(a^{2}\) fixes the \(2\) atoms of \(\nu\) and no further point, so they are both parabolic fixed points. This contradicts the Mobius-like assumption. Thus, we must have \(a^{2}=\operatorname{id}\). Therefore the exact sequence splits, so that \(G\simeq G_{0}\rtimes_{A}\mathbb{Z}_{2}\), where \(A\) is the involution defined by the conjugacy by \(a\). **Claim 1**.: _We have \(A=-\operatorname{id}\), thus \(G\simeq G_{0}\rtimes_{-\operatorname{id}}\mathbb{Z}_{2}\) is semi-conjugate to a subgroup of \(\operatorname{PSL}(2,\mathbb{R})\), and the corresponding homomorphism \(G\to\operatorname{PSL}(2,\mathbb{R})\) is injective._ Proof of claim.: Take a non-trivial element \(f\in G_{0}\), and consider the subgroup \(H=\langle f,afa^{-1}\rangle\). Let us write \(g:=afa^{-1}\). Note that \(H\) is a free abelian group of rank at most \(2\). Assume first that \(H\simeq\mathbb{Z}\), so that we can find \(n,m\in\mathbb{Z}\) such that \(\gcd(n,m)=1\) and \(f^{n}=g^{m}\). Conjugating by \(a\) this equality, we have \(g^{n}=f^{m}\). Therefore, writing a Bezout identity \(1=rn+sm\), we have \(f=f^{rn}f^{sm}=g^{rm}g^{sn}\), and therefore \(f^{n}=(g^{rm}g^{sn})^{n}=g^{m}\). As \(\gcd(n,m)=1\), we must have \(n\in\{\pm 1\}\). 
Exchanging the roles of \(f\) and \(g\), we also get \(m\in\{\pm 1\}\). If \(f=gafa^{-1}\), we then have that \(f\) is centralized by a homeomorphism exchanging the two fixed points, so that both fixed points must have the same dynamical nature, hence parabolic, contradicting the Mobius-like assumption. So it can only be \(f^{-1}=g=afa^{-1}\), as we want. It is clear that the corresponding homomorphism \(G\to\operatorname{PSL}(2,\mathbb{R})\) is injective. It remains to rule out the case \(H\simeq\mathbb{Z}^{2}\). After the discussion for groups with two global fixed points, we can assume that \(f\) acts by translations \(x\mapsto x-\alpha\) and \(x\mapsto x+\alpha\) respectively on \(I_{\lambda}\) and \(I_{\rho}\), and similarly \(g:=afa^{-1}\) acts (up to the same semi-conjugacy) by translations \(x\mapsto x-\beta\) and \(x\mapsto x+\beta\), respectively, on the same intervals. Note that the ratio \(\alpha/\beta\) is a conjugacy invariant for these translation actions. As \(f\) and \(g\) are conjugate one to the other by \(a\), this gives \(\frac{\alpha}{\beta}=\frac{\beta}{\alpha}\). Therefore \(\alpha\) and \(\beta\) are not rationally independent, contradicting the fact that \(H\simeq\mathbb{Z}^{2}\). The claim concludes the proof. ### A group of smooth diffeomorphisms which is not isomorphic to any Mobius group Here we give an example of group of \(C^{\infty}\) circle diffeomorphisms, with at most \(2\) fixed points, but which is not isomorphic to any subgroup of \(\operatorname{PSL}(2,\mathbb{R})\). Proof of Theorem B.: For the construction of this example we will consider the following maps with respect to the projective coordinates of the circle: fix \(\lambda,\mu>1\) such that \(\log\lambda\) and \(\log\mu\) are linearly independent over \(\mathbb{Q}\), and set \[f(x)=\left\{\begin{array}{ll}\lambda x&\text{ for }x\in[0,\infty],\\ \mu x&\text{ for }x\in[\infty,0].\end{array}\right.\] For convenience, set \(g=R_{\frac{1}{2}}fR_{\frac{1}{2}}\). It is clear that \(f\) and \(g\) generate a rank \(2\) abelian free group. Moreover, conjugation by the rotation \(R_{\frac{1}{2}}\) defines an action of \(\mathbb{Z}_{2}\) on such \(\mathbb{Z}^{2}\) given by the matrix \[A=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\] (with respect to the basis \(f,g\)). In other terms, the group \(G=\langle f,R_{\frac{1}{2}}\rangle\) is isomorphic to the semi-direct product \(\mathbb{Z}^{2}\rtimes_{A}\mathbb{Z}_{2}\). By construction, one can observe that \(G\) acts with at most \(2\) fixed points. **Claim**.: _The group \(G\) is not isomorphic to any subgroup of \(\operatorname{PSL}(2,\mathbb{R})\)._ Proof of claim.: By contradiction, let us assume \(G\) is isomorphic to some subgroup \(H\leq\operatorname{PSL}(2,\mathbb{R})\). Then \(H\simeq\mathbb{Z}^{2}\rtimes_{A}\mathbb{Z}_{2}\) is virtually abelian, and therefore its action on \(\mathbb{S}^{1}\) preserves a Borel probability measure. We conclude that \(H\) is elementary and Mobius-like. In particular, it satisfies the assumptions of Theorem A. Let us go through the cases discussed in its proof. If \(H\) preserves an atomless Borel probability measure, that it must be abelian by Lemma 3.2, but this is a contradiction. In the case \(H\) has fixed points, then by Solodov's theorem, \(H\) is isomorphic to a subgroup of \(\operatorname{Aff}_{+}(\mathbb{R})\), and therefore torsion free. This is again a contradiction. 
Therefore we are left with the case of order-\(2\) orbit, so by Claim 1, we have \(H\simeq\mathbb{Z}^{2}\rtimes_{-\operatorname{id}}\mathbb{Z}_{2}\), which contradicts the fact that \(H\simeq\mathbb{Z}^{2}\rtimes_{A}\mathbb{Z}_{2}\). Finally, conjugating \(G\) by a suitable \(C^{\infty}\) homeomorphism which is infinitely flat at \(0\) and \(\infty\), we can embed \(G\) into \(\operatorname{Diff}^{\infty}(\mathbb{S}^{1})\). This proves Theorem B. _Remark 3.3_.: An example of a _minimal_ finitely generated group of circle homeomorphisms, with at most \(2\) fixed points, and which is not isomorphic to any subgroup of \(\operatorname{PSL}(2,\mathbb{R})\), can be built by taking a free product of the group \(G\) from Theorem B with itself, obtained by blowing-up a free orbit. This kind of construction is inspired by Kovacevic's work [13], and it is detailed in the second author's PhD thesis: indeed, let \(G_{1}\) and \(G_{2}\) be two copies of the group \(G\), considered as acting on distinct circles \(\Gamma_{1}\) and \(\Gamma_{2}\), and choose any two points \(x\in\Gamma_{1}\) and \(y\in\Gamma_{2}\) with trivial stabilizer in \(G_{1}\) and \(G_{2}\), respectively. Then, [4, Theorem D] gives that the _amalgamated product_\((G_{1},x)\star(G_{2},y)\) (properly defined after [4, Theorem 4.7]; note that in this case it is simply a _free_ product) is a subgroup of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) acting minimally, and with at most \(2\) fixed points. ## 4. Non-locally discrete groups with at most \(N\) fixed points In this section we discuss Theorem C, about non-locally discrete groups with at most \(N\) fixed points. We start this section with an example that justifies our definition of non-local discreteness (Definition 1.4). **Proposition 4.1**.: _There exists a finitely generated non-elementary subgroup \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) with at most 2 fixed points, which is not Mobius-like, and for which there exists a wandering interval \(I\subset\mathbb{S}^{1}\) such that the action of \(G\) restricted to \(I\) is non-discrete._ Proof.: First, we will construct a non-elementary subgroup of \(\operatorname{PSL}(2,\mathbb{R})\) with the stabilizer of a point \(p\in\mathbb{S}^{1}\) being parabolic and isomorphic to \(\mathbb{Z}^{2}\). For this, let \(T_{\alpha},T_{\beta}\) be two parabolic elements of \(\operatorname{PSL}(2,\mathbb{R})\) fixing the same point \(p\in\mathbb{S}^{1}\), such that the subgroup \(T=\langle T_{\alpha},T_{\beta}\rangle\) is free abelian of rank \(2\). Using [12, Lemma 3.4], we can find a countable subset \(D\subset\operatorname{SO}(2)\), such that, for every rotation \(R_{\rho}\in\operatorname{SO}(2)\smallsetminus D\), it follows that \(\langle T,R_{\rho}\rangle\simeq T*R_{\rho}\). So, we take a rotation \(R_{\rho}\in\operatorname{SO}(2)\smallsetminus D\), and set \(F=\langle T,R_{\rho}\rangle\). Observe that the stabilizer of the point \(p\) has not changed, that is \(\operatorname{Stab}(F,p)=\operatorname{Stab}(T,p)=T\). Indeed, for every element \(g\in\operatorname{PSL}(2,\mathbb{R})\) that fixes the point \(p\), we have that \(gT_{\alpha}g^{-1}\) commutes with \(T_{\alpha}\), so that if \(\operatorname{Stab}(F,p)\neq T\), we could find an element \(g\in F\smallsetminus T\) such that \([gT_{\alpha}g^{-1},T_{\alpha}]=\operatorname{id}\). This is not possible after our choice of \(R_{\rho}\notin D\). 
Finally, note that \(F\) is a non-elementary subgroup of \(\operatorname{PSL}(2,\mathbb{R})\) with a parabolic stabilizer \(T\) of the point \(p\) isomorphic to \(\mathbb{Z}^{2}\). For the second step of the construction, we _blow-up_ the action of \(F\) at the orbit of \(p\) (see for instance Kim and Koberda [11]), in such a way that the action of the stabilizer \(T\) on the interval \(I\) inserted at \(p\) is a minimal action by translations. We call \(G\) the resulting subgroup of \(\operatorname{Homeo}_{+}(\mathbb{S}^{1})\), which is abstractly isomorphic to \(F\simeq\mathbb{Z}^{2}*\mathbb{Z}\). We can choose the translation action of \(T\) on \(I\) in such a way that there are elements in \(G\) with two parabolic fixed points. It is not difficult to verify that any non-trivial element of \(G\) has at most \(2\) fixed points. Therefore, \(G\) is a non-elementary group of circle homeomorphisms, with at most \(2\) fixed points, and whose action restricted to \(I\) is non-discrete.

It turns out that non-discreteness and non-local discreteness are equivalent for non-elementary subgroups. To see this, we need a fundamental structural result for non-elementary subgroups. To state it, we say that the action of a subgroup \(G\leq\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is _proximal_ if for all non-empty open intervals \(I,J\subset\mathbb{S}^{1}\), there exists an element \(g\in G\) such that \(g(I)\subset J\). Note that if the action is proximal, then it is automatically minimal. We also say that the action is proximal _in restriction to the minimal invariant subset_ if the previous statement holds only for intervals \(J\subset\mathbb{S}^{1}\) which are non-wandering (that is, intersecting the minimal invariant subset). Note that if \(G\) is proximal (in restriction to the minimal invariant subset), then it is automatically non-elementary. The following fundamental result can be deduced from the work of Antonov [2] (see also Ghys [9]).

**Theorem 4.2** (Antonov).: _Let \(G\leq\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary subgroup. Then there exists a finite order element \(\gamma\in\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) which commutes with every \(g\in G\), and such that the induced action of \(G\) on the quotient \(\mathbb{S}^{1}/\langle\gamma\rangle\) is proximal in restriction to the minimal invariant subset._

**Lemma 4.3**.: _If \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) is a non-elementary, non-locally discrete subgroup, then \(G\) is non-discrete._

_Moreover, if \(G\) has at most \(N\) fixed points, its action on the circle is minimal._

Proof.: Since \(G\) is non-elementary, by Theorem 4.2, there exists a finite order homeomorphism \(\gamma:\mathbb{S}^{1}\to\mathbb{S}^{1}\) centralizing \(G\) and such that the induced action of \(G\) on \(\mathbb{S}^{1}/\langle\gamma\rangle\) is proximal in restriction to the minimal invariant subset. As \(G\) is non-locally discrete, the same holds for the action on \(\mathbb{S}^{1}/\langle\gamma\rangle\). Conversely, if we show that the action of \(G\) on \(\mathbb{S}^{1}/\langle\gamma\rangle\) is non-discrete, then the same will hold for the original action. Therefore, from now on, we will assume that the action of \(G\) on \(\mathbb{S}^{1}\) is proximal in restriction to the minimal invariant subset.
Since \(G\) is non-locally discrete, there exists a non-empty open interval \(I\subset\mathbb{S}^{1}\) which intersects the minimal set \(\Lambda\) of \(G\), and a sequence of non-trivial elements \((g_{n})_{n\in\mathbb{N}}\subset G\), such that \(g_{n}|_{I}\to\operatorname{id}|_{I}\). Fix \(\eta\in(0,\frac{1}{2})\); by proximality of the action, we can find an element \(f\in G\) with a repelling fixed point in \(I\) and such that \(\lim_{m\to\infty}|f^{m}(I)|>1-\frac{\eta}{2}\). Now, to show that the action of \(G\) is non-discrete, we consider the following family of sequences, indexed by \(m\in\mathbb{N}\): \[(h_{m,n})_{n\in\mathbb{N}}:=(f^{m}g_{n}f^{-m})_{n\in\mathbb{N}}\subset G.\] Note that for fixed \(m\in\mathbb{N}\), the sequence \((h_{m,n})_{n}\) converges to \(\operatorname{id}\) in restriction to \(f^{m}(I)\). Indeed, choose \(m\in\mathbb{N}\) such that \(|f^{m}(I)|>1-\frac{\eta}{2}\), and then choose \(n\in\mathbb{N}\) such that \(|h_{m,n}(x)-x|<\frac{\eta}{2}\) for every \(x\in f^{m}(I)\). As the complement of \(f^{m}(I)\) is of size \(<\frac{\eta}{2}\), we get that \(|h_{m,n}(x)-x|<\eta\), for every point \(x\in\mathbb{S}^{1}\). As \(\eta\) was arbitrary, we conclude that \(G\) is non-discrete. For the second part of the statement, assume the action admits an invariant Cantor set \(\Lambda\subset\mathbb{S}^{1}\). For any given \(\varepsilon>0\), every element \(g\in G\) which is \(\varepsilon\)-close to the identity must fix every gap of \(\Lambda\) (that is, any connected component of the complement of \(\Lambda\)) whose size exceeds \(\varepsilon\). For sufficiently small \(\varepsilon\), this gives that \(g\) fixes more than \(N\) points, and thus \(g=\operatorname{id}\). **Lemma 4.4**.: _Let \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a non-discrete subgroup with at most \(N\) fixed points. Then, there exists a sequence of fixed-point-free elements in \(G\) converging to the identity._ Proof.: As in the previous proof, and because of Lemma 4.3, we can assume that \(G\) is proximal, as the desired property is preserved when considering finite central extensions. Assuming this, we remark that we can take a sequence of non-trivial elements converging to the identity with localized fixed points. **Claim 1**.: _For every interval \(J\subset\mathbb{S}^{1}\) and every \(\varepsilon>0\), there exists an element \(g\in G\) which is \(\varepsilon\)-close to the identity and has no fixed points in the interval \(J\)._ Proof of claim.: Let \((g_{n})_{n\in\mathbb{N}}\subset G\) be a sequence of non-trivial elements such that \(g_{n}\to\mathrm{id}\). If \((g_{n})_{n\in\mathbb{N}}\subset G\) contains a subsequence without fixed points, there is nothing to do. Otherwise, by taking a subsequence, we can assume that the fixed points of \(g_{n}\) are converging to points \(p_{1},\ldots,p_{M}\in\mathbb{S}^{1}\), for some \(M\leq N\). Let \(J\subset\mathbb{S}^{1}\) be any closed interval. By proximality of \(G\), there exists an element \(k\in G\) which sends \(p_{1},\ldots,p_{M}\) to the complement of \(J\). Now, by choosing \(g_{n}\) close enough to the identity, we can assume that \(kg_{n}k^{-1}\in G\) is \(\varepsilon\)-close to the identity, and that this element only fixes points which are in the complement of the interval \(J\). Throughout the rest of the proof, we use the distance \(d_{\infty}\) on \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) to quantify how close elements are. Recall that this can be locally computed by using the uniform norm \(\|\cdot\|\). 
We assume by contradiction that for a fixed (small) \(\varepsilon>0\), every element \(\varepsilon\)-close to the identity has fixed points. After Claim 1, we can define recursively a sequence of nested intervals \((J_{n})\subset\mathbb{S}^{1}\) with the following properties. 1. For any \(n\in\mathbb{N}\), there exists an element \(g_{n}\in G\) such that \(\|g_{n}-\mathrm{id}\|\leq\frac{\varepsilon}{2}\) and \(g_{n}\) is positive on the complement of \(J_{n}\), with \(g_{n}(J_{n})=J_{n}\). 2. The sequence \((J_{n})\) is shrinking to a point \(p\in\mathbb{S}^{1}\), namely \(\bigcap_{n}J_{n}=\{p\}\). **Claim 2**.: _For any \(n\in\mathbb{N}\) such that \(|J_{n}|<1-\frac{\varepsilon}{2}\), there exists \(m\in\mathbb{N}\) such that \(\frac{\varepsilon}{4}<\|g_{n}^{m}-\mathrm{id}\|\leq\frac{\varepsilon}{2}\)._ Proof of claim.: Indeed, if we assume that \(g_{n}\) is \(\frac{\varepsilon}{4}\)-close to the identity (otherwise \(m=1\) works), since \(g_{n}\) has fixed points only in \(J_{n}\), for a sufficiently large power \(m\in\mathbb{N}\), the distance of \(g_{n}^{m}\) to the identity will be larger than \(\frac{\varepsilon}{2}\). Therefore, there exists \(m_{0}\in\mathbb{N}\) such that \(g_{n}^{m_{0}}\) is not \(\frac{\varepsilon}{2}\)-close to the identity, but \(g_{n}^{m}\) is \(\frac{\varepsilon}{2}\)-close to the identity for every \(0\leq m<m_{0}\). The point is that \(g_{n}^{m_{0}-1}\) is \(\frac{\varepsilon}{2}\)-close to the identity, but it is not \(\frac{\varepsilon}{4}\)-close: indeed, there exists \(x\in\mathbb{S}^{1}\) such that \[\frac{\varepsilon}{2}<|g_{n}^{m_{0}}(x)-x|<\left|g_{n}\left(g_{n}^{m_{0}-1}(x) \right)-g_{n}^{m_{0}-1}(x)\right|+|g_{n}^{m_{0}-1}(x)-x|<\frac{\varepsilon}{4} +|g_{n}^{m_{0}-1}(x)-x|.\] This proves the claim. From now on, after Claim 2, we can and will assume that \(\frac{\varepsilon}{4}<\|g_{n}-\mathrm{id}\|\leq\frac{\varepsilon}{2}\). Hence, when \(n\) is sufficiently large so that \(|J_{n}|<\frac{\varepsilon}{4}\), we can find a point \(x_{n}\in\mathbb{S}^{1}\smallsetminus J_{n}\) such that \(g_{n}(x_{n})>x_{n}+\frac{\varepsilon}{4}\). Consider the interval \(I_{n}\subset\mathbb{S}^{1}\) defined by \(I_{n}:=(x_{n}+\frac{\varepsilon}{12},x_{n}+\frac{\varepsilon}{6})\). After passing to a subsequence, we can assume that \(x_{n}\) converges to a point \(x\in\mathbb{S}^{1}\), and for \(n_{0}\in\mathbb{N}\) large enough we have that \(I:=\bigcap_{n\geq n_{0}}I_{n}\) is a non-trivial interval. With such choices, for every \(y\in I\) and \(n\geq n_{0}\) we have (4.1) \[g_{n}(y)>g_{n}(x_{n})>x_{n}+\frac{\varepsilon}{4}>y+\frac{\varepsilon}{12}.\] On the other hand, for every \(n\in\mathbb{N}\), the interval \(J_{n}\) is disjoint from the interval \([x_{n},x_{n}+\frac{\varepsilon}{4}]\), so that by choosing \(n_{1}\geq n_{0}\) such that \(|x_{n}-x|<\frac{\varepsilon}{48}\) for any \(n\geq n_{1}\), we have that the intervals \(I\) and \(J_{n}\) are at least \(\frac{\varepsilon}{24}\)-apart for any such \(n\). We deduce that the union \(J:=\bigcup_{n\geq n_{1}}J_{n}\) and \(I\) are at least \(\frac{\varepsilon}{24}\)-apart. After Claim 1, we can take an element \(f\in G\) which is \(\frac{\varepsilon}{2}\)-close to the identity, and negative on the complement of \(I\). As \(J\) and \(I\) are separated, we can find \(\delta>0\) such that \(f(y)<y-\delta\) for every \(y\in J\). 
Take \(m>n_{1}\) sufficiently large such that \(|J_{m}|<\delta\); we claim that the element \(f^{-1}g_{m}\in G\) is \(\varepsilon\)-close to the identity and has no fixed points on the circle. Indeed, since \(f\) is negative on the complement of \(I\) and \(g_{m}\) is positive on the complement of \(J_{m}\), it is clear that \(g_{m}\) does not cross \(f\) in the complement of \(J_{m}\cup I\). Now, we have \(|I|\leq|I_{m}|=\frac{\varepsilon}{12}\), so that from the inequality (4.1) we deduce that \(g_{m}\) does not cross \(f\) in \(I\). Similarly, the size of the interval \(J_{m}\) is smaller than \(\delta\) and \(f(y)<y-\delta\) for every \(y\in J\supset J_{m}\), which implies that \(g_{m}\) does not cross \(f\) in \(J_{m}\). Therefore, \(g_{m}\) does not cross \(f\) anywhere on the circle \(\mathbb{S}^{1}\), which implies that the element \(f^{-1}g_{m}\) has no fixed points in \(\mathbb{S}^{1}\). Finally, as \(f^{-1}g_{m}\) is the composition of two elements \(\frac{\varepsilon}{2}\)-close to the identity, this element is \(\varepsilon\)-close to the identity, as desired. Given a subgroup \(G\leq\mathrm{Homeo}_{+}(\mathbb{S}^{1})\), we denote by \(\overline{G}\) its closure in \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) with respect to the \(C^{0}\) topology, which is still a subgroup of \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\). **Lemma 4.5**.: _Let \(G<\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary, non-discrete subgroup with at most \(N\) fixed points. Then, its closure \(\overline{G}\) has at most \(N\) fixed points._ Proof.: As in the previous proof, we will assume that \(G\) is proximal, since, when passing to a finite central extension, the number of fixed points is essentially multiplied by the degree of the extension. We then remark that any \(f\in\overline{G}\) has at most \(\lfloor\frac{N}{2}\rfloor\) components \(I\) of \(\mathrm{supp}(f):=\mathbb{S}^{1}\smallsetminus\mathrm{Fix}(f)\) for which \(f(x)>x\) for every \(x\in I\). Indeed, arguing by contradiction, by Lemma 4.4 we can take an element \(g\in G\), sufficiently close to the identity and without fixed points, so that \(g\) crosses the element \(f\) hyperbolically at least \(2\left(\lfloor\frac{N}{2}\rfloor+1\right)\geq N+1\) times. As this is an open condition, we can find an element \(f_{0}\in G\) sufficiently close to \(f\), which also crosses \(g\) hyperbolically at least \(N+1\) times. This contradicts the assumption that \(G\) has at most \(N\) fixed points. Repeating the argument for \(f^{-1}\), we deduce that \(\mathrm{supp}(f)\) has at most \(2\lfloor\frac{N}{2}\rfloor\leq N\) connected components, and so does \(\mathrm{Fix}(f)\). Assume now that \(\overline{G}\) contains a non-trivial element \(f\) such that \(\mathrm{Fix}(f)\) contains a non-empty open interval \(I\). Among all such elements, choose \(f\in\overline{G}\) for which the number \(C\) of connected components of \(\mathrm{supp}(f)\) is maximal. As we are assuming that \(G\) is proximal, we can find an element \(h\in G\) such that \(\overline{h(\mathrm{supp}(f))}\subset I\). Then the element \(f^{\prime}=fhfh^{-1}\in\overline{G}\) is such that \(\mathrm{supp}(f^{\prime})\) has \(2C>C\) connected components, contradicting maximality. Hence, for every non-trivial \(f\in\overline{G}\), the closed set \(\mathrm{Fix}(f)\) has empty interior and at most \(N\) connected components, so it consists of at most \(N\) points. The next result we need is very general. **Lemma 4.6**.: _Let \(f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) be a circle homeomorphism of order \(q\geq 2\), with rotation number \(\mathrm{rot}(f)=\frac{p}{q}\), where \(p,q\in\mathbb{Z}\smallsetminus\{0\}\). 
Then, for every \(\varepsilon>0\) there exists \(\delta>0\) such that for every circle homeomorphism \(g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) which is positive and \(\delta\)-close to the identity, it follows that_ \[d_{\mathcal{C}^{0}}(f,gf)<\varepsilon\ \text{ and }\ \mathrm{rot}(gf)\in\left( \frac{p}{q},\frac{p}{q}+\frac{1}{q^{3}}\right].\] Proof.: We fix \(\varepsilon>0\) and \(\delta_{1}>0\) such that \(d_{\mathcal{C}^{0}}(f,gf)<\varepsilon\) for every circle homeomorphism \(g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) which is \(\delta_{1}\)-close to the identity. Let \(\tilde{f}\in\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) be the lift of \(f\) such that \(\tilde{f}(0)\in(0,1)\). Take \(\delta_{2}>0\) such that for every positive homeomorphism \(\widetilde{g}\in\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) which is \(\delta_{2}\)-close to the identity (on \(\mathbb{R}\)) we have that \[\left(\widetilde{g}\tilde{f}\right)^{q}(x)\in\left(\widetilde{f}^{q}(x), \widetilde{f}^{q}(x)+\frac{1}{q^{2}}\right)\ \ \ \text{ for every }x\in\mathbb{R}. \tag{4.2}\] Now, from the assumption, we have that \(\widetilde{f}^{q}(x)=x+p\) for any \(x\in\mathbb{R}\), therefore we have \[\left(\widetilde{g}\tilde{f}\right)^{q}(x)\in\left(x+p,x+p+\frac{1}{q^{2}} \right).\] Thus, for any \(n\geq 2\) one has \[\left(\widetilde{g}\tilde{f}\right)^{nq}(x)\in\left(\left(\widetilde{g} \tilde{f}\right)^{(n-1)q}(x)+p,\left(\widetilde{g}\tilde{f}\right)^{(n-1)q}(x)+ p+\frac{1}{q^{2}}\right).\] On the other hand, one has \[\left(\left(\widetilde{g}\widetilde{f}\right)^{(n-1)q}(x)+p,\left(\widetilde{g} \widetilde{f}\right)^{(n-1)q}(x)+p+\frac{1}{q^{2}}\right)\subset\cdots\subset \left(x+np,x+np+\frac{n}{q^{2}}\right).\] We deduce that the rotation number of \(gf\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\), where \(g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) is the projection of \(\widetilde{g}\), satisfies \[\mathrm{rot}(gf)=\lim_{n\to+\infty}\frac{\left(\widetilde{g}\widetilde{f} \right)^{nq}(0)}{nq}\in\left[\frac{np}{nq},\frac{np+\frac{n}{q^{2}}}{nq}\right]= \left[\frac{p}{q},\frac{p}{q}+\frac{1}{q^{3}}\right].\] Next, we observe that \(\mathrm{rot}(gf)\neq\mathrm{rot}(f)\). Indeed, if \(\mathrm{rot}(gf)=\frac{p}{q}\) then there exists a point \(x\in\mathbb{S}^{1}\) such that \((gf)^{q}(x)=x\), which implies that \(\left(\widetilde{g}\widetilde{f}\right)^{q}(x)=x+p\), contradicting (4.2). We conclude that \(\mathrm{rot}(gf)\in\left(\frac{p}{q},\frac{p}{q}+\frac{1}{q^{3}}\right]\) for every positive circle homeomorphism \(g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) which is \(\delta_{2}\)-close to the identity. Taking \(\delta=\min\{\delta_{1},\delta_{2}\}\), we obtain the desired conclusion. **Lemma 4.7**.: _Let \(G<\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary, non-discrete subgroup with at most \(N\) fixed points. Then, \(\overline{G}\) contains an element with irrational rotation number._ Proof.: If the subgroup \(G<\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) has an element with irrational rotation number there is nothing to prove, because \(G\subset\overline{G}\). Therefore, we will suppose that \(G\) has no element with irrational rotation number. We want to construct a converging sequence \(\left(h_{n}\right)_{n\in\mathbb{N}}\subset G\) whose limit \(h\in\overline{G}\) has irrational rotation number. We start by choosing a sequence of elements \(\left(f_{n}\right)_{n\in\mathbb{N}}\subset G\) without fixed points and converging to the identity, whose existence is ensured by Lemma 4.4. 
After replacing \(f_{n}\) with \(f_{n}^{-1}\) when necessary and taking a subsequence, we can assume that \(\left(f_{n}\right)_{n\in\mathbb{N}}\) is a sequence of positive circle homeomorphisms whose distance to the identity decreases. Let us also fix a sequence \((\varepsilon_{n})_{n\in\mathbb{N}}\) of positive numbers such that \(\sum\varepsilon_{n}<1/2\). We choose \(h_{0}\in G\) to be the first element \(f_{m_{0}}\) such that \(d_{\mathcal{C}^{0}}(f_{m_{0}},\mathrm{id})<\varepsilon_{0}\). Now, let us assume by induction that \(h_{n}\in G\) is a positive circle homeomorphism such that \(d_{\mathcal{C}^{0}}(h_{n},\mathrm{id})<\sum_{k=0}^{n}\varepsilon_{k}\), with rational rotation number \[\mathrm{rot}(h_{n})=\frac{p_{n}}{q_{n}}\in\left(\frac{p_{n-1}}{q_{n-1}},\frac{ p_{n-1}}{q_{n-1}}+\frac{1}{q_{n-1}^{3}}\right].\] As \(G\) has at most \(N\) fixed points, whenever \(q_{n}>N\) (this condition is satisfied for any sufficiently large \(n\in\mathbb{N}\)), we must have \(h_{n}^{q_{n}}=\mathrm{id}\). By Lemma 4.6, there exists \(\delta_{n}>0\) such that for every positive circle homeomorphism \(g\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) which is \(\delta_{n}\)-close to the identity, we have that \(d_{\mathcal{C}^{0}}(h_{n},gh_{n})<\varepsilon_{n+1}\) and \(\mathrm{rot}(gh_{n})\in\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+\frac{1}{ q_{n}^{3}}\right]\). Then, for every \(n\in\mathbb{N}\), we define \(h_{n+1}\) as \(f_{m_{n}}h_{n}\in G\), where \(f_{m_{n}}\) is the first element of the sequence \(\left(f_{n}\right)_{n\in\mathbb{N}}\) such that \(d_{\mathcal{C}^{0}}(f_{m_{n}},\mathrm{id})<\delta_{n}\). This gives that \[d_{\mathcal{C}^{0}}(h_{n+1},\mathrm{id})\leq d_{\mathcal{C}^{0}}(h_{n+1},h_{n}) +d_{\mathcal{C}^{0}}(h_{n},\mathrm{id})<d_{\mathcal{C}^{0}}(f_{m_{n}}h_{n},h_{n} )+\sum_{k=0}^{n}\varepsilon_{k}<\sum_{k=0}^{n+1}\varepsilon_{k}.\] Since \(\sum_{k=0}^{n+1}\varepsilon_{k}<\frac{1}{2}\), \(h_{n+1}\in G\) is a positive circle homeomorphism (Remark 2.2). By our assumption on \(G\), the rotation number of \(h_{n+1}\) is rational, and we write \(\mathrm{rot}(h_{n+1})=\frac{p_{n+1}}{q_{n+1}}\), which belongs to \(\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+\frac{1}{q_{n}^{3}}\right]\) by construction. Hence, the inductive assumptions are satisfied. We next check that \((h_{n})\) is a Cauchy sequence with respect to the \(d_{\mathcal{C}^{0}}\)-distance (for which \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) is complete), so that it admits a limit \(h\in\overline{G}\). Indeed, for every \(n,m\in\mathbb{N}\) we have \[d_{\mathcal{C}^{0}}(h_{n+m},h_{n})\leq\sum_{k=n}^{n+m-1}d_{\mathcal{C}^{0}}(h_{k+ 1},h_{k})<\sum_{k=n}^{n+m-1}\varepsilon_{k+1}\leq\sum_{k=n}^{+\infty}\varepsilon_{k} \xrightarrow{n\to+\infty}0.\] It remains to prove that \(\operatorname{rot}(h)\notin\mathbb{Q}\). To see this, for every \(n\in\mathbb{N}\), consider the interval \(I_{n}=\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+\frac{1}{q_{n}^{2}}\right)\). As in the classical proof of Dirichlet's approximation theorem, the intervals \(I_{n}\) are nested and their intersection \(\bigcap I_{n}\) is a single irrational number. Let us detail this for completeness. First, we recall that \[\frac{p_{n+1}}{q_{n+1}}\in\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+\frac{1 }{q_{n}^{3}}\right],\] so that \(\frac{p_{n+1}}{q_{n+1}}>\frac{p_{n}}{q_{n}}\). Next, we want to prove that \(\frac{p_{n+1}}{q_{n+1}}+\frac{1}{q_{n+1}^{2}}<\frac{p_{n}}{q_{n}}+\frac{1}{q_{ n}^{2}}\). We will first show that \(q_{n+1}>q_{n}\). 
Indeed, if \(q_{n+1}\leq q_{n}\) then \[p_{n+1}\in\left(\frac{p_{n}q_{n+1}}{q_{n}},\frac{p_{n}q_{n+1}}{q_{n}}+\frac{q_ {n+1}}{q_{n}^{3}}\right]\subset\left(\frac{p_{n}q_{n+1}}{q_{n}},\frac{p_{n}q_ {n+1}}{q_{n}}+\frac{1}{q_{n}^{2}}\right],\] which is absurd, because \(\left(\frac{p_{n}q_{n+1}}{q_{n}},\frac{p_{n}q_{n+1}}{q_{n}}+\frac{1}{q_{n}^{2 }}\right]\) does not contain any integer. Now, we have the following inequality \[\frac{p_{n+1}}{q_{n+1}}+\frac{1}{q_{n+1}^{2}}\leq\frac{p_{n}}{q_{n}}+\frac{1} {q_{n}^{3}}+\frac{1}{q_{n+1}^{2}}\leq\frac{p_{n}}{q_{n}}+\frac{1}{q_{n}^{3}}+ \frac{1}{(q_{n}+1)^{2}}=\frac{p_{n}}{q_{n}}+\frac{q_{n}^{3}+(q_{n}+1)^{2}}{q_{ n}^{3}(q_{n}+1)^{2}}\] \[=\frac{p_{n}}{q_{n}}+\frac{1}{q_{n}^{2}}\frac{q_{n}^{3}+q_{n}^{2}+2q_{n}+1}{q_ {n}^{3}+2q_{n}^{2}+q_{n}}<\frac{p_{n}}{q_{n}}+\frac{1}{q_{n}^{2}}\frac{q_{n}^{ 3}+q_{n}^{2}+2q_{n}+1+(q_{n}^{2}-q_{n}-1)}{q_{n}^{3}+2q_{n}^{2}+q_{n}}=\frac{p _{n}}{q_{n}}+\frac{1}{q_{n}^{2}}.\] So, we conclude that \(I_{n+1}\subset I_{n}\) for every \(n\in\mathbb{N}\). On the other hand, \(\left(q_{n}\right)_{n\in\mathbb{N}}\) is an increasing sequence of integers, and therefore \(|I_{n}|=\frac{1}{q_{n}^{2}}\longrightarrow 0\). Hence, the nested intervals \(I_{n}\) shrink to a single point \(\alpha\in\mathbb{R}\). We claim that \(\alpha\) is not a rational number: otherwise \(\alpha=\frac{p}{q}\) with \(p,q\in\mathbb{Z}\), \(q\geq 1\), which implies that \(\frac{p}{q}\in I_{n}=\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+\frac{1}{q_ {n}^{2}}\right)\), for every \(n\in\mathbb{N}\). Therefore, for \(n\) sufficiently large, we have \(2q<q_{n}\) and then \[p\in\left(\frac{p_{n}q}{q_{n}},\frac{p_{n}q}{q_{n}}+\frac{q}{q_{n}^{2}}\right) \subset\left(\frac{p_{n}q}{q_{n}},\frac{p_{n}q}{q_{n}}+\frac{1}{2q_{n}}\right),\] which is absurd, because \(\left(\frac{p_{n}q}{q_{n}},\frac{p_{n}q}{q_{n}}+\frac{1}{2q_{n}}\right)\) does not contain any integer. Let us go back to our converging sequence \((h_{n})\). By construction we have \[\operatorname{rot}(h_{n+1})\in\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}}+ \frac{1}{q_{n}^{3}}\right]\subset\left(\frac{p_{n}}{q_{n}},\frac{p_{n}}{q_{n}} +\frac{1}{q_{n}^{2}}\right)=I_{n},\] so that \(\operatorname{rot}(h_{n+k})\in I_{n+k-1}\subset I_{n}\) for any \(k\geq 1\). Hence, by continuity of \(\operatorname{rot}:\operatorname{Homeo}_{+}(\mathbb{S}^{1})\to\mathbb{S}^{1}\), it follows that \(\operatorname{rot}(h)\in\overline{I_{n}}\) for every \(n\in\mathbb{N}\), and therefore \(\operatorname{rot}(h)=\alpha\) is irrational, as desired. **Lemma 4.8**.: _Let \(G<\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a non-elementary, non-discrete subgroup with at most \(N\) fixed points. Then, \(\overline{G}\) contains an element which is conjugate to an irrational rotation. Therefore, up to conjugacy, \(\overline{G}\) contains \(\operatorname{SO}(2)\)._ Proof.: We will prove that any \(f\in\overline{G}\) with irrational rotation number \(\alpha=\operatorname{rot}(f)\) (by Lemma 4.7, we can find such an element in \(\overline{G}\)) is conjugate to the rotation \(R_{\alpha}\). We argue by way of contradiction: if \(f\) is not conjugate to \(R_{\alpha}\), then its minimal invariant set is a Cantor set, which we denote by \(\Lambda\). Recall that a gap of \(\Lambda\) is a connected component of the complement \(\mathbb{S}^{1}\smallsetminus\Lambda\). Up to conjugating \(G\) by some circle homeomorphism, we can assume that the size of any gap does not exceed \(\frac{1}{4}\). After the discussion in Section 2, \(f\) is semi-conjugate to the rotation \(R_{\alpha}\). 
Denoting by \((q_{n})\) the sequence of denominators of rational approximations of \(\alpha\), we have \(R_{\alpha}^{q_{n}}\to\operatorname{id}\), which implies that for any \(x\in\Lambda\) which is not in the closure of any wandering interval, we have \(f^{q_{n}}(x)\to x\). (This is because any such \(x\) satisfies \(\{x\}=h^{-1}(h(x))\), where \(h:\mathbb{S}^{1}\to\mathbb{S}^{1}\) is a continuous monotone map such that \(hf=R_{\alpha}h\), giving the semi-conjugacy.) After the choice of the size of the gaps, we can take \(n_{0}\in\mathbb{N}\) sufficiently large so that \(d_{\infty}(f^{q_{n}},\operatorname{id})<1/2\) for any \(n\geq n_{0}\), and then take a subsequence \((q_{n_{j}})_{j\in\mathbb{N}}\subset(q_{n})_{n\geq n_{0}}\) such that \(f^{q_{n_{j}}}\) is positive for any \(j\in\mathbb{N}\). Take \(\varepsilon>0\) smaller than the size of the \(N\) largest gaps. By Lemma 4.4, we can choose a positive element \(g\in G\) which is \(\varepsilon\)-close to the identity. Note that for any \(j\in\mathbb{N}\) sufficiently large, \(g\) crosses \(f^{q_{n_{j}}}\) twice on a small neighborhood of any of the \(N\) largest gaps, hence at least \(2N\) times. This contradicts the fact that the element \(g^{-1}f^{q_{n_{j}}}\in\overline{G}\) has at most \(N\) fixed points (Lemma 4.5). So we conclude that any \(f\in\overline{G}\) with \(\operatorname{rot}(f)\notin\mathbb{Q}\) is conjugate to an irrational rotation, and therefore \(\overline{G}\) contains a conjugate copy of \(\operatorname{SO}(2)\). We finally use a result by Giblin and Markovic [10, Theorem 1.2]. (A self-contained argument for the conclusion in the case \(N=2\) is given in the second author's PhD thesis [4].) **Theorem 4.9** (Giblin-Markovic).: _Let \(G\leq\operatorname{Homeo}_{+}(\mathbb{S}^{1})\) be a closed transitive subgroup which contains a non-constant continuous path. Then we have the following alternative:_ 1. _either_ \(G\) _is conjugate to_ \(\operatorname{SO}(2)\)_, or_ 2. \(G\) _is conjugate to_ \(\operatorname{PSL}^{(k)}(2,\mathbb{R})\)_, for some_ \(k\geq 1\)_, or_ 3. \(G\) _is conjugate to_ \(\operatorname{Homeo}_{+}^{(k)}(\mathbb{S}^{1})\)_, for some_ \(k\geq 1\)_, where_ \(\operatorname{Homeo}_{+}^{(k)}(\mathbb{S}^{1})\) _is the group of all homeomorphisms commuting with the group of order_ \(k\) _rotations._ We can now put everything together and prove the main result of our work. Proof of Theorem C.: Let \(G\) be a non-elementary subgroup with at most \(N\) fixed points. If \(G\) is non-locally discrete, then after Lemma 4.3, \(G\) is non-discrete, so that by Lemma 4.8 its closure \(\overline{G}\) contains a conjugate copy of the subgroup of rotations \(\operatorname{SO}(2)\). In particular, \(\overline{G}\) is closed, transitive and contains a non-constant continuous path. As \(G\) is non-elementary, the first possibility in Theorem 4.9 cannot occur. On the other hand, after Lemma 4.5, \(\overline{G}\) has at most \(N\) fixed points, so that the third possibility in Theorem 4.9 cannot occur either. We conclude that \(\overline{G}\) is conjugate to \(\operatorname{PSL}^{(k)}(2,\mathbb{R})\), for some \(k\geq 1\), as desired. ### Acknowledgments This work is based on part of the second author's PhD thesis, and we thank the readers Andres Navas and Maxime Wolff for their careful reading and suggestions. We are particularly grateful to Maxime Wolff for pointing out that the result of Giblin and Markovic could shortcut our proof.
The authors acknowledge the support of the project MATH AMSUD, DGT - Dynamical Group Theory (22-MATH-03) and the project ANR Gromeov (ANR-19-CE40-0007), and the host department IMB receives support from the EIPHI Graduate School (ANR-17-EURE-0002). M.T. has been partially supported by the project ANER Agroupes (AAP 2019 Region Bourgogne-Franche-Comte).
2308.07801
Combinatorial QFT on graphs: first quantization formalism
We study a combinatorial model of the quantum scalar field with polynomial potential on a graph. In the first quantization formalism, the value of a Feynman graph is given by a sum over maps from the Feynman graph to the spacetime graph (mapping edges to paths). This picture interacts naturally with Atiyah-Segal-like cutting-gluing of spacetime graphs. In particular, one has combinatorial counterparts of the known gluing formulae for Green's functions and (zeta-regularized) determinants of Laplacians.
Ivan Contreras, Santosh Kandel, Pavel Mnev, Konstantin Wernli
2023-08-15T14:27:15Z
http://arxiv.org/abs/2308.07801v1
# Combinatorial QFT on graphs: first quantization formalism ###### Abstract. We study a combinatorial model of the quantum scalar field with polynomial potential on a graph. In the first quantization formalism, the value of a Feynman graph is given by a sum over maps from the Feynman graph to the spacetime graph (mapping edges to paths). This picture interacts naturally with Atiyah-Segal-like cutting-gluing of spacetime graphs. In particular, one has combinatorial counterparts of the known gluing formulae for Green's functions and (zeta-regularized) determinants of Laplacians. The work of K. Wernli was supported by the ERC SyG project, Recursive and Exact New Quantum Theory (ReNewQuantum), which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 810573. ###### Contents * 1 Introduction * 1.1 Motivation: first quantization formalism * 1.1.1 Version with 1d gravity * 1.1.2 Heuristics on locality in the first quantization formalism * 1.2 QFT on a graph. A guide to the paper * 2 Scalar field theory on a graph * 2.1 Functorial picture * 2.2 Aside: "QFT with corners" (or "extended QFT") picture * 2.2.1 Source \(n\)-category * 2.2.2 Target \(n\)-category * 2.2.3 The QFT functor * 3 Gaussian theory * 3.1 Gaussian theory on a closed graph * 3.3.1 Cutting a closed graph \(X\) * 3.3.2 Proof 1 ("functional integral approach") * 3.3.3 Proof 2 (Schur complement approach) * 3.3.4 Examples * 3.3.5 General cutting/gluing of cobordisms * 3.3.6 Self-gluing and trace formula * 3.4 Comparison to continuum formulation * 3.4.1 Gluing formulae for Green's functions and determinants * 3.4.2 Example: continuum limit of line and circle graphs * 4 Interacting theory via Feynman diagrams * 4.1 Version relative to a boundary subgraph * 4.2 Cutting/gluing of perturbative partition functions via cutting/gluing of Feynman diagrams * 5 Path sum formulae for the propagator and determinant (Gaussian theory in the first quantization formalism) * 5.1 Quantum mechanics on a graph * 5.2 Path sum formulae on closed graphs * 5.2.1 Paths and h-paths in graphs * 5.2.2 h-path formulae for heat kernel, propagator and determinant * 5.2.3 Resumming h-paths. Path sum formulae for propagator and determinant * 5.2.4 Aside: path sum formulae for the heat kernel and the propagator - "1d gravity" version * 5.3 Examples * 5.3.1 Circle graph, \(N=3\) * 5.3.2 Line graph, \(N=3\) * 5.4 Relative versions * 5.4.1 h-path formulae for Dirichlet propagator, extension operator, Dirichlet-to-Neumann operator * 5.4.2 Resumming h-paths * 5.4.3 Examples * 5.5 Gluing formulae from path sums * 6 Interacting theory: first quantization formalism * 6.1 Closed graphs * 6.1.1 Edge-to-path maps * 6.1.2 Feynman weights * 6.2 Relative version * 6.3 Cutting and gluing ## 1. Introduction In this paper we study a combinatorial model of the quantum massive scalar field with polynomial potential on a spacetime given by a graph \(X\). Our motivation to do so was the study of the first quantization formalism, which we recall in Section 1.1 below, and in particular its interplay with locality, i.e. cutting and gluing of the spacetime manifold. At the origin is the Feynman-Kac formula (4) for the Green's function of the kinetic operator. In case the spacetime is a graph, this formula has a combinatorial analog given by summing over paths with certain weights (see Section 5). These path sums interact very naturally with cutting and gluing, in a mathematically rigorous way; see Theorem 3.8 and its proof via the path sum formulae of Section 5.5. 
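As a quick orientation to the kind of gluing statement referred to above, the following small numerical experiment is a sketch included for illustration only: the function names, the example graph and the parameter values are ours, and the precise normalizations of Theorem 3.8 below may be organized differently. It checks the linear-algebra mechanism behind determinant gluing formulae on graphs: eliminating the interior vertices of the two halves of a cut graph by a Schur complement onto the cut.

```python
import numpy as np

def graph_laplacian(n, edges):
    """Graph Laplacian Delta_X of a simple graph on n vertices, from an edge list."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

m2 = 0.7                                     # mass squared (illustrative value)

# Spacetime graph X: the path 0-1-2-3-4-5-6, cut along the one-vertex subgraph Y = {3}.
K = graph_laplacian(7, [(i, i + 1) for i in range(6)]) + m2 * np.eye(7)

int1, cut, int2 = [0, 1, 2], [3], [4, 5, 6]  # interiors of the two halves, and the cut
interior = int1 + int2

A = K[np.ix_(interior, interior)]            # block-diagonal: no edges join int1 to int2
C = K[np.ix_(interior, cut)]
D = K[np.ix_(cut, cut)]

# Schur complement onto the cut: the graph analogue of the "total
# Dirichlet-to-Neumann operator" appearing in BFK-type gluing formulae.
schur = D - C.T @ np.linalg.inv(A) @ C

lhs = np.linalg.det(K)
rhs = (np.linalg.det(K[np.ix_(int1, int1)])    # Dirichlet determinant of the first half
       * np.linalg.det(K[np.ix_(int2, int2)])  # ... and of the second half
       * np.linalg.det(schur))
print(lhs, rhs)                              # the two numbers agree
assert np.isclose(lhs, rhs)
```

The identity used here is just the block determinant formula for the Schur complement; the content of the gluing theorems below is that the Schur complement onto the cut is the combinatorial Dirichlet-to-Neumann operator, so that the factorization has an Atiyah-Segal-type interpretation.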
A second motivation to study this model was the notion of (extended) functorial QFTs with source a Riemannian cobordism. Few examples of functorial QFTs out of Riemannian cobordism categories exist, for instance [15], [16], [21]. In this paper, we define a graph cobordism category and show that the combinatorial model defines a functor to the category of Hilbert spaces (Section 2.1). We also propose an extended cobordism (partial) \(n\)-category of graphs and a functor to a target \(n\)-category of commutative algebras defined by the combinatorial QFT we are studying (Section 2.2). Finally, one can use this discrete toy model to approximate the continuum theory, which in this paper we do only in simple one-dimensional examples (see Section 3.4). We think that the results derived in this paper will be helpful to study the interplay between renormalization and locality in higher dimensions (the two-dimensional case was discussed in detail in [16]). ### Motivation: first quantization formalism We outline the idea of the first quantization picture in QFT in the example of the interacting scalar field.1 Footnote 1: We refer the reader to the inspiring exposition of this idea in [10, Section 3.2]. Consider the scalar field theory on a Riemannian \(n\)-manifold \(M\) perturbed by a polynomial potential \(p(\phi)=\sum_{k\geq 3}\frac{p_{k}}{k!}\phi^{k}\), defined by the action functional \[S(\phi)=\int_{M}\left(\frac{1}{2}\phi(\Delta+m^{2})\phi+p(\phi)\right)d^{n}x. \tag{1}\] Here \(\phi\in C^{\infty}(M)\) is the field, \(\Delta\) is the Laplacian determined by the metric, \(m>0\) is the mass parameter and \(d^{n}x\) denotes the metric volume element. The partition function is formally given by a (mathematically ill-defined) functional integral understood perturbatively as a sum over Feynman graphs \(\Gamma\),2 Footnote 2: In this discussion we will ignore the issue of divergences and renormalization. \[Z_{M}=\text{``}\int_{C^{\infty}(M)}\mathcal{D}\phi\;e^{-\frac{1}{\hbar}S(\phi)}\text{''}=\Big{(}\mathrm{det}^{\zeta}(\Delta+m^{2})\Big{)}^{-\frac{1}{2}} \cdot\sum_{\Gamma}\Phi_{\Gamma}. \tag{2}\] Here \(\mathrm{det}^{\zeta}\) is the functional determinant in zeta function regularization. The weight \(\Phi_{\Gamma}\) of a Feynman graph is the product of Green's functions \(G(x,y)\) of the kinetic operator \(\Delta+m^{2}\) associated with the edges of \(\Gamma\), integrated over ways to position vertices of \(\Gamma\) at points of \(M\) (times the vertex factors, a symmetry factor and a loop-counting factor): \[\Phi_{\Gamma}=\frac{\hbar^{|E|-|V|}}{|\mathrm{Aut}(\Gamma)|}\int_{M^{\times V }}d^{n}x_{1}\cdots d^{n}x_{|V|}\prod_{(u,v)\in E}G(x_{u},x_{v})\cdot\prod_{v \in V}(-p_{\mathrm{val}(v)}). \tag{3}\] Here \(V,E\) are the set of vertices and the set of edges of \(\Gamma\), respectively. Next, one can understand the kinetic operator \(\Delta+m^{2}=:\widehat{H}\) as a quantum Hamiltonian of an auxiliary quantum mechanical system with Hilbert space \(L^{2}(M)\). Then, one can write the Green's function \(G(x,y)\) as the kernel of the evolution operator of this auxiliary system, integrated over the time of evolution: \[G(x,y)=\int_{0}^{\infty}dt\,\langle x|e^{-t\widehat{H}}|y\rangle.\] Replacing the evolution operator (a.k.a. heat kernel) with its Feynman-Kac path integral representation, one has \[G(x,y)=\int_{0}^{\infty}dt\int_{\gamma:\;[0,t]\to M,\;\gamma(0)=x,\,\gamma(t)=y }\mathcal{D}\gamma\;e^{-S^{1q}(\gamma)}. 
\tag{4}\] Here the inner integral is over paths \(\gamma\) on \(M\) parameterized by the interval \([0,t]\), starting at \(y\) and ending at \(x\); the auxiliary ("first quantization") action in the exponent is \[S^{1q}(\gamma)=\int_{0}^{t}d\tau\left(\frac{\dot{\gamma}^{2}}{4}+m^{2}-\frac{ 1}{6}R(\gamma)\right), \tag{5}\] where \(R\) is the scalar curvature of the metric on \(M\); \(\dot{\gamma}^{2}\colon\;=g_{\gamma(\tau)}(\dot{\gamma},\dot{\gamma})\) is the square norm of the velocity vector \(\dot{\gamma}\in T_{\gamma(\tau)}M\) of the path w.r.t. the metric \(g\) on \(M\).3 Footnote 3: The action (5) can be obtained from the short-time asymptotics (Seeley-DeWitt expansion) of the heat kernel \(\kappa(x,y;t)=\langle x|e^{-t\widehat{H}}|y\rangle\underset{t\to 0}{\sim}(4\pi t)^{- \frac{n}{2}}e^{-\frac{d(x,y)^{2}}{4t}}(1+b_{2}(x,y)t+b_{4}(x,y)t^{2}+\cdots)\), with \(b_{2k}\) smooth functions on \(M\times M\) (in particular, on the diagonal), with \(b_{2}(x,x)=-m^{2}+\frac{1}{6}R(x)\); \(d(x,y)\) is the geodesic distance on \(M\), see e.g. [26]. One then has \(\kappa(x,y;t)=\lim_{N\to\infty}\int_{M^{\times(N-1)}}d^{n}x_{N-1}\cdots d^{ n}x_{1}\kappa(x_{N}=x,x_{N-1};\delta t)\cdots\kappa(x_{2},x_{1};\delta t) \kappa(x_{1},x_{0}=y;\delta t)=\lim_{N\to\infty}\int\prod_{j=1}^{N-1}\left((4 \pi\delta t)^{-\frac{n}{2}}dx_{j}\right)\,e^{-\sum_{j=1}^{N}\left(\frac{d(x_{ j},x_{j-1})^{2}}{4(\delta t)^{2}}+(m^{2}-\frac{1}{6}R(x_{j}))\right)\delta t}\). In the r.h.s. one recognizes the path integral of (4) written as a limit of finite-dimensional integrals (cf. [12]). We denoted \(\delta t=t/N\). Plugging the integral representation (4) of the Green's function into (3), one obtains the following integral formula for the weight of a Feynman graph: \[\Phi_{\Gamma}=\frac{\hbar^{|E|-|V|}}{|\text{Aut}(\Gamma)|}\int_{0 <t_{1},\ldots,t_{|E|}<\infty}dt_{1}\cdots dt_{|E|}\int_{\boldsymbol{\gamma} \colon\;\Gamma_{t_{1},\ldots,t_{|E|}}\to M}\mathcal{D}\boldsymbol{\gamma}\,e^ {-S^{1q}(\boldsymbol{\gamma})}.\\ \cdot\prod_{v\in V}\left(-p_{\text{val}(v)}\right). \tag{6}\] Here \(\Gamma_{t_{1},\ldots,t_{|E|}}\) is the graph \(\Gamma\) seen as a metric graph with \(t_{e}\) the length of edge \(e\). The outer integral is over metrics on \(\Gamma\), the inner (path) integral is over maps \(\boldsymbol{\gamma}\) of \(\Gamma\) to \(M\), sending vertices to points of \(M\) and edges to paths connecting those points; \(S^{1q}(\boldsymbol{\gamma})\) is understood as a sum of expressions in the r.h.s. of (5) over edges of \(\Gamma\). We refer to the formula (6), representing the weight of a Feynman graph via an integral over maps \(\Gamma\to M\) (or, equivalently, as a partition function of an auxiliary 1d sigma model on the graph \(\Gamma\) with target \(M\)), as the "first quantization formula."4 Footnote 4: As opposed to the functional integral (2) – the “second quantization formula.” _Remark 1.1_.: It is known that \(\frac{1}{6}R\) appears in the quantum Hamiltonian of the quantum mechanical system of a free particle on a closed Riemannian manifold \(M\), see for example [1, 27]. Here, the difference is that \(\frac{1}{6}R\) is introduced in the classical action (5) so that \(\Delta+m^{2}\) is the quantum Hamiltonian. _Remark 1.2_.: One can absorb the determinant factor in the r.h.s. 
of (2) into the sum over graphs, if we extend the set of graphs \(\Gamma\) to allow them to have circle connected components (with no vertices), with the rule \[\Phi_{S^{1}}=-\frac{1}{2}\log\det^{\zeta}(\Delta+m^{2})=\frac{1}{ 2}\int_{0}^{\infty}\frac{dt}{t}\,\operatorname{tr}\left(e^{-t\widehat{H}}\right) \\ =\frac{1}{2}\int_{0}^{\infty}\frac{dt}{t}\int_{\gamma\colon\,S^{1} _{t}\to M}\mathcal{D}\gamma\,e^{-S^{1q}(\gamma)}, \tag{7}\] where the integral in \(t\) is understood in zeta-regularized sense; \(S^{1}_{t}=\mathbb{R}/t\mathbb{Z}\) is the circle of perimeter \(t\). #### 1.1.1. Version with 1d gravity Another way to write the formula (4) is to consider paths \(\gamma\) parameterized by the standard interval \(I=[0,1]\) (with coordinate \(\sigma\)) and introduce an extra field - the metric \(\xi=\underline{\xi}(\sigma)(d\sigma)^{2}\) on \(I\): \[G(x,y)=\int_{(\operatorname{Met}(I)\times\operatorname{Map}(I,M)_{x,y})/ \operatorname{Diff}(\operatorname{I})}\mathcal{D}\xi\,\mathcal{D}\gamma\,e^{- S^{1q}(\gamma,\xi)}. \tag{8}\] Here \(\operatorname{Map}(I,M)_{x,y}\) is the space of paths \(\gamma\colon I\to M\) from \(x\) to \(y\); the exponent in the integrand is \[\bar{S}^{1q}(\gamma,\xi)=\int_{I}\left(\frac{1}{4}(\xi^{-1} \otimes\gamma^{*}g)(d\gamma,d\gamma)+m^{2}-\frac{1}{6}R(\gamma)\right)d \mathrm{vol}_{\xi}\\ =\int_{0}^{1}\left(\frac{\dot{\gamma}^{2}}{4\underline{\xi}}+m^{ 2}-\frac{1}{6}R(\gamma)\right)\sqrt{\underline{\xi}}\,d\sigma, \tag{9}\] with \(d\mathrm{vol}_{\xi}\) the Riemannian volume form of \(I\) induced by \(\xi\). Note that the action (9) is invariant under diffeomorphisms of \(I\). One can gauge-fix this symmetry by requiring that the metric \(\xi\) is constant on \(I\), then one is left with integration over the length \(t\) of \(I\) w.r.t. the constant metric; this reduces the formula (8) back to (4). In (8), the Green's function of the original theory on \(M\) is understood in terms of a 1d sigma-model on \(I\) with target \(M\) coupled to 1d gravity. For a Feynman graph, similarly to (6), one has \[\Phi_{\Gamma}=\hbar^{|E|-|V|}\int_{(\operatorname{Met}(\Gamma)\times \operatorname{Map}(\Gamma,M))/\operatorname{Diff}(\Gamma)}\mathcal{D}\boldsymbol{ \xi}\,\mathcal{D}\boldsymbol{\gamma}\,e^{-\bar{S}^{1q}(\boldsymbol{\gamma} \boldsymbol{\xi})}\prod_{v\in V}\left(-p_{\operatorname{val}(v)}\right) \tag{10}\] - the partition function of 1d sigma model on the Feynman graph \(\Gamma\) coupled to 1d gravity on \(\Gamma\); \(\bar{S}^{1q}(\boldsymbol{\gamma},\boldsymbol{\xi})\) is understood as a sum of terms (9) over the edges of \(\Gamma\).5 #### 1.1.2. Heuristics on locality in the first quantization formalism Suppose that we have a decomposition \(M=M_{1}\cup_{Y}M_{2}\) of \(M\) into two Riemannian manifolds \(M_{i}\), with common boundary \(Y\). Then locality of quantum field theory - or, a fictional "Fubini theorem" for the (also fictional) functional integral - suggests a gluing formula \[Z_{M}=\text{``}\int_{C^{\infty}(Y)}\mathcal{D}\phi_{Y}\;Z_{M_{1}}(\phi_{Y})Z_{M_ {2}}(\phi_{Y})\text{,''} \tag{11}\] where \(Z_{M_{i}}\) is a functional of \(C^{\infty}(Y)\), again formally given by a functional integral understood as a sum over Feynman graphs,6 Footnote 6: Again, for the purpose of this motivational section we are not discussing the problem of divergencies and renormalization. For \(n=\dim M=2\), a precise definition of all involved objects and a proof of the gluing formula (11) can be found in [16]. 
\[Z_{M_{i}}(\phi_{Y})=\text{``}\int_{\begin{subarray}{c}\phi\in C ^{\infty}(M_{i})\\ \phi|_{Y}=\phi_{Y}\end{subarray}}\mathcal{D}\phi\;e^{-\frac{1}{\hbar}S(\phi)} \text{''}\\ =\left(\det^{\zeta}(\Delta_{M_{i},Y}+m^{2})\right)^{-\frac{1}{2}} \cdot\sum_{\Gamma}\Phi_{\Gamma}(\phi_{Y}), \tag{12}\] where we are putting Dirichlet boundary conditions on the kinetic operator. Feynman graphs now have bulk and boundary vertices, \(V=V^{\text{bulk}}\sqcup V^{\partial}\), where boundary vertices are required to be univalent. The set of edges then decomposes as \(E=\sqcup_{i=0}^{2}E_{i}\), where edges in \(E_{i}\) have \(i\) endpoints in \(V^{\partial}\). The weight of a Feynman graph then is \[\Phi_{\Gamma}(\phi_{Y})=\frac{\hbar^{|E|-|V|}}{|\text{Aut}(\Gamma)|}\int_{M_{i}^{\times V^{\text{bulk}}}}d^{n}x_{1}\cdots d^{n}x_{|V^{\text{bulk}}|}\int_{Y^{\times V^{\partial}}}d^ {n-1}y_{1}\cdots d^{n-1}y_{|V^{\partial}|}\\ \prod_{v\in V^{\text{bulk}}}(-p_{\text{val}(v)})\prod_{w\in V^{ \partial}}\phi_{Y}(y_{w})\\ \prod_{(u,v)\in E_{0}}G_{M_{i},Y}(x_{u},x_{v})\prod_{(u,v)\in E_{ 1}}E_{Y,M_{i}}(x_{u},y_{v})\prod_{(u,v)\in E_{2}}\left(-\text{DN}_{Y,M_{i}}(y_{u},y _{v})\right), \tag{13}\] where \(G_{M_{i},Y}\) denotes the Green's function of the operator with Dirichlet boundary conditions, \(E_{Y,M_{i}}(x,y)=\partial_{n_{y}}G(x,y)\) is the normal derivative of the Green's function at a boundary point \(y\in Y\), and \(\text{DN}_{Y,M_{i}}\) is the Dirichlet-to-Neumann operator associated to the kinetic operator (see Section 3.4 for details). Let us sketch an interpretation of the gluing formula for the Green's function from the standpoint of the first quantization formalism. Let \(x\in M_{1}\), \(y\in M_{2}\) and consider a path \(\gamma\colon[0,t]\to M\) with \(\gamma(0)=x\) and \(\gamma(t)=y\). Then the decomposition \(M=M_{1}\cup_{Y}M_{2}\) induces a decomposition \(\gamma=\gamma_{1}*\gamma_{2}*\gamma_{3}\) as follows ("\(*\)" means concatenation of paths). Let \(t_{0}=0\), \(t_{1}=\min\{\tau:\gamma(\tau)\in Y\}\), \(t_{2}=\max\{\tau:\gamma(\tau)\in Y\}\) and \(t_{3}=t\); then \(\gamma_{i}=\gamma|_{[t_{i-1},t_{i}]}\). This gives a decomposition \[P_{M}(x,y)=\bigsqcup_{u,v\in Y}P^{\prime}_{M_{1}}(x,u)\times P_{M}(u,v)\times P ^{\prime}_{M_{2}}(v,y),\] where we have introduced the notation \(P_{M}(x,y)\) for the set of all paths from \(x\) to \(y\) (of arbitrary length) and \(P^{\prime}_{M_{i}}(x,u)\) for the set of all paths starting at \(x\in M_{i}\) and ending at \(u\in Y\) and not intersecting \(Y\) in between. See Figure 1. Paths of a specific length \(t\) will be denoted \(P^{t}_{M}(x,y)\), or \((P^{\prime}_{M})^{t}(x,u)\). Assuming a Fubini theorem for the path measure \(\mathcal{D}\gamma\), additivity of the action suggests that we could rewrite (4) as \[G(x,y)\text{``}=\text{''}\int_{Y\times Y}d^{n-1}u\,d^{n-1}v\int_ {0}^{\infty}dt_{1}\int_{\gamma_{1}\in(P^{\prime}_{M_{1}})^{t_{1}}(x,u)} \mathcal{D}\gamma_{1}\,e^{-S^{1q}(\gamma_{1})}\\ \int_{0}^{\infty}dt_{2}\int_{\gamma_{2}\in P^{t_{2}}_{M}(u,v)} \mathcal{D}\gamma_{2}\,e^{-S^{1q}(\gamma_{2})}\int_{0}^{\infty}dt_{3}\int_{ \gamma_{3}\in(P^{\prime}_{M_{2}})^{t_{3}}(v,y)}\mathcal{D}\gamma_{3}\,e^{-S^{1q}(\gamma _{3})}. \tag{14}\] Comparing with the gluing formula for the Green's function7 Footnote 7: See [16, Proposition 4.2] and Section 3.4.1 of the present paper for details. 
\[G(x,y)=\int_{Y\times Y}d^{n-1}u\,d^{n-1}v\ E_{Y,M_{1}}(x,u)\,\varkappa_{Y,M_{ }}(u,v)\,E_{Y,M_{2}}(v,y), \tag{15}\] with \(\varkappa_{Y,M}=(\text{DN}_{Y,M_{1}}+\text{DN}_{Y,M_{2}})^{-1}\) the inverse of the "total" Dirichlet-to-Neumann operator, suggests the following path integral formulae for the extension operator and \(\varkappa\): \[E_{Y,M_{i}}(x,u)=\int_{0}^{\infty}dt\int_{\gamma\in(P^{\prime}_{M_{i}})^{t}(x,u)}\mathcal{D}\gamma\,e^{-S^{1q}(\gamma)}, \tag{16}\] \[\varkappa_{Y,M}(u,v)=\int_{0}^{\infty}dt\int_{\gamma\in P^{\prime}_{M}(u,v)} \mathcal{D}\gamma\,e^{-S^{1\mathrm{q}}(\gamma)}. \tag{17}\] The results of our paper8 actually suggest also the following path integral formula for the Dirichlet-to-Neumann operator: Footnote 8: Another reason to guess that formula is the fact that the integral kernel of the Dirichlet-to-Neumann operator is given by a symmetric normal derivative of the Green’s function \(\mathrm{DN}_{Y,M_{i}}(u,v)=-\partial_{n_{u}}\partial_{n_{v}}G_{M_{i},Y}\) (in a regularized sense - see [16, Remark 3.4]), and formula (16) for the first normal derivative of the Green’s function. \[\mathrm{DN}_{Y,M_{i}}(u,v)=\int_{0}^{\infty}dt\int_{\gamma\in(P^{\prime\prime }_{M_{i}})^{t}(u,v)}\mathcal{D}\gamma\,e^{-S^{1\mathrm{q}}(\gamma)}. \tag{18}\] Here \((P^{\prime\prime}_{M_{i}})^{t}(u,v)\) is the set of all paths from \(\gamma\colon[0,t]\to M_{i}\) from \(u\in Y\) to \(v\in Y\) such that \(\gamma(\tau)\notin Y\) for all \(0<\tau<t\). Assuming these formulae, we have again a "first quantization formula" for weights of Feynman graphs with boundary vertices \[\Phi_{\Gamma}(\phi_{Y})=\frac{\hbar^{|E|-|V|}}{|\mathrm{Aut}( \Gamma)|}\int_{0<t_{1},\dots,t_{|E|}<\infty}dt_{1}\cdots dt_{|E|}\\ \int_{\boldsymbol{\gamma}\colon\,\Gamma_{t_{1},\dots,t_{|E|}} \to M_{i}}\mathcal{D}\boldsymbol{\gamma}\,e^{-S^{1\mathrm{q}}(\boldsymbol{ \gamma})}\prod_{v\in V^{\mathrm{bulk}}}\big{(}-p_{\mathrm{val}(v)}\big{)}\prod _{v^{\partial}\in V^{\partial}}\phi_{Y}(\boldsymbol{\gamma}(v^{\partial})). \tag{19}\] Here notation is as in (6), the only additional condition is that \(\boldsymbol{\gamma}\) respects the type of edges in \(\Gamma\), that is, for all \(x\in\Gamma_{t_{1},\dots,t_{|E|}}\) we have \(\boldsymbol{\gamma}(x)\in Y\) if and only if \(x\in V^{\partial}\). ### QFT on a graph. A guide to the paper In this paper we study a toy ("combinatorial" or "lattice") version of the scalar field theory (1), where the spacetime manifold \(M\) is replaced by a graph \(X\), the scalar field \(\phi\) is a function on the vertices of \(X\) and the Laplacian in the kinetic operator is replaced by the graph Laplacian \(\Delta_{X}\). I.e., the model is defined by the action \[S_{X}(\phi)=\sum_{v\in V_{X}}\frac{1}{2}\phi(v)\,\big{(}(\Delta_{X}+m^{2} \mathrm{Id})\phi\big{)}\,(v)+p(\phi(v)), \tag{20}\] where \(V_{X}\) is the set of vertices of \(X\) and \(p\) is the interaction potential (a polynomial of \(\phi\)), as before. This model has the following properties. 1. The "functional integral" over the space of fields is a finite-dimensional convergent integral (Section 2). 2. The functional integral can be expanded in Feynman graphs, giving an asymptotic expansion of the nonperturbative partition function in powers of \(\hbar\) (Section 4). 3. Partition functions are compatible with unions of graphs over a subgraph ("gluing") - we see this as a graph counterpart of Atiyah-Segal functorial picture of QFT, with compatibility w.r.t. cutting-gluing \(n\)-manifolds along closed \((n-1)\)-submanifolds. 
This functorial property of the graph QFT can be proven 1. directly from the functional integral perspective (by a Fubini theorem argument) - Section 2.1, or 2. at the level of Feynman graphs (Section 4.2). The proof of functoriality at the level of Feynman graphs relies on the "gluing formulae" describing the behavior of Green's functions and determinants w.r.t. gluing of spacetime graphs (Section 3.3). These formulae are a combinatorial analog of known gluing formulae for Green's functions and zeta-regularized functional determinants on manifolds (Section 3.4). 3. The graph QFT admits a higher-categorical extension which allows cutting-gluing along higher-codimension "corners"9 (Section 2.2), in the spirit of Baez-Dolan-Lurie extended TQFTs. Footnote 9: “Corners” in the graph setting are understood à la Čech complex, as multiple overlaps of “bulk” graphs. 5. The Green's function on a graph \(X\) can be written as a sum over paths (Section 5, in particular Table 4), giving an analog of the formula (4); similarly, the determinant can be written as a sum over closed paths, giving an analog of (7). This leads to a "first-quantization" representation of Feynman graphs, as a sum over maps \(\Gamma\to X\), sending vertices of \(\Gamma\) to vertices of \(X\) and sending edges of \(\Gamma\) to paths on \(X\) (connecting the images of the incident vertices) - Section 6. This yields a graph counterpart of the continuum first quantization formula (6). 6. There are path sum formulae for the combinatorial extension (or "Poisson") operators and Dirichlet-to-Neumann operators (Section 5.4, see in particular Table 5), analogous to the path integral formulae (16) and (18). 7. First quantization perspective gives a visual interpretation of the gluing formula for Green's functions and determinants on a graph \(X=X^{\prime}\cup_{Y}X^{\prime\prime}\) in terms of cutting the path into portions spent in \(X^{\prime}\) or in \(X^{\prime\prime}\) (Section 5.5), and likewise an interpretation of the cutting-gluing of Feynman graphs (Section 6.3). _Remark 1.3_.: A free (Gaussian) version of the combinatorial model we are studying in this paper was studied in [23]. Our twist on it is the deformation by a polynomial potential, the path-sum (first quantization) formalism, and the gluing formula for propagators (the BFK-like gluing formula for determinants was studied in [23]). ### Acknowledgements We thank Olga Chekeres, Andrey Losev and Donald R. Youmans for inspiring discussions on the first quantization approach in QFT. I.C., P.M. and K.W. would like to thank the Galileo Galilei Institute, where part of the work was completed, for hospitality. 
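Before setting up notation, here is a minimal numerical sketch of how the deformation by the polynomial potential mentioned in Remark 1.3 enters the \(\hbar\)-expansion, on the smallest possible spacetime graph: a single vertex with no edges. The parameter values and variable names below are illustrative choices made for this sketch, not data from the paper.

```python
import numpy as np

# Smallest spacetime graph: one vertex, no edges, so Delta_X = 0 and K = m^2.
# The "functional integral" is then a single ordinary integral; p(phi) = lam/4! phi^4.
m2, lam, hbar = 1.0, 0.5, 0.05               # illustrative parameter values

phi = np.linspace(-8.0, 8.0, 20001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam / 24.0 * phi**4
Z_exact = np.exp(-S / hbar).sum() * dphi / np.sqrt(2 * np.pi * hbar)

# First two orders of the hbar-expansion: the free answer det(K)^(-1/2) = (m^2)^(-1/2),
# corrected at order hbar by the single quartic-vertex vacuum diagram
# (the "figure eight", with symmetry factor 8).
Z_pert = m2 ** -0.5 * (1.0 - hbar * lam / (8.0 * m2 ** 2))

print(Z_exact, Z_pert)                       # agree up to O(hbar^2) corrections
```

For small \(\hbar\) the two printed numbers differ only at order \(\hbar^{2}\), which is exactly the behaviour of the asymptotic expansion in Feynman diagrams discussed in Section 4.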
## Notations \begin{tabular}{l l} **Notation** & **Description** \\ \(X\) & Spacetime graph \\ \(Y\), or \(Y_{i}\) & A subgraph \(Y\subset X\) (or several subgraphs \(Y_{i}\subset X\)) of the spacetime graph \\ \(\Delta_{X}\) & Laplacian on \(X\) \\ \(K_{X}\) & Kinetic operator on \(X\), \(K_{X}=\Delta_{X}+m^{2}\) \\ \(\Gamma\) & Feynman graph \\ \(G(x,y)\) & The propagator or Green's function of the kinetic operator \(K_{X}\), integral kernel (matrix) of \\ & \(G=K_{X}^{-1}\) \\ \(Z_{X}\) & The partition function on \(X\) \\ \(Z_{X}^{\rm pert}\) & The perturbative partition function on \(X\) \\ \(E_{Y,X}\) & The extension operator (also known as Poisson operator): extends a field \(\phi_{Y}\) into bulk \(X\) \\ & as a solution of Dirichlet problem \\ DN\({}_{Y,X}\) & Dirichlet-to-Neumann operator \\ \(S_{X}\) & Action functional on the space of fields on \(X\) \\ \(S_{X}^{1q}\) & First quantized action functional \\ \(\det^{\zeta}A\) & Zeta-regularized determinant of operator \(A\) \\ \(P_{X}(u,v)\) & The set of paths in \(X\) joining \(u\) to \(v\) \\ \(\Pi_{X}(u,v)\) & The set of h-paths in \(X\) joining \(u\) to \(v\) \\ \(P_{X}^{\Gamma}\) & The set of edge-to-path maps from \(\Gamma\) to \(X\) \\ \(\Pi_{X}^{\Gamma}\) & The set of edge-to-h-path maps from \(\Gamma\) to \(X\) \\ \(\deg(\tilde{\gamma})\) & The number of jumps in an h-path \(\tilde{\gamma}\) \\ \(l(\gamma)\) & The length of a path (or h-path) \(\gamma\) \\ \(h(\tilde{\gamma})\) & The number of hesitations of an h-path \(\tilde{\gamma}\) \\ \(s(\tilde{\gamma})\) & Weight of an h-path, \(s(\tilde{\gamma})=m^{-2l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}\) \\ \(w(\gamma)\) & Weight of a path, \(w(\gamma)=\prod_{v\in V(\gamma)}\frac{1}{m^{2}+\text{ val}(v)}\) \\ \end{tabular} ## 2. Scalar field theory on a graph Let \(X\) be a finite graph. Consider the toy field theory on \(X\) where fields are real-valued functions \(\phi(v)\) on the set of vertices \(V_{X}\), i.e., the space of fields is the space of \(0\)-cochains on \(X\) seen as a \(1\)-dimensional CW complex, \[F_{X}=C^{0}(X).\] We define the action functional as \[S_{X}(\phi)= \frac{1}{2}(d\phi,d\phi)+\langle\mu,\frac{m^{2}}{2}\phi^{2}+p( \phi)\rangle\] \[= \frac{1}{2}(\phi,(\Delta_{X}+m^{2})\phi)+\langle\mu,p(\phi)\rangle\] \[= \sum_{e\in E_{X}}\frac{1}{2}(\phi(v_{e}^{1})-\phi(v_{e}^{0}))^{2} +\sum_{v\in V_{X}}\left(\frac{m^{2}}{2}\phi(v)^{2}+p(\phi(v))\right). \tag{21}\] Here: * \(V_{X}\) and \(E_{X}\) are the sets of vertices and edges of \(X\), and \(v_{e}^{0},v_{e}^{1}\in V_{X}\) are the endpoints of the edge \(e\in E_{X}\), written in an arbitrarily chosen order (the model does not depend on this choice). * \((,)\colon\operatorname{Sym}^{2}\!C^{k}(X)\to\mathbb{R}\) for \(k=0,1\) is the standard metric, in which the cell basis is orthonormal. * \(\langle,\rangle\) is the canonical pairing of chains and cochains; \(\mu\) is the \(0\)-chain given by the sum of all vertices with coefficient \(1\).10 Footnote 10: The \(0\)-chain \(\mu\) is an analog of the volume form on the spacetime manifold in our model. If we want to consider the field theory on \(X\) as a lattice approximation of a continuum field theory, we would need to scale the metric \((,)\) and the \(0\)-chain \(\mu\) appropriately with the mesh size. Additionally, one would need to add mesh-dependent counterterms to the action in order to have finite limits for the partition function and correlators. * \(m>0\) is the fixed "mass" parameter. * \(p(\phi)\) is a fixed polynomial ("potential"), \[p(\phi)=\sum_{k\geq 3}\frac{p_{k}}{k!}\phi^{k}. \tag{22}\] More generally, \(p(\phi)\) can be a real analytic function. 
We will assume that \(\frac{m^{2}}{2}\phi^{2}+p(\phi)\) has a unique absolute minimum at \(\phi=0\) and that it grows sufficiently fast11 at \(\phi\to\pm\infty\), so that the integral (23) converges measure-theoretically. Footnote 11: Namely, we want the integral \(\int_{\mathbb{R}}\Phi\,e^{-\frac{1}{k}(\frac{m^{2}}{2}\phi^{2}+p(\phi))}\) to converge for any \(\hbar>0\). * \(\Delta_{X}=d^{T}d\colon C^{0}(X)\to C^{0}(X)\) is the graph Laplace operator on \(0\)-cochains; \(d^{T}\colon C^{1}(X)\to C^{0}(X)\) is the dual (transpose) map to the coboundary operator (in the construction of the dual, one identifies chains and cochains using the standard metric). The matrix elements of \(\Delta_{X}\) in the cell basis, for \(X\) a simple graph (i.e. without double edges and short loops), are \[(\Delta_{X})_{uv}=\left\{\begin{array}{ll}\operatorname{val}(v)&\text{if $u=v$,} \\ -1&\text{if $u\neq v$ and $u$ is connected to $v$ by an edge,}\\ 0&\text{otherwise,}\end{array}\right.\] where \(\operatorname{val}(v)\) is the valence of the vertex \(v\). More generally, for \(X\) not necessarily simple, one has \[(\Delta_{X})_{uv}=\left\{\begin{array}{ll}\operatorname{val}(v)-2\cdot \#\{\text{short loops $v\to v$}\}&\text{if $u=v$,}\\ -\#\{\text{edges $u\to v$}\}&\text{if $u\neq v$.}\end{array}\right.\] We will be interested in the partition function \[Z_{X}=\int_{F_{X}}D\phi\;e^{-\frac{1}{\hbar}S_{X}}, \tag{23}\] where \[D\phi=\prod_{v\in V_{X}}\frac{d\phi(v)}{\sqrt{2\pi\hbar}}\] is the "functional integral measure" on the space of fields \(F_{X}\) (in this case, just the Lebesgue measure on a finite-dimensional space); \(\hbar>0\) is the parameter of quantization - the "Planck constant."12 The integral in the r.h.s. of (23) is absolutely convergent. One can also consider correlation functions \[\langle\phi(v_{1})\cdots\phi(v_{n})\rangle=\frac{1}{Z_{X}}\int_{F_{X}}D\phi\;e^{- \frac{1}{\hbar}S_{X}}\phi(v_{1})\cdots\phi(v_{n}). \tag{24}\] _Remark 2.1_.: We stress that in this section we consider the _nonperturbative_ partition functions/correlators and \(\hbar\) is to be understood as an actual positive number, unlike in the setting of perturbation theory (Section 4) where \(\hbar\) becomes a formal parameter. _Remark 2.2_.: In this paper we use the Euclidean QFT convention for our (toy) functional integrals, with the integrand \(e^{-\frac{1}{\hbar}S}\) instead of \(e^{\frac{1}{\hbar}S}\), in order to have a better measure-theoretic convergence situation. The first convention leads to absolutely convergent integrals whereas the second leads to conditionally convergent oscillatory integrals. ### Functorial picture One can interpret our model in the spirit of Atiyah-Segal functorial picture of QFT, as a (symmetric monoidal) functor \[\operatorname{GraphCob}\xrightarrow{(\mathcal{H},Z)}\operatorname{Hilb} \tag{25}\] from the spacetime category13 of graph cobordisms to the category of Hilbert spaces and Hilbert-Schmidt operators. Footnote 13: This terminology is taken from [22]. Here in the source category \(\operatorname{GraphCob}\) is as follows: * The objects are graphs \(Y\). * A morphism from \(Y_{\operatorname{in}}\) to \(Y_{\operatorname{out}}\) is a graph \(X\) which contains \(Y_{\operatorname{in}}\) and \(Y_{\operatorname{out}}\) as disjoint subgraphs. 
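As a concrete sanity check of the definitions (23)-(24) — a sketch with an example graph and parameter values of our own choosing, not taken from the text — one can evaluate the integrals directly for the path graph on three vertices in the free case \(p=0\), where the standard Gaussian-integral answers \(\det(K_{X})^{-\frac{1}{2}}\) and \(\hbar\,(K_{X}^{-1})_{uv}\) (derived in Section 3 below) are available for comparison.

```python
import numpy as np

# Spacetime graph X: the path graph 0 - 1 - 2; free theory (p = 0), hbar = m^2 = 1.
Delta = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])
m2, hbar = 1.0, 1.0
K = Delta + m2 * np.eye(3)                   # kinetic operator K_X

# Evaluate (23) and (24) directly as ordinary 3-dimensional integrals on a grid.
grid = np.linspace(-6.0, 6.0, 81)
d = grid[1] - grid[0]
p0, p1, p2 = np.meshgrid(grid, grid, grid, indexing="ij")
phi = np.stack([p0, p1, p2], axis=-1)        # all field configurations on the grid
S = 0.5 * np.einsum("...i,ij,...j->...", phi, K, phi)
weight = np.exp(-S / hbar)

norm = (2 * np.pi * hbar) ** -1.5 * d ** 3
Z_num = weight.sum() * norm
corr_num = (p0 * p2 * weight).sum() * norm / Z_num

print(Z_num, np.linalg.det(K) ** -0.5)            # partition function vs det(K)^(-1/2)
print(corr_num, hbar * np.linalg.inv(K)[0, 2])    # <phi(0)phi(2)> vs hbar*(K^{-1})_{02}
```

Turning on the interaction is a one-line change (add \(\sum_{v}p(\phi(v))\) to the array `S` before exponentiating); the two reference values printed above are then replaced by their perturbative corrections, cf. Section 4.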
We will write \(Y_{\operatorname{in}}\xrightarrow{X}Y_{\operatorname{out}}\) and refer to \(Y_{\operatorname{in}},Y_{\operatorname{out}}\) as "ends" (or "boundaries") of \(X\), or we will say that \(X\) is a "graph cobordism" between \(Y_{\operatorname{in}}\) and \(Y_{\operatorname{out}}\). * The composition is given by unions of graphs with out-end of one cobordism identified with the in-end of the subsequent one: (26) \[(Y_{3}\xleftarrow{X^{\prime\prime}}Y_{2})\circ(Y_{2}\xleftarrow{X^{\prime}}Y _{1})=Y_{3}\xleftarrow{X}Y_{1},\] where \[X=X^{\prime}\cup_{Y_{2}}X^{\prime\prime}.\] * The monoidal structure is given by disjoint unions of graphs. All graphs are assumed to be finite. As defined, \(\operatorname{GraphCob}\) does not have unit morphisms (as usual for spacetime categories in non-topological QFTs); by abuse of language, we still call it a category. The target category \(\operatorname{Hilb}\) has as its objects Hilbert spaces \(\mathcal{H}\) over \(\mathbb{C}\);14 the morphisms are Hilbert-Schmidt operators; the composition is composition of operators. The monoidal structure is given by tensor products (of Hilbert spaces and of operators). Footnote 14: Alternatively (since we do not put \(i\) in the exponent in the functional integral), one can consider Hilbert spaces over \(\mathbb{R}\). The functor (25) is constructed as follows. For an end-graph \(Y\in\operatorname{Ob}(\operatorname{GraphCob})\), the associated vector space is \[\mathcal{H}_{Y}=L^{2}(C^{0}(Y)) \tag{27}\] - the space of complex-valued square-integrable functions on the vector space \(C^{0}(Y)=\mathbb{R}^{V_{Y}}\). For a graph cobordism \(Y_{\rm in}\xrightarrow{X}Y_{\rm out}\), the associated operator \(Z_{X}\colon\mathcal{H}_{Y_{\rm in}}\to\mathcal{H}_{Y_{\rm out}}\) is \[Z_{X}\colon\Psi_{\rm in}\mapsto\Big{(}\Psi_{\rm out}\colon\phi_{\rm out}\mapsto \int_{F_{Y_{\rm in}}}D\phi_{\rm in}\ \langle\phi_{\rm out}|Z_{X}|\phi_{\rm in}\rangle\Psi_{\rm in}(\phi_{\rm in}) \Big{)} \tag{28}\] with the integral kernel \[\langle\phi_{\rm out}|Z_{X}|\phi_{\rm in}\rangle\colon\ =\int_{F_{X}^{\phi_{\rm in },\phi_{\rm out}}}[D\phi]^{\phi_{\rm in},\phi_{\rm out}}\ e^{-\frac{1}{\hbar}(S_{X}( \phi)-\frac{1}{2}S_{Y_{\rm in}}(\phi_{\rm in})-\frac{1}{2}S_{Y_{\rm out}}(\phi _{\rm out}))}. \tag{29}\] Here * \(F_{X}^{\phi_{\rm in},\phi_{\rm out}}\) is the space of fields on \(X\) subject to boundary conditions \(\phi_{\rm in},\phi_{\rm out}\) imposed on the ends, i.e., it is the fiber of the evaluation-at-the-ends map \[F_{X}\to F_{Y_{\rm in}}\times F_{Y_{\rm out}}\] over the pair \((\phi_{\rm in},\phi_{\rm out})\). * The measure \[[D\phi]^{\phi_{\rm in},\phi_{\rm out}}=\prod_{v\in V_{X}\setminus(V_{Y_{\rm in }}\sqcup V_{Y_{\rm out}})}\frac{d\phi(v)}{\sqrt{2\pi\hbar}}\] stands for the "conditional functional measure" on fields subject to boundary conditions. We will also call the expression (29) the partition function on the graph \(X\) "relative" to the ends \(Y_{\rm in}\), \(Y_{\rm out}\), or just the partition function relative to the "boundary" subgraph \(Y=Y_{\rm in}\sqcup Y_{\rm out}\), if the distinction between "in" and "out" is irrelevant. In the latter case we will use the notation \(Z_{X,Y}(\phi_{Y})\), with \(\phi_{Y}=(\phi_{\rm in},\phi_{\rm out})\). **Proposition 2.3**.: _The assignment (27), (28) is a functor of monoidal categories._ Proof.: The main point to check is that composition is mapped to composition. 
It follows from the Fubini theorem, locality of the integration measure (it is a product over vertices of local measures) and additivity of the action: \[S_{X}(\phi)=S_{X^{\prime}}(\phi|_{X^{\prime}})+S_{X^{\prime\prime}}(\phi|_{X^ {\prime\prime}})-S_{Y_{2}}(\phi|_{Y_{2}}) \tag{30}\] in the notations of (26). Indeed, it suffices to prove \[\int_{F_{Y_{2}}}D\phi_{2}\ \langle\phi_{3}|Z_{X^{\prime\prime}}|\phi_{2} \rangle\langle\phi_{2}|Z_{X^{\prime}}|\phi_{1}\rangle\stackrel{{! }}{{=}}\langle\phi_{3}|Z_{X}|\phi_{1}\rangle \tag{31}\] - again, we are considering the gluing of graph cobordisms as in (26). The l.h.s. is \[\int_{F_{Y_{2}}}D\phi_{2}\int_{F_{X^{\prime}}^{\phi_{1},\phi_{2}}}[D\phi^{ \prime}]^{\phi_{1},\phi_{2}}\int_{F_{X^{\prime\prime}}^{\phi_{2},\phi_{3}}}[D \phi^{\prime\prime}]^{\phi_{2},\phi_{3}}\exp\Big{(}-\frac{1}{\hbar}\Big{(}-\frac{1}{2}S_{Y_{1}}(\phi_{1})+\underbrace{S_{X^{\prime}}(\phi^{\prime})+S_{X^{\prime\prime}}(\phi^{\prime\prime})-S_{Y_{2}}(\phi_{2})}_{S_{X}(\phi)}-\frac{1}{2}S_{Y_{3}}(\phi_{3})\Big{)}\Big{)}\] \[=\int_{F_{X}^{\phi_{1},\phi_{3}}}[D\phi]^{\phi_{1},\phi_{3}}e^{- \frac{1}{\hbar}(S_{X}(\phi)-\frac{1}{2}S_{Y_{1}}(\phi_{1})-\frac{1}{2}S_{Y_{3}}( \phi_{3}))},\] where in the first line we used (30) and the fact that the integration measures combine into the conditional measure on \(F_{X}^{\phi_{1},\phi_{3}}\); this proves (31). Here we understood that \(\phi\) is a field on the glued cobordism \(X\) restricting to \(\phi^{\prime},\phi^{\prime\prime}\) on \(X^{\prime},X^{\prime\prime}\), respectively. Compatibility with disjoint unions is obvious by construction. _Remark 2.4_.: One can interpret the correlator (24) as the partition function of \(X\) seen as a cobordism \(\{v_{1},\dots,v_{n}\}\xrightarrow{X}\varnothing\) applied to the state \(\phi(v_{1})\otimes\dots\otimes\phi(v_{n})\in\mathcal{H}_{\{v_{1},\dots,v_{n}\}}\). _Remark 2.5_.: The combinatorial model we are presenting is intended to be an analog (toy model) of the continuum QFT, according to the dictionary of Table 2. When we want to emphasize that a graph \(X\) is not considered as a cobordism (or equivalently \(X\) is seen as a cobordism \(\varnothing\xrightarrow{X}\varnothing\)), we will call \(X\) a "closed" graph (by analogy with closed manifolds). \begin{table} \begin{tabular}{c|c} combinatorial QFT & continuum QFT \\ \hline graph \(X\) & closed spacetime \(n\)-manifold \(M\); \\ field \(\phi\colon V_{X}\to\mathbb{R}\) & scalar field \(\phi\in C^{\infty}(M)\); \\ action (21) & action \(S(\phi)=\int_{M}\frac{1}{2}d\phi\wedge*d\phi+(\frac{m^{2}}{2}\phi^{2}+p(\phi))d\)vol \\ & \(=\int_{M}(\frac{1}{2}\phi(\Delta+m^{2})\phi+p(\phi))d\)vol; \\ partition function (23) & functional integral on a closed manifold; \\ \hline graph cobordism \(Y_{\mathrm{in}}\xrightarrow{X}Y_{\mathrm{out}}\) & \(n\)-manifold \(M\) with in/out-boundaries \\ & being closed \((n-1)\)-manifolds \(\gamma_{\mathrm{in}},\gamma_{\mathrm{out}}\); \\ gluing/cutting of graph cobordisms & gluing/cutting of smooth \(n\)-cobordisms; \\ matrix element (29) & functional integral \\ & with boundary conditions \(\phi_{\mathrm{in}},\phi_{\mathrm{out}}\). \\ \end{tabular} \end{table} Table 2. Comparison between toy model and continuum QFT. ### Aside: "QFT with corners" (or "extended QFT") picture Fix any \(n\geq 1\). We will describe a (tautological) extension of the functorial picture above for our graph model as an \(n\)-extended QFT (with gluing/cutting along "corners" of codimension up to \(n\)), in the spirit of Baez-Dolan-Lurie program [2], [18] of extended topological quantum field theories.15 One has 
a functor of symmetric monoidal \(n\)-categories \[\mathrm{GraphCob}^{n}\xrightarrow{(\mathcal{H},Z)}\mathsf{T}^{n}. \tag{32}\] We proceed to describe its ingredients. #### 2.2.1. Source \(n\)-category The source \(n\)-category \(\mathrm{GraphCob}^{n}\) is as follows. * Objects (a.k.a. 0-morphisms) are graphs \(X^{[0]}\) (the index in brackets is to emphasize that this is a graph at categorical level 0). * A 1-morphism between objects (graphs) \(Y_{1}^{[0]},Y_{2}^{[0]}\) is a graph \(X^{[1]}\) together with graph embeddings of \(Y_{1}^{[0]},Y_{2}^{[0]}\) into \(X^{[1]}\) with disjoint images. * For \(2\leq k\leq n\), a \(k\)-morphism between two \((k-1)\)-morphisms \(Y_{1}^{[k-1]},Y_{2}^{[k-1]}\in\mathrm{Mor}_{k-1}(A_{1}^{[k-2]},A_{2}^{[k-2]})\) is a graph \(X^{[k]}\) equipped with embeddings of \(Y_{1,2}^{[k-1]}\) satisfying a "maximal disjointness" assumption, formulated in terms of the diagram of graph embeddings \(A_{1,2}^{[k-2]}\hookrightarrow Y_{1,2}^{[k-1]}\hookrightarrow X^{[k]}\). 
{X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)* {X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)*{X^{[k]}}="19";(0,0)* {X^{[k]}}="19";(0,0)*{X^{ _Remark 2.7_.: In GraphCob\({}^{n}\) we only consider "vertical" compositions of \(k\)-morphisms, i.e., one can only glue two level \(k\) graphs over a level \(k-1\) graph, not over any graph at level \(k^{\prime}<k\) (otherwise, the composition would fail the maximal disjointness assumption). One might call this structure a "partial" \(n\)-category16 (but by abuse of language we suppress "partial"). On a related point, as in Section 2.1, there are no unit \(k\)-morphisms. Footnote 16: Or a “Pickwickian \(n\)-category.” (Cf. “He had used the word in its Pickwickian sense...He had merely considered him a humbug in a Pickwickian point of view.” Ch. Dickens, Pickwick Papers.) #### 2.2.2. Target \(n\)-category The target \(n\)-category \(\mathsf{T}^{n}\) is as follows. * Objects are commutative unital algebras over \(\mathbb{C}\) ("CUAs") \(\mathcal{H}^{[0]}\). * For \(1\leq k\leq n-1\), a \(k\)-morphism between CUAs \(\mathcal{H}^{[k-1]}_{1},\mathcal{H}^{[k-1]}_{2}\) is a CUA \(\mathcal{H}^{[k]}\) equipped with injective morphisms of unital algebras of \(\mathcal{H}^{[k-1]}_{1},\mathcal{H}^{[k-1]}_{2}\) into \(\mathcal{H}^{[k]}\). * An \(n\)-morphism between CUAs \(\mathcal{H}^{[n-1]}_{1,2}\) is a linear map \(Z\colon\mathcal{H}^{[n-1]}_{1}\to\mathcal{H}^{[n-1]}_{2}\). This map is not required to be an algebra morphism. The monoidal structure is given by tensor product at all levels. The composition of \(n\)-morphisms is the composition of linear maps. The composition of \(k\)-morphisms for \(k<n\) is given by the balanced tensor product of algebras over a subalgebra, \[\mathcal{H}^{[k]}_{1}\otimes_{\mathcal{H}^{[k-1]}}\mathcal{H}^{[k]}_{2}.\] #### 2.2.3. The QFT functor The functor (32) is defined as follows. * For \(0\leq k\leq n-1\), a graph \(X^{[k]}\in\operatorname{Mor}_{k}(Y^{[k-1]}_{1},Y^{[k-1]}_{2})\) is mapped to the commutative unital algebra of functions on \(0\)-cochains on the graph \[\mathcal{H}^{[k]}(X^{[k]}):=C^{\infty}(C^{0}(X^{[k]})),\] with algebra maps from functions on \(0\)-cochains of \(Y^{[k-1]}_{1},Y^{[k-1]}_{2}\) induced by graph inclusions \(i_{1,2}\colon Y^{[k-1]}_{1,2}\to X^{[k]}\) : (33) \[(i^{*}_{1,2})^{*}\colon C^{\infty}(C^{0}(Y^{[k-1]}_{1,2}))\to C^{\infty}(C^{ 0}(X^{[k]})).\] Here \(i^{*}_{1,2}\) is the restriction of a \(0\)-cochain (field) from \(X^{[k]}\) to \(Y^{[k-1]}_{1,2}\), and \((i^{*}_{1,2})^{*}\) is the pullback by this restriction map. * A graph \(X^{[n]}\in\operatorname{Mor}_{n}(Y^{[n-1]}_{\operatorname{in}},Y^{[n-1]}_{ \operatorname{out}})\) is mapped to the linear map \(Z(X^{[n]})\colon\mathcal{H}^{[n-1]}(Y^{[n-1]}_{\operatorname{in}})\to \mathcal{H}^{[n-1]}(Y^{[n-1]}_{\operatorname{out}})\) defined by a variant of (28), (29) allowing the in- and out-boundaries to intersect: (34) \[Z(X)\colon\Psi_{\operatorname{in}}\mapsto\Big{(}\Psi_{ \operatorname{out}}\colon\phi_{\operatorname{out}}\mapsto\\ \mapsto\int_{F^{\phi_{\operatorname{out}}}_{X}}[D\phi]^{\phi_{ \operatorname{out}}}e^{-\frac{1}{h}(S_{X}(\phi)-\frac{1}{2}S_{Y_{\operatorname {out}}}(\phi_{\operatorname{out}})-\frac{1}{2}S_{Y_{\operatorname{in}}}(\phi_{ \operatorname{|Y_{\operatorname{in}}}}))}\Psi_{\operatorname{in}}(\phi_{ \operatorname{|Y_{\operatorname{in}}}})\Big{)}.\] Here we suppressed the superscripts \([\cdots]\) for \(X,Y\) to lighten the notation. 
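Before turning to the compatibility of (34) with compositions, it may be helpful to see the gluing property (31) "in action" numerically. The following is a minimal sketch, assuming the action (21) with unit edge weights, test values \(\hbar=1\), \(m^{2}=1\), an illustrative quartic potential \(p(\phi)=\lambda\phi^{4}/24\), and the normalization \(D\phi=\prod_{v}d\phi(v)/\sqrt{2\pi\hbar}\) (these are choices made only for the illustration); it checks (31) for two line-graph cobordisms glued over a single interface vertex, evaluating the matrix elements (29) by brute-force quadrature.

```python
import numpy as np

hbar, m2, lam = 1.0, 1.0, 0.2          # test values (illustrative choices)
x = np.linspace(-8.0, 8.0, 161)        # quadrature grid for one field variable
w = (x[1] - x[0]) / np.sqrt(2*np.pi*hbar)   # weight of d(phi)/sqrt(2*pi*hbar) per integrated vertex

def edge(a, b):                        # edge term of the action (21)
    return 0.5*(a - b)**2

def vert(a):                           # vertex term of (21): m^2/2 phi^2 + p(phi), with p = lam*phi^4/24
    return 0.5*m2*a**2 + lam/24*a**4

def Z1(p_in, p_out):
    """Matrix element (29) for the 3-vertex line graph as a cobordism {in}->{out} (one bulk vertex)."""
    b = x                              # bulk field value at the middle vertex
    S = edge(p_in, b) + edge(b, p_out) + vert(p_in) + vert(b) + vert(p_out)
    S -= 0.5*vert(p_in) + 0.5*vert(p_out)     # subtract half of the boundary actions
    return np.sum(np.exp(-S/hbar)) * w

def Z3(p_in, p_out):
    """Same matrix element for the 5-vertex line graph (three bulk vertices)."""
    b1, b2, b3 = np.meshgrid(x, x, x, indexing="ij", sparse=True)
    S = (edge(p_in, b1) + edge(b1, b2) + edge(b2, b3) + edge(b3, p_out)
         + vert(p_in) + vert(b1) + vert(b2) + vert(b3) + vert(p_out))
    S -= 0.5*vert(p_in) + 0.5*vert(p_out)
    return np.sum(np.exp(-S/hbar)) * w**3

phi1, phi3 = 0.7, -0.3
glued    = Z3(phi1, phi3)                                    # r.h.s. of (31): the glued cobordism
composed = sum(Z1(q, phi3) * Z1(phi1, q) for q in x) * w     # l.h.s. of (31): integrate over the interface
print(glued, composed)                                       # agree up to quadrature error
```

The two printed numbers agree to several digits; the same check goes through for any local potential \(p\), since only the additivity (30) and the factorization of the measure enter.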
It is a straightforward check (by repeating the argument of Proposition 2.3) that formula (34) is compatible with gluing (vertical composition) of \(n\)-morphisms in GraphCob\({}^{n}\), see Figure 3. _Remark 2.8_.: The category GraphCob\({}^{n}\) should be thought of as an (oversimplified) toy model of Baez-Dolan-Lurie fully extended smooth cobordism \((\infty,n)\)-category, where graphs \(X^{[k]}\) model \(k\)-dimensional smooth strata. In this language, if we relabel our graphs by categorical co-level, \(X^{\{k\}}\colon=X^{[n-k]}\), we should think of graphs \(X^{\{0\}}\) as "bulk," graphs \(X^{\{1\}}\) as "boundaries," graphs \(X^{\{2\}}\) as "codimension 2 corners," etc. _Remark 2.9_.: We also remark that one can consider a different (simpler) version of the source category - iterated cospans of graphs w.r.t. graph inclusions, without any disjointness conditions. While this \(n\)-category is simpler to define and admits non-vertical compositions, it has less resemblance to the extended cobordism category, as here the intersection of strata of codimensions \(k\) and \(l\) can have codimension less than \(k+l\). _Remark 2.10_.: Note that the most interesting part of the theory is concentrated in the top component of the functor (partition functions \(Z\)) - e.g., the interaction potential \(p(\phi)\) only affects it, not the spaces of states \(\mathcal{H}^{[k]}\). This is why we emphasize (cf. footnote 15) that there is no analog of the cobordism hypothesis in our model: one cannot recover the entire functor (in particular, the top component) from its bottom component. This situation is similar to another example of an extended geometric (non-topological) QFT - the 2D Yang-Mills theory. In this case, the area form affects the QFT 2-functor only at the top-dimension stratum (and thus "obstructs" the cobordism hypothesis), cf. [14]. _Remark 2.11_.: The case \(n=1\) of the formalism of this section is slightly (ind consequentially) different from the non-extended functorial picture of Section 2.1, with target category \(\mathsf{T}^{1}\) instead of Hilb, with boundaries mapped to algebras of smooth functions on boundary values of fields rather than Hilbert spaces of \(L^{2}\) functions of boundary values of fields. In the extended setting, we cannot use \(L^{2}\) functions for two reasons: (a) they don't form an algebra and (b) the pullback (33) of a function of field values on codim=2 corner vertices to a codim=1 boundary is generally not square integrable. (i.e. our QFT functor applied to the inclusion of a corner graph into the boundary graph does not land in the \(L^{2}\) space). Figure 3. Composition of 2-morphisms. ## 3. Gaussian theory ### Gaussian theory on a closed graph Consider the free case of the model (21), with the interaction \(p(\phi)\) set to zero. The action is quadratic \[S_{X}(\phi)=\frac{1}{2}(\phi,K_{X}\phi), \tag{35}\] where the kinetic operator is \[K_{X}\colon\;=\Delta_{X}+m^{2}\] - it is a positive self-adjoint operator on \(F_{X}\). Let us denote its inverse \[G_{X}\colon\;=(K_{X})^{-1}\] - the "Green's function" or "propagator;" we will denote matrix elements of \(G_{X}\) in the basis of vertices by \(G_{X}(u,v)\), for \(u,v\in V_{X}\). The partition function (23) for a closed graph \(X\) is the Gaussian integral \[Z_{X}=\int_{F_{X}}D\phi\;e^{-\frac{1}{h}S_{X}(\phi)}=\det(K_{X})^{-\frac{1}{2}}. 
\tag{36}\] The correlator (24) is given by Wick's lemma, as a moment of the Gaussian measure: \[\langle\phi(v_{1})\cdots\phi(v_{2m})\rangle=\hbar^{m}\sum_{\text{ partitions }\{1,\ldots,2m\}=\cup_{i=1}^{m}\{a_{i},b_{i}\}}G_{X}(v_{a_{1}},v_{b_{1}})\cdots G _{X}(v_{a_{m}},v_{b_{m}}).\] #### 3.1.1. Examples **Example 3.1**.: Consider the graph \(X\) shown in Figure 4 below: The kinetic operator is \[K_{X}=\left(\begin{array}{ccc}1+m^{2}&-1&0\\ -1&2+m^{2}&-1\\ 0&-1&1+m^{2}\end{array}\right).\] Its determinant is: \[\det K_{X}=m^{2}(1+m^{2})(3+m^{2}) \tag{37}\] and the inverse is \[G_{X}=\frac{1}{m^{2}(1+m^{2})(3+m^{2})}\left(\begin{array}{ccc}1+3m^{2}+m^{ 4}&1+m^{2}&1\\ 1+m^{2}&(1+m^{2})^{2}&1+m^{2}\\ 1&1+m^{2}&1+3m^{2}+m^{4}\end{array}\right). \tag{38}\] **Example 3.2**.: Consider the line graph of length \(N\): Figure 4. A line graph on 3 vertices. Figure 5. A line graph of length \(N\). The kinetic operator is the tridiagonal matrix \[K_{X}=\left(\begin{array}{cccccc}1+m^{2}&-1&&&&\\ -1&2+m^{2}&-1&&\\ &-1&\ddots&\ddots&\\ &&&\ddots&2+m^{2}&-1\\ &&&-1&1+m^{2}\end{array}\right).\] The matrix elements of its inverse are:17 Footnote 17: One finds this by solving the finite difference equation \(-G(i+1,j)+(2+m^{2})G(i,j)-G(i-1,j)=\delta_{ij}\), using the ansatz \(G(i,j)=A_{+}e^{\beta i}+A_{-}e^{-\beta i}\) for \(i\leq j\) and \(G(i,j)=B_{+}e^{\beta i}+B_{-}e^{-\beta i}\) for \(i\geq j\), with \(A_{\pm},B_{\pm}\) some coefficients depending on \(j\). One imposes single-valuedness (“continuity”) at \(i=j\) and “Neumann boundary conditions” \(G(0,j)=G(1,j)\), \(G(N,j)=G(N+1,j)\), which – together with the original equation at \(i=j\) – determines uniquely the solution. One can obtain the determinant from the propagator using the property \(\frac{d}{dm^{2}}\log\det K_{X}=\operatorname{tr}K_{X}^{-1}\frac{d}{dm^{2}}K_{X }=\sum_{i=1}^{N}G_{X}(i,i)\). \[G_{X}(i,j)=\frac{\cosh\beta(N-|i-j|)+\cosh\beta(N+1-i-j)}{2\sinh\beta\,\sinh \beta N},\quad 1\leq i,j\leq N, \tag{39}\] where \(\beta\) is related to \(m\) by \[\sinh\frac{\beta}{2}=\frac{m}{2}. \tag{40}\] The determinant is: \[\det K_{X}=2\tanh\frac{\beta}{2}\,\sinh\beta N. \tag{41}\] **Example 3.3**.: Consider the circle graph with \(N\) vertices shown in Figure 6 below: The kinetic operator is: \[K_{X}=\left(\begin{array}{cccccc}2+m^{2}&-1&&&&-1\\ -1&2+m^{2}&-1&&&\\ &-1&\ddots&\ddots&&\\ &&&\ddots&2+m^{2}&-1\\ -1&&&-1&2+m^{2}\end{array}\right).\] Figure 6. A circle graph with \(N\) vertices. (We are only writing the nonzero entries.) Its inverse is given by \[G_{X}(i,j)=\frac{\cosh\beta(\frac{N}{2}-|i-j|)}{2\sinh\beta\,\sinh\frac{\beta N}{ 2}},\quad 1\leq i,j\leq N. \tag{42}\] Here \(\beta\) is as in (40). The determinant is: \[\det K_{X}=4\sinh^{2}\frac{\beta N}{2}. \tag{43}\] For instance, for \(N=3\) we obtain \[G_{X}=\frac{1}{m^{2}(m^{2}+3)}\begin{pmatrix}m^{2}+1&1&1\\ 1&m^{2}+1&1\\ 1&1&m^{2}+1\end{pmatrix} \tag{44}\] and \[\det K_{X}=m^{2}(m^{2}+3)^{2}. \tag{45}\] ### Gaussian theory relative to the boundary Consider the Gaussian theory on a graph \(X\) with "boundary subgraph" \(Y\subset X\). #### 3.2.1. Dirichlet problem Consider the following "Dirichlet problem." For a fixed field configuration on the boundary \(\phi_{Y}\in F_{Y}\), we are looking for a field configuration on \(X\), \(\phi\in F_{X}\) such that \[\phi|_{Y} = \phi_{Y}, \tag{47}\] \[(K_{X}\phi)(v) = 0\,\,\,\mbox{for all}\,\,v\in V_{X}\setminus V_{Y}. 
\tag{46}\] Equivalently, we are minimizing the action (35) on the fiber \(F_{X}^{\phi_{Y}}\) of the evaluation-on-\(Y\) map \(F_{X}\to F_{Y}\) over \(\phi_{Y}\). The solution exists and is unique due to convexity and nonnegativity of \(S_{X}\). Let us write the inverse of \(K_{X}\) as a \(2\times 2\) block matrix according to partition of vertices of \(X\) into (1) not belonging to \(Y\) ("bulk vertices") or (2) belonging to \(Y\) ("boundary vertices"): \[(K_{X})^{-1}=\left(\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right). \tag{48}\] Note that this matrix is symmetric, so \(A\) and \(D\) are symmetric and \(C=B^{T}\). Then, we can write the solution of the Dirichlet problem as follows: (47) implies \(K_{X}\phi=\left(\begin{array}{c}0\\ \xi\end{array}\right)\) for some \(\xi\in F_{Y}\). Hence, \[\phi=(K_{X})^{-1}\left(\begin{array}{c}0\\ \xi\end{array}\right)=\left(\begin{array}{c}B\xi\\ D\xi\end{array}\right)\underset{(\ref{eq:K_X})}{=}\left(\begin{array}{c}0\\ \phi_{Y}\end{array}\right).\] Therefore, \(\xi=D^{-1}\phi_{Y}\) and the solution of the Dirichlet problem is \[\phi=\left(\begin{array}{c}BD^{-1}\phi_{Y}\\ \phi_{Y}\end{array}\right). \tag{49}\] #### 3.2.2. Dirichlet-to-Neumann operator Note also that the evaluation of the action \(S_{X}\) on the solution of the Dirichlet problem is \[S_{X}(\phi)=\frac{1}{2}(\phi,K_{X}\phi)=\frac{1}{2}\Big{(}\left( \begin{array}{c}BD^{-1}\phi_{Y}\\ \phi_{Y}\end{array}\right),\left(\begin{array}{c}0\\ \xi\end{array}\right)\Big{)}\\ =\frac{1}{2}(\phi_{Y},\xi)=\frac{1}{2}(\phi_{Y},D^{-1}\phi_{Y}). \tag{50}\] The map sending \(\phi_{Y}\) to the corresponding \(\xi\) (i.e. the kinetic operator evaluated on the solution of the Dirichlet problem) is a combinatorial analog of the Dirichlet-to-Neumann operator.18 We will call the operator \(\mathrm{DN}_{Y,X}\colon=D^{-1}\colon F_{Y}\to F_{Y}\) the (combinatorial) Dirichlet-to-Neumann operator.19 Footnote 18: Recall that in the continuum setting, for \(X\) a manifold with boundary, the Dirichlet-to-Neumann operator \(\mathrm{DN}\colon C^{\infty}(\partial X)\to C^{\infty}(\partial X)\) maps a smooth function \(\phi_{\partial}\) to the normal derivative \(\partial_{n}\phi(x)\) on \(\partial X\) of the solution \(\phi\) of the Helmholtz equation \((\Delta+m^{2})\phi=0\) subject to Dirichlet boundary condition \(\phi|_{\partial}=\phi_{\partial}\). Footnote 19: We put the subscripts in \(\mathrm{DN}_{Y,X}\) to emphasize that we are extending \(\phi_{Y}\) into \(X\) as a solution of (47). When we will discuss gluing, the same \(Y\) can be a subgraph of two different graphs \(X^{\prime},X^{\prime\prime}\); then it is important into which graph we are extending \(\phi_{Y}\). Recall (see e.g. [16]) that in the continuum setting, the action of the free massive scalar field on a manifold with boundary, evaluated on a classical solution with Dirichlet boundary condition \(\phi_{\partial}\) is \(\int_{\partial X}\frac{1}{2}\phi_{\partial}\mathrm{DN}(\phi_{\partial})\). Comparing with (50) reinforces the idea that is reasonable to call \(D^{-1}\) the Dirichlet-to-Neumann operator. We will denote the operator \(BD^{-1}\) appearing in (49) by \[E_{Y,X}=BD^{-1} \tag{51}\] - the "extension" operator (extending \(\phi_{Y}\) into the bulk of \(X\) as a solution of the Dirichlet problem).20 Footnote 20: In [23], this operator is called the _Poisson operator_. #### 3.2.3. 
Partition function and correlators (relative to a boundary subgraph) Let us introduce a notation for the blocks of the matrix \(K_{X}\) corresponding to splitting of the vertices of \(X\) into bulk and boundary vertices, similarly to (48): \[K_{X}=\left(\begin{array}{c|c}\widehat{A}=K_{X,Y}&\widehat{B}\\ \hline\widehat{C}&\widehat{D}\end{array}\right). \tag{52}\] The partition function relative to \(Y\) (cf. (29)) is again given by a Gaussian integral \[Z_{X,Y}(\phi_{Y})=\int_{F_{X}^{\phi_{Y}}}[D\phi]^{\phi_{Y}}e^{- \frac{1}{h}(S_{X}(\phi)-\frac{1}{2}S_{Y}(\phi_{Y}))}\\ =\det(K_{X,Y})^{-\frac{1}{2}}e^{-\frac{1}{2h}(\phi_{Y},(\mathrm{ DN}_{Y,X}-\frac{1}{2}K_{Y})\phi_{Y})}. \tag{53}\] The normalized correlators (depending on the boundary field \(\phi_{Y}\)) are as follows. * 1-point correlator:21 Footnote 21: When specifying that a vertex \(v\) is in \(V_{X}\setminus V_{Y}\) we will use a shorthand and write \(v\in X\setminus Y\). \[\langle\phi(v)\rangle_{\phi_{Y}}=(E_{Y,X}\phi_{Y})(v),\quad v\in X\setminus Y. \tag{54}\] * Centered \(2m\)-point correlator: (55) \[\langle\delta\phi(v_{1})\cdots\delta\phi(v_{2m})\rangle_{\phi_{Y}}=\\ =\hbar^{m}\sum_{\text{partitions }\{1,\dots,2m\}=\cup_{i=1}^{m}\{a_{i },b_{i}\}}G_{X,Y}(v_{a_{1}},v_{b_{1}})\cdots G_{X,Y}(v_{a_{m}},v_{b_{m}}),\\ v_{1},\dots,v_{2m}\in X\setminus Y.\] Here: * \(\delta\phi(v)\colon=\phi(v)-\langle\phi(v)\rangle_{\phi_{Y}}=\phi(v)-(E_{Y,X} \phi_{Y})(v)\) is the fluctuation of the field w.r.t. its average; * \(G_{X,Y}\colon=(K_{X,Y})^{-1}\) is the "propagator with Dirichlet boundary condition on \(Y\)" (or "propagator relative to \(Y\)"). * Non-centered correlators follow from (55), e.g. (56) \[\langle\phi(v_{1})\phi(v_{2})\rangle_{\phi_{Y}}=\hbar\,G_{X,Y}(v_{1},v_{2})+( E_{Y,X}\phi_{Y})(v_{1})\cdot(E_{Y,X}\phi_{Y})(v_{2}).\] _Remark 3.4_.: In our notations, the subscript \(X,Y\) (as in \(K_{X,Y}\), \(G_{X,Y}\), \(Z_{X,Y}\)) stands for an object on \(X\) relative to \(Y\).22 On the other hand, the subscript \(Y,X\) (as in \(\operatorname{DN}_{Y,X}\), \(E_{Y,X}\)) refers to an object related to extending a field on \(Y\) to a classical solution in the "bulk" \(X\). Footnote 22: I.e. we think of \((X,Y)\) as a _pair_ of 1-dimensional CW complexes, where “pair” has the same meaning as in, e.g., the long exact sequence in cohomology of a pair. #### 3.2.4. Examples **Example 3.5**.: Consider the graph \(X\) shown in Figure 7 below, relative the subgraph \(Y\) consisting solely of the vertex 2. The full kinetic operator is \[K_{X}=\left(\begin{array}{c|c}1+m^{2}&-1\\ \hline-1&1+m^{2}\end{array}\right)\] and the relative version is its top left block, \(K_{X,Y}=1+m^{2}\). The relative propagator is \(G_{X,Y}=K_{X,Y}^{-1}=\frac{1}{1+m^{2}}\). The inverse of the full kinetic operator is \[K_{X}^{-1}=\frac{1}{m^{2}(2+m^{2})}\left(\begin{array}{c|c}1+m^{2}&1\\ \hline 1&1+m^{2}\end{array}\right).\] The DN operator is the inverse of the bottom right block: \(\operatorname{DN}_{Y,X}=\frac{m^{2}(2+m^{2})}{1+m^{2}}\) and the extension operator (51) is \(E_{Y,X}=\frac{1}{1+m^{2}}\). Figure 7. A graph with two vertices relative to one vertex. In particular, the relative partition function is \[Z_{X,Y}(\phi_{Y})=(1+m^{2})^{-\frac{1}{2}}\,e^{-\frac{1}{2\hbar}\left(\frac{m^{2}( 2+m^{2})}{1+m^{2}}-\frac{m^{2}}{2}\right)\phi_{Y}^{2}}.\] **Example 3.6**.: Consider the line graph of length \(N\) relative to the subgraph consisting of the right endpoint \(Y=\{N\}\) (Figure 8). 
The relative propagator is \[G_{X,Y}(i,j)=\frac{\sinh\beta(N-\frac{1}{2}-|i-j|)+\sinh\beta(N+\frac{1}{2}-i-j )}{2\sinh\beta\,\cosh\beta(N-\frac{1}{2})},\quad 1\leq i,j\leq N-1,\] with \(\beta\) as in (40). The DN operator is the inverse of the \(N-N\) block (element) of the absolute propagator (39): \[\mathrm{DN}_{Y,X}=\frac{2\sinh\frac{\beta}{2}\,\sinh\beta N}{\cosh\beta(N- \frac{1}{2})}.\] The extension operator is \[E_{Y,X}(i,N)=\frac{\cosh\beta(i-\frac{1}{2})}{\cosh\beta(N-\frac{1}{2})}, \quad 1\leq i\leq N-1\] and the determinant is \[\det K_{X,Y}=\frac{\cosh\beta(N-\frac{1}{2})}{\cosh\frac{\beta}{2}}. \tag{57}\] **Example 3.7**.: Consider again the line graph, but now relative to both left and right endpoints, see Figure 9 below. Then we have: \[G_{X,Y}(i,j)=\frac{\cosh\beta(N-1-|i-j|)-\cosh\beta(N+1-i-j)}{2 \sinh\beta\,\sinh\beta(N-1)},\ \ 2\leq i,j\leq N-1, \tag{59}\] \[\mathrm{DN}_{Y,X}=\frac{2\sinh\frac{\beta}{2}}{\sinh\beta(N-1)} \left(\begin{array}{cc}\cosh\beta(N-\frac{1}{2})&-\cosh\frac{\beta}{2}\\ -\cosh\frac{\beta}{2}&\cosh\beta(N-\frac{1}{2})\end{array}\right),\] (60) \[E_{Y,X}(i,1)=\frac{\sinh\beta(N-i)}{\sinh\beta(N-1)},\ \ E_{Y,X}(i,N)=\frac{ \sinh\beta(i-1)}{\sinh\beta(N-1)},\] (61) \[\det K_{X,Y}=\frac{\sinh\beta(N-1)}{\sinh\beta}. \tag{58}\] Figure 8. A line graph relative to one endpoint. Figure 9. A line graph relative to both endpoints. ### Gluing in Gaussian theory. Gluing of propagators and determinants #### 3.3.1. Cutting a closed graph Consider a closed graph \(X=X^{\prime}\cup_{Y}X^{\prime\prime}\) obtained from graphs \(X^{\prime},X^{\prime\prime}\) by gluing along a common subgraph \(X^{\prime}\supset Y\subset X^{\prime\prime}\). **Theorem 3.8**.: _(a) The propagator on \(X\) is expressed in terms the data (propagators, DN operators, extension operators) on \(X^{\prime},X^{\prime\prime}\) relative to \(Y\) as follows._ * _For both vertices_ \(v_{1},v_{2}\in X^{\prime}\)_:_ (62) \[G_{X}(v_{1},v_{2})=G_{X^{\prime},Y}(v_{1},v_{2})+\\ +\sum_{u_{1},u_{2}\in Y}E_{Y,X^{\prime}}(v_{1},u_{1})\mathrm{DN} _{Y,X}^{-1}(u_{1},u_{2})E_{Y,X^{\prime}}(v_{2},u_{2}).\] _For both vertices in_ \(X^{\prime\prime}\)_, the formula is similar. Here the total DN operator is_ (63) \[\mathrm{DN}_{Y,X}=\mathrm{DN}_{Y,X^{\prime}}+\mathrm{DN}_{Y,X^{\prime\prime} }-K_{Y}.\] _Also, we assume by convention that_ \(G_{X^{\prime},Y}(v_{1},v_{2})=0\) _if either of_ \(v_{1},v_{2}\) _is in_ \(Y\)_. We also set_ \(E_{Y,X^{\prime}}(u,v)=\delta_{u,v}\) _if_ \(u,v\in Y\)_._ * _For_ \(v_{1}\in X^{\prime}\)_,_ \(v_{2}\in X^{\prime\prime}\)_,_ (64) \[G_{X}(v_{1},v_{2})=\sum_{u_{1},u_{2}\in Y}E_{Y,X^{\prime}}(v_{1},u_{1}) \mathrm{DN}_{Y,X}^{-1}(u_{1},u_{2})E_{Y,X^{\prime\prime}}(v_{2},u_{2})\] _and similarly for_ \(v_{1}\in X^{\prime\prime}\)_,_ \(v_{2}\in X^{\prime}\)_._ * _The determinant of_ \(K_{X}\) _is expressed in terms of the data on_ \(X^{\prime},X^{\prime\prime}\) _relative to_ \(Y\) _as follows:_ (65) \[\det K_{X}=\det(K_{X^{\prime},Y})\det(K_{X^{\prime\prime},Y})\det(\mathrm{DN }_{Y,X}).\] We will give three proofs of these gluing formulae: 1. From Fubini theorem for the "functional integral" (QFT/second quantization approach). 2. From inverting a \(2\times 2\) block matrix via Schur complement and Schur's determinant formula. 3. From path counting (first quantization approach) - later, in Section 5.5. #### 3.3.2. 
Proof 1 ("functional integral approach") First, consider the partition function on \(X\) relative to \(Y\): \[Z_{X,Y}(\phi_{Y})=\int_{F_{X}^{\phi_{Y}}}[D\phi]^{\phi_{Y}}e^{- \frac{1}{\hbar}(S_{X}(\phi)-\frac{1}{2}S_{Y}(\phi_{Y}))}\underset{(\ref{eq: 1})}{=}\\ =\int_{F_{X^{\prime}}^{\phi_{Y}}}[D\phi^{\prime}]^{\phi_{Y}}\int _{F_{X^{\prime\prime}}^{\phi_{Y}}}[D\phi^{\prime\prime}]^{\phi_{Y}}e^{-\frac{ 1}{\hbar}(S_{X^{\prime}}(\phi^{\prime})+S_{X^{\prime\prime}}(\phi^{\prime \prime})-\frac{3}{2}S_{Y}(\phi_{Y}))}\\ =(\det K_{X^{\prime},Y})^{-\frac{1}{2}}(\det K_{X^{\prime\prime},Y})^{-\frac{1}{2}}e^{-\frac{1}{2\hbar}(\phi_{Y},(\mathrm{DN}_{Y,X^{\prime}}+ \mathrm{DN}_{Y,X^{\prime\prime}}-\frac{3}{2}K_{Y})\phi_{Y})}. \tag{66}\] Comparing the r.h.s. with (53) as functions of \(\hbar\), we obtain the formula (63) for the total DN operator and the relation for determinants \[\det K_{X,Y}=\det K_{X^{\prime},Y}\cdot\det K_{X^{\prime\prime},Y}.\] The partition function on \(X\) can be obtained by integrating (66) over the field on the "gluing interface" \(Y\): \[Z_{X}=\int_{F_{X}}D\phi\,e^{-\frac{1}{\hbar}S_{X}(\phi)}=\int_{F_{Y}}D\phi_{Y}\,e ^{\frac{1}{2\hbar}S_{Y}(\phi_{Y})}Z_{X,Y}(\phi_{Y})=\\ =\int_{F_{Y}}D\phi_{Y}(\det K_{X,Y})^{-\frac{1}{2}}e^{-\frac{1}{2 \hbar}(\phi_{Y},\operatorname{DN}_{Y,X}\phi_{Y})}=(\det K_{X,Y})^{-\frac{1}{2 }}(\det\operatorname{DN}_{Y,X})^{-\frac{1}{2}}.\] Comparing the r.h.s. with (36), we obtain the gluing formula for determinants (65). Next, we prove the gluing formula for propagators thinking of them as 2-point correlation functions. We denote by \(\ll\cdots\gg\) correlators not normalized by the partition function. Consider the case \(v_{1},v_{2}\in X^{\prime}\). We have \[\begin{split}&\underbrace{\ll\phi(v_{1})\phi(v_{2})}_{\hbar \,G_{X}(v_{1},v_{2})\cdot Z_{X}}=\int_{F_{X}}D\phi\,e^{-\frac{1}{\hbar}S_{X}( \phi)}\phi(v_{1})\phi(v_{2})=\\ =&\int_{F_{Y}}D\phi_{Y}\underbrace{\int_{F^{\phi_{Y} }_{X^{\prime}}}[D\phi^{\prime}]^{\phi_{Y}}\phi^{\prime}(v_{1})\phi^{\prime}(v _{2})e^{-\frac{1}{\hbar}(S_{X^{\prime}}(\phi^{\prime})-\frac{1}{2}S_{Y}(\phi_ {Y}))}}_{\ll\phi(v_{1})\phi(v_{2})\gg_{\phi_{Y}}^{X^{\prime}}(\phi_{Y})\cdot( \hbar\,G_{X^{\prime},Y}(v_{1},v_{2})+(E_{Y,X^{\prime}}\phi_{Y})(v_{1})\cdot(E_ {Y,X^{\prime}}\phi_{Y})(v_{2}))}\\ &\qquad\qquad\qquad\cdot\underbrace{\int_{F^{\phi_{Y}}_{X^{ \prime\prime}}}[D\phi^{\prime\prime}]^{\phi_{Y}}e^{-\frac{1}{\hbar}(S_{X^{ \prime\prime}}(\phi^{\prime\prime})-\frac{1}{2}S_{Y}(\phi_{Y}))}}_{Z_{X^{ \prime\prime},Y}(\phi_{Y})}\\ =&\int_{F_{Y}}D\phi_{Y}(\det K_{X^{\prime},Y})^{- \frac{1}{2}}(\det K_{X^{\prime\prime},Y})^{-\frac{1}{2}}e^{-\frac{1}{2\hbar}( \phi_{Y},\operatorname{DN}_{Y,X}\phi_{Y})}.\\ &\quad\cdot(\hbar\,G_{X^{\prime},Y}(v_{1},v_{2})+\sum_{u_{1},u_{2 }\in Y}E_{Y,X^{\prime}}(v_{1},u_{1})\phi_{Y}(u_{1})\phi_{Y}(u_{2})E_{Y,X^{ \prime}}(v_{2},u_{2}))\\ =& Z_{X}\cdot\hbar(G_{X^{\prime},Y}(v_{1},v_{2})+\sum _{u_{1},u_{2}\in Y}E_{Y,X^{\prime}}(v_{1},u_{1})\operatorname{DN}_{Y,X}^{-1}( u_{1},u_{2})E_{Y,X^{\prime}}(v_{2},u_{2})).\end{split}\] This proves the gluing formula (62). Finally, consider the case \(v_{1}\in X^{\prime}\), \(v_{2}\in X^{\prime\prime}\). 
By a similar computation we find \[\begin{split}\underbrace{\ll\phi(v_{1})\phi(v_{2})\gg}_{\hbar\,G_{X}(v_{1},v_{2})\cdot Z_{X}}&=\int_{F_{Y}}D\phi_{Y}\underbrace{\int_{F^{\phi_{Y}}_{X^{\prime}}}[D\phi^{\prime}]^{\phi_{Y}}\phi^{\prime}(v_{1})\,e^{-\frac{1}{\hbar}(S_{X^{\prime}}(\phi^{\prime})-\frac{1}{2}S_{Y}(\phi_{Y}))}}_{=\ll\phi(v_{1})\gg_{\phi_{Y}}^{X^{\prime}}=Z_{X^{\prime},Y}(\phi_{Y})\cdot(E_{Y,X^{\prime}}\phi_{Y})(v_{1})}\cdot\underbrace{\int_{F^{\phi_{Y}}_{X^{\prime\prime}}}[D\phi^{\prime\prime}]^{\phi_{Y}}\phi^{\prime\prime}(v_{2})\,e^{-\frac{1}{\hbar}(S_{X^{\prime\prime}}(\phi^{\prime\prime})-\frac{1}{2}S_{Y}(\phi_{Y}))}}_{=\ll\phi(v_{2})\gg_{\phi_{Y}}^{X^{\prime\prime}}=Z_{X^{\prime\prime},Y}(\phi_{Y})\cdot(E_{Y,X^{\prime\prime}}\phi_{Y})(v_{2})}\\ &=\int_{F_{Y}}D\phi_{Y}\,(\det K_{X^{\prime},Y})^{-\frac{1}{2}}(\det K_{X^{\prime\prime},Y})^{-\frac{1}{2}}\,e^{-\frac{1}{2\hbar}(\phi_{Y},\operatorname{DN}_{Y,X}\phi_{Y})}\cdot\sum_{u_{1},u_{2}\in Y}E_{Y,X^{\prime}}(v_{1},u_{1})\,\phi_{Y}(u_{1})\phi_{Y}(u_{2})\,E_{Y,X^{\prime\prime}}(v_{2},u_{2})\\ &=Z_{X}\cdot\hbar\sum_{u_{1},u_{2}\in Y}E_{Y,X^{\prime}}(v_{1},u_{1})\operatorname{DN}_{Y,X}^{-1}(u_{1},u_{2})E_{Y,X^{\prime\prime}}(v_{2},u_{2}).\end{split}\] This proves (64). #### 3.3.3. Proof 2 (Schur complement approach) Let us introduce the notations \[\overline{G}_{X,Y}=\left(\begin{array}{c|c}G_{X,Y}&0\\ \hline 0&0\end{array}\right),\quad\overline{E}_{Y,X}=\left(\begin{array}{c}E_{Y,X}\\ \mathrm{id}\end{array}\right) \tag{67}\] for the extension of the propagator on \(X\) relative to \(Y\) by zero to vertices of \(Y\) and the extension of the extension operator by identity to vertices of \(Y\) (the blocks correspond to vertices of \(X\setminus Y\) and vertices of \(Y\), respectively).23 Using these notations, gluing formulae (62), (64) for the propagator can be jointly expressed as Footnote 23: Note that one can further refine the block decompositions (67) according to partitioning of vertices in \(X\setminus Y\) into those in \(X^{\prime}\setminus Y\) and those in \(X^{\prime\prime}\setminus Y\).
Then the block \(G_{X,Y}\) becomes \(\left(\begin{array}{cc}G_{X^{\prime},Y}&0\\ 0&G_{X^{\prime\prime},Y}\end{array}\right)\) and the block \(E_{Y,X}\) becomes \(\left(\begin{array}{c}E_{Y,X^{\prime}}\\ E_{Y,X^{\prime\prime}}\end{array}\right)\). \[G_{X}\stackrel{{!}}{{=}}\overline{G}_{X,Y}+ \overline{E}_{Y,X}{\rm DN}_{Y,X}^{-1}\overline{E}_{Y,X}^{T}. \tag{68}\] The r.h.s. here is \[\left(\begin{array}{c|c}G_{X,Y}+E\,{\rm DN}^{-1}E^{T}&E\,{\rm DN }^{-1}\\ \hline{\rm DN}^{-1}E^{T}&{\rm DN}^{-1}\end{array}\right)=\left(\begin{array}[] {c|c}\widehat{A}^{-1}+BD^{-1}DD^{-1}C&BD^{-1}D\\ \hline DD^{-1}C&D\end{array}\right)\\ =\left(\begin{array}{c|c}\widehat{A}^{-1}+BD^{-1}C&B\\ \hline C&D\end{array}\right).\] Here we are suppressing the subscript \(Y,X\) for \(E\) and \({\rm DN}\); notations for the blocks are as in (48), (52). So, the only part to check is that the 1-1 block above is \(A\). It is a consequence of the inversion formula for \(2\times 2\) block matrices, which in particular asserts that the 1-1 block \(\widehat{A}\) of the matrix \(K_{X}\) inverse to \(G_{X}\) is the inverse of the Schur complement of the 2-2 block in \(G_{X}\), i.e., \[\widehat{A}^{-1}=A-BD^{-1}C.\] This finishes the proof of the gluing formula for propagators (68). Schur's formula for a determinant of a block \(2\times 2\) matrix applied to (48) yields \[\det K_{X}^{-1}=\det\underbrace{D}_{{\rm DN}^{-1}}\cdot\det(\underbrace{A-BD^ {-1}C}_{\widehat{A}^{-1}=K_{X,Y}^{-1}})\] and thus \[\det K_{X}=\det{\rm DN}\cdot\det K_{X,Y}=\det{\rm DN}\cdot\det K_{X^{\prime},Y }\cdot\det K_{X^{\prime\prime},Y}.\] In the last equality we used that \(K_{X,Y}\) is block-diagonal, with blocks corresponding to \(X^{\prime}\setminus Y\) and \(X^{\prime\prime}\setminus Y\). This proves the gluing formula for determinants. #### 3.3.4. Examples **Example 3.9**.: Consider the gluing of two line graphs of length \(2\), \(X^{\prime},X^{\prime\prime}\) over a common vertex \(Y\) into a line graph \(X\) of length \(3\) as pictured in Figure 10 below. The data of the constituent graphs \(X^{\prime},X^{\prime\prime}\) relative to \(Y\) was computed in Example 3.5. We assemble the data on the glued graph \(X\) using the gluing formulae of Theorem 3.8. We have \[\underset{=\mathrm{DN}_{Y,X}(2,2)}{\mathrm{DN}_{Y,X}}=\mathrm{DN}_{Y,X^{ \prime}}+\mathrm{DN}_{Y,X^{\prime\prime}}-K_{Y}=\\ =\frac{m^{2}(2+m^{2})}{1+m^{2}}+\frac{m^{2}(2+m^{2})}{1+m^{2}}-m^ {2}=\frac{m^{2}(3+m^{2})}{1+m^{2}}.\] For the propagator we have, e.g., \[G_{X}(1,1)=G_{X^{\prime},Y}(1,1)+E_{Y,X^{\prime}}(1,2)\mathrm{ DN}_{Y,X}^{-1}(2,2)E_{Y,X^{\prime}}(1,2)\\ =\frac{1}{1+m^{2}}+\frac{1}{1+m^{2}}\cdot\frac{1+m^{2}}{m^{2}(3+m ^{2})}\cdot\frac{1}{1+m^{2}}\] and \[G_{X}(1,3)=E_{Y,X^{\prime}}(1,2)\mathrm{DN}_{Y,X}^{-1}(2,2)E_{Y,X^{\prime \prime}}(3,2)=\frac{1}{1+m^{2}}\cdot\frac{1+m^{2}}{m^{2}(3+m^{2})}\cdot\frac{1 }{1+m^{2}},\] which agrees with the 1-1 entry and 1-3 entry in (38) respectively. For the gluing of determinants, we have \[\det K_{X^{\prime},Y}\cdot\det K_{X^{\prime\prime},Y}\cdot\det\mathrm{DN}_{Y, X}=(1+m^{2})\cdot(1+m^{2})\cdot\frac{m^{2}(3+m^{2})}{1+m^{2}},\] which agrees with (37). **Example 3.10**.: Consider the circle graph \(X\) with \(N\) vertices presented as a gluing by the two endpoints of two line graphs \(X^{\prime}\), \(X^{\prime\prime}\) of lengths \(N^{\prime},N^{\prime\prime}\) respectively, with \(N=N^{\prime}+N^{\prime\prime}-2\), see Figure 11 below. Figure 10. Gluing two line graphs into a longer line graph. 
One can then use the gluing formulae of Theorem 3.8 to recover the propagator and the determinant on the circle graph (cf. Example 3.3) from the data for line graphs relative to the endpoints (cf. Example 3.7). E.g. for the determinant, we have \[\underbrace{\frac{\sinh\beta(N^{\prime}-1)}{\sinh\beta}}_{\det K_{X^ {\prime},Y}}\cdot\underbrace{\frac{\sinh\beta(N^{\prime\prime}-1)}{\sinh\beta} }_{\det K_{X^{\prime\prime},Y}}\cdot\det\left(\text{DN}_{Y,X^{\prime}}+\text{ DN}_{Y,X^{\prime\prime}}-\left(\begin{array}{cc}m^{2}&0\\ 0&m^{2}\end{array}\right)\right)=\\ =\underbrace{4\sinh^{2}\frac{\beta N}{2}}_{\det K_{X}}.\] Here the \(2\times 2\) matrices \(\text{DN}_{Y,X^{\prime}}\), \(\text{DN}_{Y,X^{\prime\prime}}\) are given by (59), with \(N\) replaced by \(N^{\prime},N^{\prime\prime}\), respectively. #### 3.3.5. General cutting/gluing of cobordisms Consider the gluing of graph cobordisms (26), \[Y_{1}\xrightarrow{X^{\prime}}Y_{2}\xrightarrow{X^{\prime\prime}}Y_{3}\quad= \quad Y_{1}\xrightarrow{X}Y_{3}.\] Let us introduce the following shorthand notations * DN operators: \(\text{DN}_{ij,A}\colon=\text{DN}_{Y_{i}\sqcup Y_{j},A}\), with \(i,j\in\{1,2,3\}\) and \(A\in\{X^{\prime},X^{\prime\prime},X\}\). Also, by \((\text{DN}_{ij,A})_{kl}\) we will denote \(Y_{k}-Y_{l}\) block in \(\text{DN}_{ij,A}\). * "Interface" DN operator: \(\text{DN}_{\text{int}}\colon=(\text{DN}_{Y_{1}\sqcup Y_{2}\sqcup Y_{3},X})_{Y _{2}-Y_{2}-\text{block}}\). * Extension operators: \(E_{ij,A}=E_{Y_{i}\sqcup Y_{j},A}\). We will also denote its \(A-Y_{k}\) block by \((E_{ij,A})_{k}\). * Propagators: \(G_{A,ij}\colon=G_{A,Y_{i}\sqcup Y_{j}}\). One has the following straightforward generalization of Theorem 3.8 to the case of possibly nonempty \(Y_{1},Y_{3}\). **Theorem 3.11**.: _The data of the Gaussian theory on the glued cobordism \(Y_{1}\xrightarrow{X}Y_{3}\) can be computed from the data of the constituent cobordisms \(Y_{1}\xrightarrow{X^{\prime}}Y_{2}\), \(Y_{2}\xrightarrow{X^{\prime\prime}}Y_{3}\) as follows_ Figure 11. Gluing a circle from two intervals. 1. _Glued DN operator_ \(\mathrm{DN}_{13,X}\)_:_ (69) \[\left(\begin{array}{c}(\mathrm{DN}_{12,X^{\prime}})_{11}-(\mathrm{DN}_{12,X^{ \prime}})_{12}\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{12,X^{\prime}})_{21} &-(\mathrm{DN}_{12,X^{\prime}})_{12}\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN }_{23,X^{\prime\prime}})_{23}\\ -(\mathrm{DN}_{23,X^{\prime\prime}})_{32}\mathrm{DN}_{\mathrm{int}}^{-1}( \mathrm{DN}_{12,X^{\prime}})_{21}&(\mathrm{DN}_{23,X^{\prime\prime}})_{33}-( \mathrm{DN}_{23,X^{\prime\prime}})_{32}\mathrm{DN}_{\mathrm{int}}^{-1}( \mathrm{DN}_{23,X^{\prime\prime}})_{23}\end{array}\right).\] _The blocks correspond to vertices of_ \(Y_{1}\) _and_ \(Y_{3}\)_. The interface DN operator here is_ (70) \[\mathrm{DN}_{\mathrm{int}}=(\mathrm{DN}_{12,X^{\prime}})_{22}+(\mathrm{DN}_{ 23,X^{\prime\prime}})_{22}-K_{Y_{2}}.\] 2. 
_Extension operator_ \(E_{13,X}\)_:_ (71) \[\left(\begin{array}{cc}(E_{12,X^{\prime}})_{1}-(E_{12,X^{\prime}})_{2} \mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{12,X^{\prime}})_{21}&-(E_{12,X^{ \prime}})_{2}\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{23,X^{\prime\prime }})_{23}\\ -\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{12,X^{\prime}})_{21}&-\mathrm{ DN}_{11}^{-1}(\mathrm{DN}_{23,X^{\prime\prime}})_{23}\\ -(E_{23,X^{\prime\prime}})_{2}\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{12,X^{\prime}})_{21}&(E_{23,X^{\prime\prime}})_{3}-(E_{23,X^{\prime\prime}})_{2 }\mathrm{DN}_{\mathrm{int}}^{-1}(\mathrm{DN}_{23,X^{\prime\prime}})_{23}\end{array} \right).\] _Here horizontally, the blocks correspond to vertices of_ \(Y_{1}\)_,_ \(Y_{3}\)_; vertically - to vertices of_ \(X^{\prime}\setminus(Y_{1}\sqcup Y_{2})\)_,_ \(Y_{2}\) _and_ \(X^{\prime\prime}\setminus(Y_{2}\sqcup Y_{3})\)_._ 3. _Determinant:_ (72) \[\det K_{X,Y_{1}\sqcup Y_{3}}=\det K_{X^{\prime},Y_{1}\sqcup Y_{2}}\cdot\det K _{X^{\prime\prime},Y_{2}\sqcup Y_{3}}\cdot\det\mathrm{DN}_{\mathrm{int}}.\] 4. _Propagator:_ * _For_ \(v_{1},v_{2}\in X^{\prime}\)_,_ (73) \[G_{X,13}(v_{1},v_{2})=G_{X^{\prime},12}(v_{1},v_{2})+\sum_{u_{1},u_{2}\in Y}E_ {12,X^{\prime}}(v_{1},u_{1})\mathrm{DN}_{\mathrm{int}}^{-1}(u_{1},u_{2})E_{12,X^{\prime}}(v_{2},u_{2})\] _and similarly for_ \(v_{1},v_{2}\in X^{\prime\prime}\)_._ * _For_ \(v_{1}\in X^{\prime}\)_,_ \(v_{2}\in X^{\prime\prime}\)_,_ (74) \[G_{X,13}(v_{1},v_{2})=\sum_{u_{1},u_{2}\in Y}E_{12,X^{\prime}}(v_{1},u_{1}) \mathrm{DN}_{\mathrm{int}}^{-1}(u_{1},u_{2})E_{23,X^{\prime\prime}}(v_{2},u_{2})\] _and similarly for_ \(v_{1}\in X^{\prime\prime}\)_,_ \(v_{2}\in X^{\prime}\)_._ #### 3.3.6. Self-gluing and trace formula As another generalization of Theorem 3.8, one can consider the case of a graph \(X\) relative to a subgraph \(Y\) that admits a decomposition \(Y=Y_{1}\sqcup Y_{2}\) where \(Y_{1}\) and \(Y_{2}\) are isomorphic graphs. Then, specifying a graph isomorphism \(f\colon Y_{1}\to Y_{2}\), we can glue \(Y_{1}\) to \(Y_{2}\) using \(f\) to form a new graph \(\tilde{X}\) with a distinguished subgraph \(\tilde{Y}\).24 We have \(\tilde{Y}\cong Y_{1}\cong Y_{2}\) if and only if there are no edges between \(Y_{1}\) and \(Y_{2}\). See Figure 12. Footnote 24: In the setting of theorem 3.8, we have \(X=X^{\prime}\sqcup X^{\prime\prime}\), and there are no edges between \(X^{\prime}\) and \(X^{\prime\prime}\). In the following discussion we will suppress \(f\) but remark that in principle the glued graphs \(\tilde{X}\) and \(\tilde{Y}\) do depend on \(f\). Then one has the following relation between the Dirichlet-to-Neumann operators of \(Y\) relative to \(X\) and \(\tilde{Y}\) relative to \(\tilde{X}\): Figure 12. An example of self-gluing. **Proposition 3.12**.: _Let \(\phi\in C^{0}(Y_{1})\simeq C^{0}(Y_{2})\simeq C^{0}(\tilde{Y})\), then25_ Footnote 25: Below we are identifying using \(f\) to identify \(V(Y_{1})\) and \(V(Y_{2})\), and then also \(\phi\) and \((f^{-1})^{*}\phi\). \[\left((\phi,\phi),\left(\operatorname{DN}_{Y,X}-\frac{1}{2}K_{Y}\right)\begin{pmatrix} \phi\\ \phi\end{pmatrix}\right)=(\phi,\operatorname{DN}_{\tilde{Y},\tilde{X}}\phi)- \left(\phi,\left(\frac{1}{2}K_{\tilde{Y}}-\frac{1}{2}K_{Y_{1}}\right)\phi \right). \tag{75}\] _Equivalently,_ \[\left((\phi,\phi),\left(\operatorname{DN}_{Y,X}-\frac{1}{2}K_{Y_{1}}-\frac{1}{ 2}K_{Y_{2}}\right)\begin{pmatrix}\phi\\ \phi\end{pmatrix}\right)=(\phi,\operatorname{DN}_{\tilde{Y},\tilde{X}}\phi). 
\tag{76}\] Proof.: We have \[S_{X}=S_{X,Y}+S_{Y_{1}}+S_{Y_{2}}+S_{Y_{1},Y_{2}},\] where the first term contains contributions to the action from vertices in \(X\setminus Y\) and edges with at least one vertex in \(X\setminus Y\), while the last term contains just contributions from edges between \(Y_{1}\) and \(Y_{2}\). Evaluating on the subspace of fields \(F_{X}^{(\phi,\phi)}\) that agree on \(Y_{1}\) and \(Y_{2}\), we get \[S_{X}\big{|}_{F_{X}^{(\phi,\phi)}}=S_{X,Y}+2S_{Y_{1}}(\phi)+S_{Y_{1},Y_{2}}(\phi)\] and \(S_{X}-\frac{1}{2}S_{Y}=S_{X,Y}+S_{Y_{1}}+\frac{1}{2}S_{Y_{1},Y_{2}}.\) On the other hand, we have \[S_{\tilde{X}}-\frac{1}{2}S_{\tilde{Y}}\big{|}_{F_{\tilde{X}}^{ \phi}} =S_{\tilde{X},\tilde{Y}}+\frac{1}{2}S_{\tilde{Y}}(\phi)=S_{X,Y}+ \frac{1}{2}S_{Y_{1}}(\phi)+\frac{1}{2}S_{Y_{1},Y_{2}}(\phi)\] \[=S_{X}-\frac{1}{2}S_{Y}(\phi)-\frac{1}{2}S_{Y_{1}}.\] Therefore, \[Z_{X,Y}((\phi,\phi))=Z_{\tilde{X},\tilde{Y}}(\phi)e^{-\frac{1}{2\hbar}S_{Y_{1 }}(\phi)}.\] Noticing that the relative operators agree \(K_{X,Y}=K_{\tilde{X},\tilde{Y}}\), and using (53), we obtain (75). To see (76), notice that the difference \(K_{Y}-K_{Y_{1}}-K_{Y_{2}}=K_{Y_{1},Y_{2}}=K_{\tilde{Y}}-K_{Y_{1}}\), so adding \(\frac{1}{2}K_{Y_{1},Y_{2}}\) to (75) we obtain (76). **Corollary 3.13**.: _We have the following trace formula_ \[Z_{\tilde{X}}=\int_{F_{Y_{1}}}[D\phi]\langle\phi|Z_{X,Y_{1},Y_{2}}|\phi\rangle. \tag{77}\] Proof.: We have \[\langle\phi|Z_{X,Y_{1},Y_{2}}|\phi\rangle) =\det K_{X,Y}^{-\frac{1}{2\hbar}}e^{-\frac{1}{2\hbar}((\phi,\phi),(\operatorname{DN}_{Y,X}-\frac{1}{2}K_{Y_{1}}-\frac{1}{2}K_{Y_{2}})(\phi,\phi)}\] \[=\det K_{\tilde{X},\tilde{Y}}^{-\frac{1}{2\hbar}}e^{-\frac{1}{2 \hbar}((\phi,\operatorname{DN}_{\tilde{Y},\tilde{X}})\phi)}.\] Integrating over \(\phi\), we obtain the result. **Example 3.14** (Gluing a circle graph from a line graph).: For the line graph \(L_{3}\) relative to both endpoints, \[DN_{Y,X}=\frac{m^{2}+3}{m^{2}+2}\begin{pmatrix}m^{2}+1&-1\\ -1&m^{2}+1\end{pmatrix}.\] In this case we have \(K_{Y_{1}}=K_{Y_{2}}=m^{2}\) and \[\begin{pmatrix}1&1\end{pmatrix}\frac{m^{2}+3}{m^{2}+2}\begin{pmatrix}m^{2}+1&- 1\\ -1&m^{2}+1\end{pmatrix}\begin{pmatrix}1\\ 1\end{pmatrix}=\frac{2m^{2}(m^{2}+3)}{m^{2}+2},\] which implies \[\frac{2m^{2}(m^{2}+3)}{m^{2}+2}-\underbrace{m^{2}}_{-\frac{1}{2}K_{Y_{1}}-\frac{1 }{2}K_{Y_{2}}}=\frac{m^{2}(m^{2}+4)}{m^{2}+2}.\] Here \(m^{2}=K_{Y_{1}}\). On the other hand, \(\tilde{X}=C_{2}\) is a circle graph with \(\tilde{Y}\) a point, and we have \[K_{X}=\begin{pmatrix}m^{2}+2&-2\\ -2&m^{2}+2\end{pmatrix},\qquad G_{X}=\frac{1}{m^{2}(m^{2}+4)}\begin{pmatrix}m^ {2}+2&2\\ 2&m^{2}+2\end{pmatrix},\] therefore the corresponding Dirichlet-to-Neumann operator is \[\operatorname{DN}_{\tilde{Y},\tilde{X}}=\frac{m^{2}(m^{2}+4)}{m^{2}+2},\] as predicted by Proposition 3.12. 
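Since all the operators involved are finite matrices, the identity (76) can also be confirmed numerically. The following is a small sketch (numpy, with an arbitrary test value of \(m^{2}\)) for this example of self-gluing \(L_{3}\) into \(C_{2}\).

```python
import numpy as np

m2 = 0.7                                     # arbitrary test value of m^2

def K(n, edges):                             # kinetic operator Delta + m^2 (parallel edges allowed)
    M = m2*np.eye(n)
    for a, b in edges:
        M[a,a] += 1; M[b,b] += 1; M[a,b] -= 1; M[b,a] -= 1
    return M

# Line graph L_3 (vertices 0,1,2), Y = both endpoints {0,2}.
KL = K(3, [(0,1), (1,2)])
DN = np.linalg.inv(np.linalg.inv(KL)[np.ix_([0,2],[0,2])])   # DN_{Y,X}
lhs = np.ones(2) @ DN @ np.ones(2) - m2      # l.h.s. of (76): here K_{Y_1} = K_{Y_2} = m^2

# Self-gluing 0 ~ 2 gives the circle graph C_2 (two vertices joined by two parallel edges).
KC = K(2, [(0,1), (0,1)])
rhs = 1.0/np.linalg.inv(KC)[0,0]             # DN of C_2 relative to the glued vertex
print(lhs, rhs)                              # the two numbers coincide, as in Proposition 3.12
```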
The relative determinant \(K_{Y,X}\) is \(m^{2}+2\) so that the trace formula becomes \[\int_{F_{Y_{1}}}[D\phi]\langle\phi|Z_{X,Y_{1},Y_{2}}|\phi\rangle=(m^{2}+2)^{- \frac{1}{2}}\left(\frac{m^{2}(m^{2}+4)}{m^{2}+2}\right)^{-\frac{1}{2}}=(m^{2}( m^{2}+4))^{-\frac{1}{2}}=Z_{C_{2}}.\] Similarly, for the line graph of length \(N\) relative to both endpoints, the Dirichlet-to-Neumann operator is given by (59) and we have \[\begin{pmatrix}1&1\end{pmatrix}\operatorname{DN}_{Y,X}\begin{pmatrix}1\\ 1\end{pmatrix}=\frac{4\sinh\frac{\beta}{2}(\cosh\beta(N-\frac{1}{2})-\cosh \frac{\beta}{2})}{\sinh\beta(N-1)}.\] On the other hand, the Dirichlet-to-Neumann operator of \(\tilde{X}=C_{N-1}\) relative to a single vertex is \[\operatorname{DN}_{\tilde{Y},\tilde{X}}=2\sinh\beta\tanh\beta\frac{N-1}{2}.\] Then one can check that \[\frac{4\sinh\frac{\beta}{2}(\cosh\beta(N-\frac{1}{2})-\cosh\frac{\beta}{2})}{ \sinh\beta(N-1)}-m^{2}=2\sinh\beta\tanh\beta\frac{N-1}{2}.\] _Remark 3.15_.: There is of course also common generalization of Theorem 3.11 and Proposition 3.12, where we have several boundary components and are allowed sew any two isomorphic components together, we leave this statement to the imagination of the reader. ### Comparison to continuum formulation In this subsection, we compare of results of subsections 3.2 and 3.3 to the continuum counterparts for a free scalar theory on a Riemannian manifold. For details on the latter, we refer to [16]. Consider the free scalar theory on a closed Riemannian manifold \(M\) defined by the action \[S(\phi)=\int_{M}\frac{1}{2}d\phi\wedge*d\phi+\frac{m^{2}}{2}\phi^{2}d\text{vol }=\int_{M}\frac{1}{2}\phi(\Delta+m^{2})\phi\,d\text{vol},\] where \(\phi\in C^{\infty}(M)\) is the scalar field, \(m>0\) is the mass, \(*\) is the Hodge star associated with the metric, \(d\)vol is the metric volume form and \(\Delta\) is the metric Laplacian. The partition function is defined to be \[Z=\text{``}\int D\phi\,e^{-\frac{1}{\hbar}S(\phi)\text{ ''}}\colon=\left(\det^{\zeta}( \Delta+m^{2})\right)^{-\frac{1}{2}},\] where \(\det^{\zeta}\) stands for the functional determinant understood in the sense of zeta-regularization. Correlators are given by Wick's lemma in terms of the Green's function \(G(x,y)\in C^{\infty}(M\times M\setminus\text{Diag})\) of the operator \(\Delta+m^{2}\). Next, if \(M\) is a compact Riemannian manifold with boundary \(\partial M\), one can impose Dirichlet boundary condition \(\phi|_{\partial M}=\phi_{\partial}\) - a fixed function on \(\partial M\) (thus, fluctuations of fields are zero on the boundary). The unique solution of the Dirichlet boundary value problem on \(M\), \[(\Delta+m^{2})\phi=0,\quad\phi|_{\partial}=\phi_{\partial},\] can be written as \[\phi(x)=\int_{\partial M}\partial_{y}^{n}G_{D}(x,y)\,\phi_{\partial}(y)d\text {vol}_{y}^{\partial}. \tag{78}\] Here: * \(d\text{vol}^{\partial}\) is the Riemannian volume form on \(\partial M\) (w.r.t. the induced metric from the bulk). * \(G_{D}\in C^{\infty}(M\times M\setminus\text{Diag})\) is the Green's function for the operator \(\Delta+m^{2}\) with Dirichlet boundary condition. * \(\partial^{n}\) stands for the normal derivative at the boundary. In particular, for \(x\in M\), \(y\in\partial M\), (79) \[\partial_{y}^{n}G_{D}(x,y)=\left.\frac{\partial}{\partial t}\right|_{t=0}G_{D }(x,\widetilde{y}_{t}),\] where \(\widetilde{y}_{t}\), \(t\geq 0\) is a curve in \(M\) starting at \(\widetilde{y}_{0}=y\) with initial velocity being the inward unit normal to the boundary. 
Then on a manifold with boundary one has the partition function \[Z(\phi_{\partial})=\text{ ``}\int_{\phi|_{\partial}=\phi_{ \partial}}D\phi\,e^{-\frac{1}{\hbar}S(\phi)\text{ ''}}\\ =\left(\det^{\zeta}_{D}(\Delta+m^{2})\right)^{-\frac{1}{2}}e^{- \frac{1}{2\hbar}\int_{\partial M}\phi_{\partial}\text{DN}(\phi_{\partial})d \text{vol}^{\partial}}\\ =\left(\det^{\zeta}_{D}(\Delta+m^{2})\right)^{-\frac{1}{2}}e^{ \frac{1}{2\hbar}\int_{\partial M\times\partial M}\phi_{\partial}(x)\,\partial _{x}^{n}\partial_{y}^{n}G_{D}(x,y)\,\phi_{\partial}(y)d\text{vol}_{x}^{ \partial}d\text{vol}_{y}^{\partial}}. \tag{80}\] Here in the determinant in the r.h.s., \(\Delta+m^{2}\) is understood as acting on smooth functions on \(M\) vanishing on \(\partial M\) (which we indicate by the subscript \(D\) for "Dirichlet boundary condition"); DN\(\colon C^{\infty}(\partial M)\to C^{\infty}(\partial M)\) is the Dirichlet-to-Neumann operator (see footnote 18). The integral kernel of the DN operator is \(-\partial_{x}^{n}\partial_{y}^{n}G_{D}(x,y)\). The integral in the exponential in the last line of (80) contains a non-integrable singularity on the diagonal and has to be appropriately regularized, cf. Remark 3.4 in [16]. Correlators on a manifold with boundary are: * One-point correlator: \[\langle\phi(x)\rangle_{\phi_{\partial}}=\int_{\partial M}\partial_{y}^{n}G_{ D}(x,y)\phi_{\partial}(y)d\text{vol}_{y}^{\partial}.\] * Centered two-point correlator: \[\langle\delta\phi(x)\delta\phi(y)\rangle_{\phi_{\partial}}=\hbar\,G_{D}(x,y),\] where \(\delta\phi(x)\colon=\phi(x)-\langle\phi(x)\rangle_{\phi_{0}}\). * \(k\)-point centered correlators are given by Wick's lemma. When more detailed notations of the manifolds involved is needed, instead of \(G_{D}\) we will write \(G_{M,\partial M}\) (and similarly for \(\det_{D}^{\zeta}\)) and instead of DN we will write \(\mathrm{DN}_{\partial M,M}\). Continuing the dictionary of Remark 2.5 to free scalar theory on graphs vs. Riemannian manifolds, we have the following. Scalar theory on a graph \(X\)Scalar theory on a Riemannian manifold \(M\) relative to subgraph \(Y\)\(\begin{array}{c|c}\text{\rm{\rm{\ In the case \(\dim M=2\), the zeta-regularized determinants satisfy a remarkable Mayer-Vietoris type gluing formula due to Burghelea-Friedlander-Kappeler [3], \[\det_{M}^{\zeta}(\Delta+m^{2})=\det_{M^{\prime},\gamma}^{\zeta}(\Delta+m^{2}) \det_{M^{\prime\prime},\gamma}^{\zeta}(\Delta+m^{2})\det_{\gamma}^{\zeta}( \mathrm{DN}_{\mathrm{int}}). \tag{84}\] This formula also holds for higher even dimensions provided that the metric near the cut \(\gamma\) is of warped product type (this is a result of Lee [17]). In odd dimensions, under a similar assumption, the formula is known to hold up to a multiplicative constant known explicitly in terms of the metric on the cut. Note that formulae (81), (83) have the exact same structure as formulae (62), (64) for gluing of graph propagators.27 Likewise, the gluing formulae for determinants in the continuum setting (84) and in graph setting (65) have the same structure. Footnote 27: One small remark is that the continuum formula for the interface DN operator (82) is similar to (63), except for the \(-K_{Y}\) term in the l.h.s. which is specific to the graph setting and disappears in the continuum limit. 
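On the graph side, the counterparts (62)-(65) of these gluing formulae involve only finite matrices and are easy to check directly. The following sketch (numpy, with an arbitrary test graph, test mass, and a two-vertex cut chosen purely for illustration) verifies the interface DN operator (63), the determinant gluing (65) and the cross-cut propagator formula (64).

```python
import numpy as np

m2 = 0.5                                          # test value of m^2 (an arbitrary choice)

def K(verts, edges):
    """Kinetic operator Delta + m^2 of the graph (verts, edges), cf. the definition below (35)."""
    idx = {v: i for i, v in enumerate(verts)}
    M = m2 * np.eye(len(verts))
    for a, b in edges:
        i, j = idx[a], idx[b]
        M[i,i] += 1; M[j,j] += 1; M[i,j] -= 1; M[j,i] -= 1
    return M

# X' and X'' share the interface subgraph Y (vertices 2,3 and the edge (2,3)).
edges1, edges2 = [(0,1),(1,2),(0,3),(2,3)], [(2,4),(3,5),(4,5),(2,3)]
X1, X2, Yv, Xv = [0,1,2,3], [2,3,4,5], [2,3], [0,1,2,3,4,5]
KX = K(Xv, list(dict.fromkeys(edges1 + edges2)))  # glued graph: union of edges, (2,3) counted once
KY = K(Yv, [(2,3)])

def rel(Kfull, verts, bdry):
    """Relative operator K_{X,Y}, DN_{Y,X} (below (50)) and extension operator E_{Y,X} (51)."""
    bulk = [i for i, v in enumerate(verts) if v not in bdry]
    bnd  = [i for i, v in enumerate(verts) if v in bdry]
    G = np.linalg.inv(Kfull)
    DN = np.linalg.inv(G[np.ix_(bnd, bnd)])       # DN = D^{-1}
    E  = G[np.ix_(bulk, bnd)] @ DN                # E = B D^{-1}
    return Kfull[np.ix_(bulk, bulk)], DN, E

K1rel, DN1, E1 = rel(K(X1, edges1), X1, Yv)
K2rel, DN2, E2 = rel(K(X2, edges2), X2, Yv)
DN_tot = DN1 + DN2 - KY                           # formula (63)

# Gluing of determinants, formula (65):
print(np.linalg.det(KX),
      np.linalg.det(K1rel) * np.linalg.det(K2rel) * np.linalg.det(DN_tot))

# Gluing of the propagator across the cut, formula (64), e.g. between vertices 0 and 4:
GX = np.linalg.inv(KX)
print(GX[0, 4], (E1 @ np.linalg.inv(DN_tot) @ E2.T)[0, 0])
```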
One can also allow the manifold \(M\) to have extra boundary components disjoint from the cut, i.e., to consider \(M\) as a composition of two cobordisms \(\gamma_{1}\stackrel{{ M^{\prime}}}{{\longrightarrow}}\gamma_{2}, \ \gamma_{2}\stackrel{{ M^{\prime\prime}}}{{\longrightarrow}}\gamma_{3}\). One then has the corresponding gluing formulae which have the same structure as the formulae of Theorem 3.11. In particular, one has a gluing formula for continuum DN operators (see [21]) similar to the formula (69) in the graph setting. #### 3.4.2. Example: continuum limit of line and circle graphs The action of the continuum theory on an interval \([0,L]\) evaluated on a smooth field \(\phi\in C^{\infty}([0,L])\) can be seen as a limit of Riemann sums \[S(\phi)=\lim_{N\to\infty}\sum_{i=2}^{N}\frac{\epsilon_{N}}{2}\left(\frac{\phi (i\epsilon_{N})-\phi((i-1)\epsilon_{N})}{\epsilon_{N}}\right)^{2}+\sum_{i=1}^ {N}\epsilon_{N}\frac{m^{2}}{2}\phi(i\epsilon_{N})^{2},\] where in the r.h.s. we denoted \(\epsilon_{N}=L/N\). The r.h.s. can be seen as the action of the graph theory on a line graph with \(N=L/\epsilon\) vertices, where the mass is scaled as \(m\mapsto\epsilon m\) and then the kinetic operator is scaled as \(K\mapsto\epsilon^{-1}K\) (and thus the propagator scales as \(G\to\epsilon G\)), where we consider the limit \(\epsilon\to 0\) (we are approximating the interval by a portion of a 1d lattice and taking the lattice spacing to zero). Applying the scaling above to the formulae of Example 3.7, we obtain the following for the propagator (58): \[G_{\mathrm{graph}}(x,y)\underset{\epsilon\to 0}{\sim}\frac{\cosh m(L-|x-y|)- \cosh m(L-x-y)}{\sinh mL},\] where we denoted \(x=i\epsilon,y=j\epsilon\) - we think of \(i,j\) as scaling with \(\epsilon\) so that \(x,y\) remain fixed. The r.h.s. above is the Green's function for the operator \(\Delta+m^{2}\) on an interval \([0,L]\) with Dirichlet boundary conditions at the endpoints.28 For the DN operator (59), we obtain Footnote 28: For the formulae pertaining to the continuum theory on an interval, see e.g. [16, Appendix A.1]. \[\mathrm{DN}_{\mathrm{graph}}\underset{\epsilon\to 0}{\sim}\frac{m}{\sinh mL} \left(\begin{array}{cc}\cosh mL&-1\\ -1&\cosh mL\end{array}\right).\] The r.h.s. is the correct DN operator of the continuum theory on the interval. For the determinant (61), we have \[\det K_{\text{graph }\underset{\epsilon\to 0}{\sim}}\;\epsilon^{-N}\frac{\sinh mL}{m}.\] For comparison, the zeta-regularized determinant on the interval is \[\det^{\zeta}_{D}(\Delta+m^{2})=\frac{2\sinh mL}{m}.\] It differs from the graph result by a scaling factor \(\epsilon^{N}\) and an extra factor \(2\) which exhibits a discrepancy between the two regularizations of the functional determinant - lattice vs. zeta regularization. _Remark 3.16_.: One can similarly consider the continuum limit for the line graph of Example 3.2, without Dirichlet condition at the endpoints. Its continuum counterpart is the theory on an interval \([0,L]\) with _Neumann_ boundary conditions at the endpoints, cf. footnote 17. Likewise, in the continuum limit for line graphs relative to one endpoint (Example 3.6), one recovers the continuum theory with Dirichlet condition at one endpoint and Neumann condition at the other. For example, the zeta-determinant for Neumann condition at both ends is \(\det^{\zeta}_{N-N}(\Delta+m^{2})=2m\sinh mL\). For Dirichlet condition at one end and Neumann at the other, one has \(\det^{\zeta}_{D-N}(\Delta+m^{2})=2\cosh mL\). 
These formulae are related to the continuum limit of the discrete counterparts ((41) for Neumann-Neumann and (57) for Neumann-Dirichlet boundary conditions) in the same way as in the Dirichlet-Dirichlet case (by scaling with \(\epsilon^{N}\) and an extra factor of \(2\)). In the same vein, we can consider a circle of length \(L\) as a limit of circle graphs (Example 3.3) with spacing \(\epsilon\). Then in the scaling limit, from (42) we have \[G_{\text{graph}}(x,y)\underset{\epsilon\to 0}{\sim}\;\frac{\cosh m(\frac{L}{2}-|x-y|)}{2m\sinh\frac{mL}{2}},\] where the r.h.s. coincides with the continuum Green's function on a circle. For the determinant (43), we have \[\det K_{\text{graph}}\underset{\epsilon\to 0}{\sim}\;\epsilon^{-N}\,4\sinh^{2}\frac{mL}{2}. \tag{85}\] For comparison, the corresponding zeta-regularized functional determinant is \[\det^{\zeta}(\Delta+m^{2})=4\sinh^{2}\frac{mL}{2},\] which coincides with the r.h.s. of (85) up to the scaling factor \(\epsilon^{N}\). ## 4. Interacting theory via Feynman diagrams Consider scalar field theory on a closed graph \(X\) defined by the action (21) - the perturbation of the Gaussian theory by an interaction potential \(p(\phi)\): \[S_{X}(\phi)=\underbrace{\frac{1}{2}(\phi,K_{X}\phi)}_{S_{X}^{0}(\phi)}+\underbrace{\langle\mu,p(\phi)\rangle}_{S_{X}^{\text{int}}(\phi)}.\] The partition function (23) can be computed by perturbation theory - the Laplace method for the \(\hbar\to 0\) asymptotics of the integral, with corrections given by Feynman diagrams (see e.g. [11]): \[Z_{X}=\int_{F_{X}}D\phi\,e^{-\frac{1}{\hbar}(S^{0}_{X}(\phi)+S^{\mathrm{int}}_{X}(\phi))}=\ll e^{-\frac{1}{\hbar}S^{\mathrm{int}}_{X}(\phi)}\gg^{0}=\ll\sum_{n\geq 0}\frac{(-1)^{n}}{\hbar^{n}n!}\sum_{v_{1},\ldots,v_{n}\in V_{X}}p(\phi(v_{1}))\cdots p(\phi(v_{n}))\gg^{0}=\det(K_{X})^{-\frac{1}{2}}\sum_{\Gamma}\frac{\hbar^{-\chi(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\Phi_{\Gamma,X}. \tag{86}\] Here \(\ll\cdots\gg^{0}\) is the non-normalized average in the Gaussian theory, the sum runs over Feynman graphs \(\Gamma\) with vertices of valence \(\geq 3\), \(\chi(\Gamma)=|V_{\Gamma}|-|E_{\Gamma}|\) is the Euler characteristic, \(\mathrm{Aut}(\Gamma)\) is the automorphism group, and the weight of a Feynman graph is \[\Phi_{\Gamma,X}=\sum_{f\colon V_{\Gamma}\to V_{X}}\;\prod_{\mathsf{v}\in V_{\Gamma}}(-p_{\mathrm{val}(\mathsf{v})})\cdot\prod_{(\mathsf{u},\mathsf{v})\in E_{\Gamma}}G_{X}(f(\mathsf{u}),f(\mathsf{v})), \tag{87}\] where \(p_{k}\) are the coefficients in the expansion \(p(\phi)=\sum_{k\geq 3}\frac{p_{k}}{k!}\phi^{k}\). The sum over maps \(f\) can be seen as a sum over tuples of vertices of \(X\) - a graph analog of a configuration space integral. Similarly, for a graph \(X\) relative to a subgraph \(Y\), the perturbative expansion of the relative partition function reads \[Z_{X,Y}(\phi_{Y})\simeq\det(K_{X,Y})^{-\frac{1}{2}}\,e^{-\frac{1}{2\hbar}(\phi_{Y},(\mathrm{DN}_{Y,X}-\frac{1}{2}K_{Y})\phi_{Y})}\sum_{\Gamma}\frac{\hbar^{-\chi(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\Phi_{\Gamma,(X,Y)}(\phi_{Y}). \tag{88}\] Here the sum is over Feynman graphs \(\Gamma\) with vertices split into two subsets - "bulk" vertices \(V_{\Gamma}^{\rm bulk}\) and "boundary" vertices \(V_{\Gamma}^{\partial}\) - with bulk vertices of valence \(\geq 3\) and univalent boundary vertices. In graphs \(\Gamma\) we are not allowing edges connecting two boundary vertices (while bulk-bulk and bulk-boundary edges are allowed).
The weight of a Feynman graph is a polynomial in the boundary field \(\phi_{Y}\): \[\Phi_{\Gamma,(X,Y)}(\phi_{Y})=\\ =\sum_{f\colon\begin{subarray}{c}V_{\Gamma}^{\rm bulk}\to V_{X} \setminus V_{Y}\\ V_{\Gamma}^{\partial}\to V_{Y}\end{subarray}}\prod_{\mathsf{v}\in V_{\Gamma}^ {\rm bulk}}(-p_{\rm val(\mathsf{v})})\cdot\prod_{\mathsf{u}^{\partial}\in V_{ \Gamma}^{\partial}}\phi(f(\mathsf{u}^{\partial}))\cdot\\ \cdot\prod_{(\mathsf{u},\mathsf{v})\in E_{\Gamma}^{\rm bulk- bulk}}G_{X,Y}(f(\mathsf{u}),f(\mathsf{v}))\cdot\prod_{(\mathsf{u}^{\partial}, \mathsf{v})\in E_{\Gamma}^{\rm bdry-bulk}}E_{Y,X}(f(\mathsf{u}^{\partial}),f( \mathsf{v})). \tag{89}\] The sum over \(f\) here can be seen as a sum over tuples of bulk and boundary vertices in \(X\). Similarly to (87), it is a graph QFT analog of a configuration space integral formula for the Feynman diagrams in the interacting scalar field theory on manifolds with boundary (cf. [16]), where one is integrating over configurations of \(n\) bulk points and \(m\) boundary points on the spacetime manifold. We will denote the r.h.s. of (88) by \(Z_{X,Y}^{\rm pert}(\phi_{Y})\). **Example 4.1**.: Figure 13 is an example of a map \(f\) contributing to the Feynman weight (89): The full Feynman weight of the graph on the left is: \[\Phi_{\Gamma,(X,Y)}(\phi_{Y})=\sum_{a,b\in V_{X}\setminus V_{Y},\,c,d\in V_{Y }}(p_{3})^{2}G_{X,Y}(a,b)^{2}E_{Y,X}(c,a)E_{Y,X}(d,b)\,\phi(c)\phi(d),\] where we denoted \(a=f(\mathsf{a})\), \(b=f(\mathsf{b})\), \(c=f(\mathsf{c})\), \(d=f(\mathsf{d})\). _Remark 4.2_.: (i) By the standard argument, due to multiplicativity of Feynman weights w.r.t. disjoint unions of Feynman graphs, the sum Figure 13. A Feynman graph with boundary vertices and a map contributing to its Feynman weight. over graphs \(\Gamma\) in (86), (88) can be written as the exponential of the sum over connected Feynman graphs, \(\sum_{\Gamma}\cdots=e^{\sum_{\Gamma\,\text{connected}}\cdots}\). 2. One can rewrite the r.h.s. of (88) without the DN operator in the exponent in the prefactor, but instead allowing graphs \(\Gamma\) with boundary-boundary edges. The latter contribute extra factors \(-\text{DN}_{Y,X}(\mathfrak{u}^{\partial},\mathfrak{v}^{\partial})\) in the Feynman weight (89). 3. Unlike the case of closed \(X\), the sum over \(\Gamma\) in the r.h.s. of (88) generally contributes infinitely many terms to each nonnegative order in \(\hbar\) (for instance, in the order \(O(\hbar^{0})\), one has 1-loop graphs formed by trees connected to a cycle). However, there are finitely many graphs contributing to a given order in \(\hbar\), in any fixed polynomial degree in \(\phi_{Y}\). Moreover, one can introduce a rescaled boundary field \(\eta_{Y}\) so that (90) \[\phi_{Y}=\sqrt{\hbar}\,\eta_{Y}.\] Then (88) expressed as a function of \(\eta_{Y}\) is a power series in nonnegative half-integer powers of \(\hbar\), with finitely many graphs contributing at each order.31 Footnote 31: The power of \(\hbar\) accompanying a graph is \(\hbar^{|E_{\Gamma}|-|V|^{\text{bulk}}_{\Gamma}-\frac{1}{2}|V|^{\Gamma}_{\Gamma}}\), i.e., one can think that with this normalization of the boundary field, boundary vertices contribute \(1/2\) instead of \(1\) to the Euler characteristic of a Feynman graph. We also note that the rescaling (90) is rather natural, as the expected magnitude of fluctuations of \(\phi_{Y}\) around zero is \(O(\sqrt{\hbar})\). 
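As a concrete sanity check of the expansion (86), one can compare its first order in the coupling against the exact (finite-dimensional) integral on a small graph. The sketch below does this for the two-vertex graph with one edge, taking for illustration \(p(\phi)=\lambda\phi^{4}/24\) and test values \(\hbar=m^{2}=1\); the first-order term is obtained from Wick's lemma for the Gaussian correlator, \(\langle\phi(v)^{4}\rangle=3\hbar^{2}G_{X}(v,v)^{2}\).

```python
import numpy as np

hbar, m2, lam = 1.0, 1.0, 0.1        # test values; p(phi) = lam*phi^4/24 is an illustrative choice

K = np.array([[1+m2, -1.0], [-1.0, 1+m2]])   # kinetic operator of the two-vertex graph
G = np.linalg.inv(K)

# Exact partition function (23) by 2d quadrature, with D(phi) = prod_v d(phi_v)/sqrt(2*pi*hbar).
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
p0, p1 = np.meshgrid(x, x, indexing="ij")
S = 0.5*(K[0,0]*p0**2 + 2*K[0,1]*p0*p1 + K[1,1]*p1**2) + lam/24*(p0**4 + p1**4)
Z_exact = np.sum(np.exp(-S/hbar)) * dx**2 / (2*np.pi*hbar)

# First order of (86): Z ~ det(K)^(-1/2) * (1 - (lam/(24*hbar)) * sum_v <phi(v)^4> + O(lam^2)).
Z_first = np.linalg.det(K)**-0.5 * (1 - lam/(24*hbar) * 3*hbar**2*(G[0,0]**2 + G[1,1]**2))
print(Z_exact, Z_first)              # agreement up to corrections of order lam^2
```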
### Cutting/gluing of perturbative partition functions via cutting/gluing of Feynman diagrams As in Section 3.3.1, consider a closed graph \(X=X^{\prime}\cup_{Y}X^{\prime\prime}\) obtained from graphs \(X^{\prime},X^{\prime\prime}\) by gluing along a common subgraph \(X^{\prime}\supset Y\subset X^{\prime\prime}\) (but now we consider the interacting scalar QFT). As we know from Proposition 2.3, the nonperturbative partition functions satisfy the gluing formula \[Z_{X}=\int_{F_{Y}}D\phi_{Y}\,Z_{X^{\prime},Y}(\phi_{Y})Z_{X^{\prime\prime},Y} (\phi_{Y}).\] Replacing both sides with their expansions (asymptotic series) in \(\hbar\), we have the gluing formula for the perturbative partition functions \[Z_{X}^{\text{pert}}=\int_{F_{Y}}D\phi_{Y}\,Z_{X^{\prime},Y}^{\text{pert}}(\phi _{Y})Z_{X^{\prime\prime},Y}^{\text{pert}}(\phi_{Y}). \tag{91}\] This latter formula admits an independent proof in the language of Feynman graphs which we will sketch here (adapting the argument of [16]). Consider "decorations" of Feynman graphs \(\Gamma\) for the theory on \(X\) by the following data: * Each vertex \(\mathsf{v}\) of \(\Gamma\) is decorated by one of three symbols \(\{X^{\prime},X^{\prime\prime},Y\}\), meaning that in the Feynman weight \(f(\mathsf{v})\) is restricted to be in \(V_{X^{\prime}}\setminus V_{Y}\), in \(V_{X^{\prime\prime}}\setminus V_{Y}\), or in \(V_{Y}\), respectively. * Each edge \(e=(\mathsf{u},\mathsf{v})\) of \(\Gamma\) is decorated by either \(u\) or \(c\) ("uncut" or "cut"), corresponding to the splitting of the Green's function on \(X\) in Theorem 3.8: \[G_{X}(f(\mathsf{u}),f(\mathsf{v}))=G_{X}^{u}(f(\mathsf{u}),f(\mathsf{v}))+G_{X} ^{c}(f(\mathsf{u}),f(\mathsf{v})).\] Here: * If \(\mathsf{u},\mathsf{v}\) are both decorated with \(X^{\prime}\), the "uncut" term is \(G^{u}_{X}\colon=G_{X^{\prime},Y}\). Similarly, if \(\mathsf{u},\mathsf{v}\) are both decorated with \(X^{\prime\prime}\), \(G^{u}_{X}\colon=G_{X^{\prime\prime},Y}\). For all other decorations of \(\mathsf{u},\mathsf{v}\), \(G^{u}_{X}\colon=0\). Because of this, we will impose a selection rule: \(u\)-decoration is only allowed for \(X^{\prime}-X^{\prime}\) or \(X^{\prime\prime}-X^{\prime\prime}\) edges. * The "cut" term is \[G^{\mathrm{c}}_{X}(f(\mathsf{u}),f(\mathsf{v}))\colon=\sum_{w_{1},w_{2}\in Y}E _{Y,\alpha}(f(\mathsf{u}),w_{1})\mathrm{DN}^{-1}_{Y,X}(w_{1},w_{2})E_{Y,\beta} (f(\mathsf{v}),w_{2}),\] where \(\alpha,\beta\) are the decorations of \(\mathsf{u},\mathsf{v}\) (and we understand \(E_{Y,Y}\) as identity operator). Let \(\mathrm{Dec}(\Gamma)\) denote the set of all possible decorations of a Feynman graph \(\Gamma\). Theorem 3.8 implies that for any Feynman graph \(\Gamma\) its weight splits into the contributions of its possible decorations: \[\Phi_{\Gamma,X}=\sum_{\Gamma^{\mathrm{dec}}\in\mathrm{Dec}(\Gamma)}\Phi_{ \Gamma^{\mathrm{dec}},X},\] where in the summand on the r.h.s., we have restrictions on images of vertices of \(\Gamma\) as prescribed by the decoration, and we only select either cut or uncut piece of each Green's function. Thus, the l.h.s. of (91) can be written as \[Z^{\mathrm{pert}}_{X}=\det(K_{X})^{-\frac{1}{2}}\cdot\sum_{\Gamma^{\mathrm{ dec}}}\frac{\hbar^{-\chi(\Gamma)}}{|\mathrm{Aut}(\Gamma^{\mathrm{dec}})|}\Phi_{ \Gamma^{\mathrm{dec}},X}. \tag{92}\] where on the right we are summing over all Feynman graphs with all possible decorations. The r.h.s. 
of (91) is: \[\ll\det(K_{X^{\prime},Y})^{-\frac{1}{2}}\det(K_{X^{\prime\prime},Y})^{-\frac{1}{2}}e^{-\frac{1}{\hbar}S^{\mathrm{int}}_{Y}(\phi_{Y})}.\\ \cdot\sum_{\Gamma^{\prime},\Gamma^{\prime\prime}}\frac{\hbar^{- \chi(\Gamma^{\prime}\cup\Gamma^{\prime\prime})}}{|\mathrm{Aut}(\Gamma^{\prime} \cup\Gamma^{\prime\prime})|}\Phi_{\Gamma^{\prime},(X^{\prime},Y)}(\phi_{Y}) \Phi_{\Gamma^{\prime\prime},(X^{\prime\prime},Y)}(\phi_{Y})\gg_{Y}=\\ \det(K_{X})^{-\frac{1}{2}}\cdot\left\langle e^{-\frac{1}{\hbar}S ^{\mathrm{int}}_{Y}(\phi_{Y})}\cdot\sum_{\Gamma^{\prime},\Gamma^{\prime\prime} }\frac{\hbar^{-\chi(\Gamma^{\prime}\cup\Gamma^{\prime\prime})}}{|\mathrm{Aut }(\Gamma^{\prime}\cup\Gamma^{\prime\prime})|}\Phi_{\Gamma^{\prime},(X^{\prime},Y)}(\phi_{Y})\Phi_{\Gamma^{\prime\prime},(X^{\prime\prime},Y)}(\phi_{Y}) \right\rangle_{Y}, \tag{93}\] where \(\ll\cdots\gg_{Y}\colon=\int_{FY}D\phi_{Y}\,e^{-\frac{1}{2\hbar}(\phi_{Y}, \mathrm{DN}_{Y,X}\phi_{Y})}\dots\) is the non-normalized Gaussian average w.r.t. the total DN operator; \(\langle\cdots\rangle_{Y}\) is the corresponding normalized average. The correspondence between (92) and (93) is as follows. Consider a decorated graph \(\Gamma^{\mathrm{dec}}\) and form out of it subgraphs \(\Gamma^{\prime},\Gamma^{\prime\prime}\) in the following way. Let us cut every cut edge in \(\Gamma\) (except \(Y-Y\) edges) into two, introducing two new boundary vertices. Then we collapse every edge between a newly formed vertex and a \(Y\)-vertex. \(\Gamma^{\prime}\) is the subgraph of \(\Gamma\) formed by vertices decorated by \(X^{\prime}\) and uncut edges between them, and those among the newly formed boundary vertices which are connected to an \(X^{\prime}\)-vertex by an edge; \(\Gamma^{\prime\prime}\) is formed similarly. Then the contribution of \(\Gamma^{\rm dec}\) to (92) is equal to the contribution of a particular Wick pairing for the term in (93) corresponding to the induced pair of graphs \(\Gamma^{\prime},\Gamma^{\prime\prime}\), and picking a term in the Taylor expansion of \(e^{-\frac{1}{\hbar}S^{\rm int}_{Y}(\phi_{Y})}\) corresponding to \(Y\)-vertices in \(\Gamma^{\rm dec}\). The sum over all decorated Feynman graphs in (92) recovers the sum over all pairs \(\Gamma^{\prime},\Gamma^{\prime\prime}\) and all Wick contractions in (93). This shows Feynman-graph-wise the equality of (92) and (93). One can also check that the combinatorial factors work out similarly to the argument in [16, Lemma 6.10]. **Example 4.3**.: Figure 14 is an example of a decorated Feynman graph on \(X\) (on the left; vertex decorations \(X^{\prime},Y,X^{\prime\prime}\) are according to the labels in the bottom) and the corresponding contribution to (93) on the right. Dashed edges on the right denote the Wick pairing for \(\langle\cdots\rangle_{Y}\) and are decorated with \({\rm DN}^{-1}_{Y,X}\). Circle vertices are the boundary vertices of graphs \(\Gamma^{\prime},\Gamma^{\prime\prime}\) or equivalently the vertices formed by cutting the \(c\)-edges of \(\Gamma^{\rm dec}\). ## 5. Path sum formulae for the propagator and determinant (Gaussian theory in the first quantization formalism) ### Quantum mechanics on a graph Following the logic of Section 1.1, we now want to understand the kinetic operator \(\Delta_{X}+m^{2}\) of the second quantized theory as the Hamiltonian of an auxiliary quantum mechanical system - a quantum particle on the graph \(X\).32 The space of states \(\mathcal{H}_{X}\) for graph quantum mechanics on \(X\) is \(\mathbb{C}^{V}\), i.e. 
the space of \(\mathbb{C}\)-valued functions on \(V\). The graph Schrödinger equation33 on \(X\) is Footnote 32: This model of quantum mechanics on a graph – as a model for the interplay between the operator and path integral formalisms – was considered in [19, 20], see also [5]. Footnote 33: Here we are talking about the Wick-rotated Schrödinger equation (i.e. describing quantum evolution in imaginary time), or equivalently the heat equation. \[\frac{\partial}{\partial t}\left|\psi(t,v)\right\rangle=-\left(\Delta_{X}+m^{2}\right)\left|\psi(t,v)\right\rangle, \tag{94}\] where \(|\psi(t,v)\rangle\) is a (time-dependent) state, i.e. a vector in \(\mathbb{C}^{V}\). The explicit solutions to (94) are given by \[|\psi(t_{f})\rangle=e^{-(t_{f}-t_{0})(\Delta_{X}+m^{2})}\,|\psi(t_{0})\rangle\,. \tag{95}\] One can evaluate the evolution operator in (95) explicitly by summing over certain paths on \(X\), see equations (102), (100), (118) below, in a way reminiscent of Feynman's path integral.34 This graph quantum mechanics is the first step of our first quantization approach to QFT on a graph. Footnote 34: This analogy is discussed in more detail in [19, 20]. Figure 14. An example of cutting a Feynman graph. ### Path sum formulae on closed graphs #### 5.2.1. Paths and h-paths in graphs We start with some terminology. A _path_ \(\gamma\) from a vertex \(u\) to a vertex \(v\) of a graph \(X\) is a sequence \[\gamma=(u=v_{0},e_{0},v_{1},\ldots,e_{k-1},v_{k}=v)\] where \(v_{i}\) are vertices of \(X\) and \(e_{i}\) is an edge between \(v_{i}\) and \(v_{i+1}\).35 We denote \(V(\gamma)\) the ordered collection \((v_{0},v_{1},\ldots,v_{k})\) of vertices of \(\gamma\). We call \(l(\gamma)=k\) the length of the path, and denote \(P_{X}^{k}(u,v)\) the set of paths in \(X\) of length \(k\) from a vertex \(u\) to a vertex \(v\). We denote by \(P_{X}(u,v)=\cup_{k=0}^{\infty}P_{X}^{k}(u,v)\) the set of paths of any length from \(u\) to \(v\). We also denote by \(P_{X}^{k}=\cup_{u,v\in X}P_{X}^{k}(u,v)\), and \(P_{X}=\cup_{k=0}^{\infty}P_{X}^{k}\) the sets of paths between any two vertices of \(X\). Footnote 35: For simplicity of the exposition, we assume that the graph \(X\) has no short loops. The generalization allowing short loops is straightforward: in the definition of a path and h-path, the edges traversed \(e_{i}\) are not allowed to be short loops (and in the formulae involving the valence of a vertex, it should be replaced with valence excluding the contribution of short loops). This is ultimately due to the fact that short loops do not contribute to the graph Laplacian \(\Delta_{X}\). Below we will also need a variant of this notion that we call _hesitant paths_. Namely, a _hesitant path_ (or "h-path") from a vertex \(u\) to a vertex \(v\) is a sequence \[\tilde{\gamma}=(u=v_{0},e_{0},v_{1},\ldots,e_{k-1},v_{k}=v),\] but now we allow the possibility that \(v_{i+1}=v_{i}\), in which case \(e_{i}\) is allowed to be any edge starting at \(v_{i}=v_{i+1}\). In this case we say that \(\tilde{\gamma}\) _hesitates_ at step \(i\). If \(v_{i+1}\neq v_{i}\), then we say that \(\tilde{\gamma}\) jumps at step \(i\). As before, we say that such a path has length \(l(\tilde{\gamma})=k\), and we introduce the notion of the _degree_ of a h-path as36 Footnote 36: Diverting slightly from the notation of [9]. \[\deg(\tilde{\gamma})=|\{i\,|\,v_{i}\neq v_{i+1},0\leq i\leq l(\tilde{\gamma})-1\}|, \tag{96}\] i.e. the degree is the number of jumps of a h-path. 
We denote by \[h(\tilde{\gamma})=|\{i|v_{i}=v_{i+1},0\leq i\leq l(\gamma)-1\}|\] the number of hesitations of \(\tilde{\gamma}\). Obviously \(l(\tilde{\gamma})=\deg(\tilde{\gamma})+h(\tilde{\gamma})\). We denote the set of h-paths from \(u\) to \(v\) by \(\Pi_{X}(u,v)\), and the set of length \(k\) hesitant paths by \(\Pi_{X}^{k}(u,v)\). There is an obvious concatenation operation \[\begin{array}{ccc}\Pi_{X}^{k}(u,v)\times\Pi_{X}^{l}(v,w)&\to&\Pi_{X}^{k+l} (u,w)\\ (\tilde{\gamma}_{1},\tilde{\gamma}_{2})&\mapsto&\tilde{\gamma}_{1}*\tilde{ \gamma}_{2}\end{array} \tag{97}\] Observe that for every h-path \(\tilde{\gamma}\) there is a usual ("non-hesitant") path \(\gamma\) of length \(l(\gamma)=\deg(\tilde{\gamma})\) given by simply forgetting repeated vertices, giving a map \(P\colon\Pi(u,v)\twoheadrightarrow P(u,v)\). See Figure 15. A (hesitant) path is called _closed_ if \(v_{k}=v_{0}\), i.e. the first and last vertex agree. The cyclic group \(C_{k}\) acts on closed paths of length \(k\) by shifting the vertices and edges. We call the orbits of this group action _cycles_ (i.e. closed paths without a preferred start- or end-point), and denote them by \(\Gamma_{X}\) for equivalence classes of h-paths, and \(C_{X}\) for equivalence classes of regular paths. A cycle \([\tilde{\gamma}]\) is called _primitive_ if its representatives have trivial stabilizer under this group action. Equivalently, this means that there is no \(k>1\) and \(\tilde{\gamma}^{\prime}\) such that \[\tilde{\gamma}=\underbrace{\tilde{\gamma}^{\prime}*\tilde{\gamma}^{\prime}* \ldots*\tilde{\gamma}^{\prime}}_{k\text{ times}},\] i.e. the cycle is traversed exactly once. In general, the order of the stabilizer of \(\tilde{\gamma}\) is precisely the number of traverses. We will denote this number by \(t(\tilde{\gamma})\). Obviously, it is well-defined on cycles. **Example 5.1**.: In the \(N=3\) circle graph, the two closed paths \(\tilde{\gamma}_{1}=(1,(12),2,(23),3,(31),1)\) and \(\tilde{\gamma}_{2}=(2,(23),3,(31),1,(12))\) define the same primitive cycle, while the closed path \(\tilde{\gamma}_{3}=(1,(31),3,(23),2,(12),1)\) defines a different cycle (since the graph is traversed in a different order). The closed (hesitant) path \(\tilde{\gamma}_{4}=(1,(12),1,(12),1)\) is not primitive, since \(\gamma_{4}=(1,(12),1)*(1,(12),1)\). #### 5.2.2. h-path formulae for heat kernel, propagator and determinant It is a simple observation that \[p_{X}^{k}(u,v):=|P_{X}^{k}(u,v)|=\langle u|(A_{X})^{k}|v\rangle, \tag{98}\] where \(A_{X}\) denotes the adjacency matrix of the graph \(X\), \(|v\rangle\) denotes the state which is \(1\) at \(v\) and vanishes elsewhere, and \[\langle u|A|v\rangle=A_{uv}\] denotes the \((u,v)\)-matrix element of the operator \(A\) (in the bra-ket notation for the quantum mechanics on \(X\)). We consider the heat operator \[e^{-t\Delta_{X}}\colon C^{0}(X)\to C^{0}(X), \tag{99}\] which is the propagator of the quantum mechanics on the graph \(X\) (95). Suppose that \(X\) is regular, i.e. all vertices have the same valence \(n\). Then \(\Delta_{X}=n\cdot I-A_{X}\) and (98) implies that the heat kernel \(\langle u|e^{-t\Delta_{X}}|v\rangle\) is given by \[\langle u|e^{-t\Delta_{X}}|v\rangle=e^{-tn}\sum_{k=0}^{\infty}\frac{t^{k}}{k! }p_{X}^{k}(u,v)=e^{-tn}\sum_{\gamma\in P_{X}(u,v)}\frac{t^{l(\gamma)}}{l(\gamma )!}. \tag{100}\] One can think of the r.h.s. as a discrete analog of the Feynman path integral formula where one is integrating over all paths (see [19]). 
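The identities (98) and (100) are easy to confirm by brute force on a small regular graph. The sketch below (illustrative graph and parameters: the \(N=3\) circle graph and an arbitrary time \(t\)) enumerates paths directly and compares the truncated series in (100) against the matrix exponential.

```python
import numpy as np
from math import factorial
from itertools import product
from scipy.linalg import expm

# Illustrative check of (98) and (100) on the N = 3 circle graph (regular, n = 2).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
N, n = 3, 2
Delta = n * np.eye(N) - A

def count_paths(u, v, k):
    """Brute-force |P_X^k(u,v)|: walks u -> v of length k along edges of X."""
    if k == 0:
        return int(u == v)
    total = 0
    for mids in product(range(N), repeat=k - 1):
        seq = (u,) + mids + (v,)
        total += all(A[seq[i], seq[i + 1]] == 1 for i in range(k))
    return total

# (98): |P_X^k(u,v)| equals the (u,v) entry of (A_X)^k.
for k in range(6):
    assert count_paths(0, 1, k) == round(np.linalg.matrix_power(A, k)[0, 1])

# (100): heat kernel as a weighted sum over paths, truncated at length k_max.
t, k_max = 0.8, 40
series = sum(t**k / factorial(k) * np.linalg.matrix_power(A, k) for k in range(k_max))
assert np.allclose(np.exp(-t * n) * series, expm(-t * Delta), atol=1e-10)
print("path-count and heat-kernel checks passed")
```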
For a general graph, one can derive a formula for the heat kernel in terms of h-paths, by using the formula \(\Delta=d^{T}d\). Namely, one has (see [9]) \[\langle u|\Delta_{X}^{k}|v\rangle=\sum_{\tilde{\gamma}\in\Pi_{X}^{k}(u,v)}(-1 )^{\deg(\tilde{\gamma})}. \tag{101}\] This implies the following formula for the heat kernel: \[\langle u|e^{-t\Delta_{X}}|v\rangle=\sum_{k=0}^{\infty}\frac{t^{k}}{k!}(-1)^{ k}\sum_{\tilde{\gamma}\in\Pi_{X}^{k}(u,v)}(-1)^{\deg(\tilde{\gamma})}=\sum_{ \tilde{\gamma}\in\Pi_{X}(u,v)}\frac{t^{l(\tilde{\gamma})}}{l(\tilde{\gamma})!}(-1)^{h(\tilde{\gamma})}. \tag{102}\] Here we have used that \(l(\tilde{\gamma})+\deg(\tilde{\gamma})=h(\tilde{\gamma})\) mod \(2\). Then we have the following h-path sum formula for the Green's function: **Lemma 5.2**.: _The Green's function \(G_{X}\) is given by_ \[\langle u|G_{X}|v\rangle=m^{-2}\sum_{k=0}^{\infty}(m^{-2})^{k}\sum_{\tilde{ \gamma}\in\Pi_{X}^{k}}(-1)^{h(\tilde{\gamma})}=m^{-2}\sum_{\tilde{\gamma}\in \Pi_{X}(u,v)}(m^{-2})^{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}. \tag{103}\] Proof.: By expanding \(m^{2}G_{X}=(m^{-2}K_{X})^{-1}=(1+m^{-2}\Delta_{X})^{-1}\) in powers of \(m^{-2}\) using the geometric series,37 we obtain Footnote 37: This series converges absolutely if the operator norm of \(m^{-2}\Delta_{X}\) is less that one, or equivalently \(m^{2}>\lambda_{max}(\Delta_{X})\), i.e. \(m^{2}\) is larger than the largest eigenvalues of \(\Delta_{X}\). \[\left\langle u\left|(1+m^{-2}\Delta_{X})^{-1}\right|v\right\rangle=\sum_{ \tilde{\gamma}\in\Pi_{X}(u,v)}(-m^{-2})^{l(\tilde{\gamma})}(-1)^{\deg(\tilde{ \gamma})}, \tag{104}\] which proves (103). Alternatively, one can prove (103) by integrating the heat kernel \(e^{-tK_{X}}=e^{-tm^{2}}e^{-t\Delta_{X}}\) for \(K_{X}\) over the time parameter \(t\) and using the Gamma function identity \[\int_{0}^{\infty}dt\ e^{-tm^{2}}\frac{t^{k}}{k!}=(m^{-2})^{k+1}.\] In equation (103), we see two slightly different ways of interpreting the path sum formula. In the middle we see that when expanding in powers of \(m^{2}\), the coefficient of \(m^{-2(k+1)}\) is given by a signed count of h-path of length \(k\), and that the sign is determined by the number of hesitations. On the right hand side we interpret the propagator as a weighted sum over all h-paths, in accordance with the first quantization picture. We have the following formula for the determinant of the kinetic operator (normalized by \(1/m^{2}\)) in terms of closed h-paths or h-cycles: **Lemma 5.3**.: _The determinant of \(K_{X}/m^{2}\) is given by_ \[\begin{split}\log\det\left(\frac{K_{X}}{m^{2}}\right)& =-\sum_{v\in X}\sum_{\tilde{\gamma}\in\Pi^{\geq 1}_{X}(v,v)}\frac{(m^{-2} )^{l(\tilde{\gamma})}}{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}\\ &=-\sum_{[\tilde{\gamma}]\in\Gamma^{\geq 1}_{X}}\frac{(m^{-2})^{l( \tilde{\gamma})}}{t(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}.\end{split} \tag{105}\] Proof.: Expand38 Footnote 38: Again, this power series converges absolutely for \(m^{2}>\lambda_{max}(\Delta_{X})\). \[\log\det\left(\frac{K_{X}}{m^{2}}\right)=\operatorname{tr}\log(1+m^{-2} \Delta_{X})=-\sum_{v\in X}\sum_{k=0}^{\infty}\frac{(-m^{-2})^{k}}{k}\langle v |\Delta_{X}^{k}|v\rangle,\] which implies (105). Note that in the expression in the middle of (105), we are summing over h-paths of length at least 1 with a fixed starting point. 
To obtain the right hand side, we sum over orbits of the group action of \(C_{k}\) on closed paths of length \(k\), the size of the orbit of \(\tilde{\gamma}\) is exactly \(l(\tilde{\gamma})/t(\tilde{\gamma})\). _Remark 5.4_.: Both h-paths and paths form monoids w.r.t. concatenation, with \(P\) a monoid homomorphism. A map \(s\) from a monoid to \(\mathbb{R}\) or \(\mathbb{C}\) is called _multiplicative_ if it is a homomorphism of monoids, i.e. \[s(\tilde{\gamma}_{1}*\tilde{\gamma}_{2})=s(\tilde{\gamma}_{1})s(\tilde{ \gamma}_{2}). \tag{106}\] Notice that in the path sum expression for the propagator (103), we are summing over h-paths \(\tilde{\gamma}\) with the weight \[s(\tilde{\gamma}):=(m^{-2})^{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}. \tag{107}\] Below it will be important that this weight is in fact multiplicative, which is obvious from the definition. _Remark 5.5_.: Using multiplicativity of \(s\), we can resum over iterates of primitive cycles to rewrite the right hand side of (105): \[\log\det\left(\frac{K_{X}}{m^{2}}\right)=-\sum_{\begin{subarray}{c}[\tilde{ \gamma}]\in\Gamma^{\geq 1}_{X}\\ \tilde{\gamma}\text{ primitive}\end{subarray}}\ \sum_{k\geq 1}\frac{s(\tilde{ \gamma})^{k}}{k}=\sum_{\begin{subarray}{c}[\tilde{\gamma}]\in\Gamma^{\geq 1} _{X}\\ \tilde{\gamma}\text{ primitive}\end{subarray}}\log\left(1-m^{-2l(\tilde{ \gamma})}(-1)^{h(\tilde{\gamma})}\right).\] #### 5.2.3. Resumming h-paths. Path sum formulae for propagator and determinant. Summing over the fibers of the map \(P\colon\Pi_{X}(u,v)\twoheadrightarrow P_{X}(u,v)\), we can rewrite (104) as a path sum formula as follows: **Lemma 5.6**.: _If \(m^{2}>\operatorname{val}(v)\) for all \(v\in X\), we have_ \[\left\langle u\left|(1+m^{-2}\Delta_{X})^{-1}\right|v\right\rangle=m^{2}\sum_ {\gamma\in P_{X}(u,v)}\prod_{v_{i}\in V(\gamma)}\frac{1}{m^{2}+\operatorname{ val}(v_{i})}. \tag{108}\] Proof.: For a path \(\gamma\in P_{X}^{k}(u,v)\), the fiber \(P^{-1}(\gamma)\) consists of h-paths \(\tilde{\gamma}\) which hesitate an arbitrary number \(j_{i}\) of times at every vertex \(v_{i}\) in \(V(\gamma)\). For each vertex \(v_{i}\), there are \(\operatorname{val}(v_{i})^{j_{i}}\) possibilities for a path to hesitate \(j_{i}\) times at \(v_{i}\). The length of such a h-path is \(l(\tilde{\gamma})=k+j_{0}+\ldots+j_{k}\) and its degree is \(\deg(\tilde{\gamma})=k\), hence we can rewrite equation (104) as \[\sum_{\tilde{\gamma}\in\Pi_{X}(u,v)}(-m^{-2})^{l(\tilde{\gamma})} (-1)^{\deg(\tilde{\gamma})}\] \[=\sum_{k=0}^{\infty}\sum_{\gamma\in P_{X}^{k}(u,v)}\sum_{j_{0}, \ldots,j_{k}=0}^{\infty}\operatorname{val}(v_{0})^{j_{0}}\cdot\ldots\cdot \operatorname{val}(v_{k})^{j_{k}}(-m^{-2})^{k+j_{0}+\ldots+j_{k}}(-1)^{k}\] \[=\sum_{k=0}^{\infty}\sum_{\gamma\in P_{X}^{k}(u,v)}(m^{-2})^{k} \sum_{j_{0},\ldots,j_{k}}^{\infty}\operatorname{val}(v_{0})^{j_{0}}\cdot\ldots \cdot\operatorname{val}(v_{k})^{j_{k}}(-m^{-2})^{j_{0}+\ldots+j_{k}}\] \[=\sum_{\gamma\in P_{X}(u,v)}m^{2}\prod_{v_{i}\in V(\gamma)}\frac{ m^{-2}}{1+m^{-2}\cdot\operatorname{val}(v_{i})}.\] **Corollary 5.7**.: _The Green's function of the kinetic operator has the expression_ \[\langle u|G_{X}|v\rangle=\sum_{\gamma\in P_{X}(u,v)}\prod_{v_{i}\in V(\gamma)} \frac{1}{m^{2}+\operatorname{val}(v_{i})}. \tag{109}\] _In particular, if \(X\) is regular of degree \(n\), then_ \[\langle u|G_{X}|v\rangle=\sum_{\gamma\in P_{X}(u,v)}\left(\frac{1}{m^{2}+n} \right)^{l(\gamma)+1}=\frac{1}{m^{2}+n}\sum_{k=0}^{\infty}p_{X}^{k}(u,v)(m^{2} +n)^{-k}. 
\tag{110}\] To derive a path sum formula for the determinant, we use a slightly different idea, that also provides an alternative proof of the resummed formula for the propagator. Consider the operator \(\Lambda\) which acts on \(C^{0}(X)\) diagonally in the vertex basis and sends \(|v\rangle\mapsto(m^{2}+\operatorname{val}(v))|v\rangle\), that is, \[\Lambda=\operatorname{diag}(m^{2}+\operatorname{val}(v_{1}),\ldots,m^{2}+ \operatorname{val}(v_{N}))\] in the basis of \(C^{0}(X)\) corresponding to an enumeration \(v_{1},\ldots,v_{N}\) of the vertices of \(X\). Then, consider the "normalized" kinetic operator \[\tilde{K}_{X}=\Lambda^{-1}K_{X}=I-\Lambda^{-1}A_{X}, \tag{111}\] with \(A_{X}\) the adjacency matrix of the graph. Then, we have the simple generalization of the observation that matrix elements of the \(k\)-th power of the adjacency matrix \(A_{X}\) count paths of length \(k\) (see (98)), namely, matrix elements of \((\Lambda^{-1}A_{X})^{k}\Lambda^{-1}\) count paths weighted with \[w(\gamma):=\prod_{v\in V(\gamma)}\frac{1}{m^{2}+\operatorname{val}(v)}. \tag{112}\] Then, we immediately obtain \[\langle u|G_{X}|v\rangle=\langle u|\tilde{K}_{X}^{-1}\Lambda^{-1 }|v\rangle=\sum_{k=0}^{\infty}\left\langle u\left|(\Lambda^{-1}A_{X})^{k} \Lambda^{-1}\right|v\right\rangle\\ =\sum_{k=0}^{\infty}\sum_{\gamma\in P_{X}^{k}(u,v)}w(\gamma), \tag{113}\] which is (109). For the determinant, we have the following statement: **Proposition 5.8**.: _The determinant of the normalized kinetic operator has the expansions_ \[\log\det\tilde{K}_{X} =-\sum_{v\in X}\sum_{k=1}^{\infty}\sum_{\gamma\in P_{X}^{k}(v,v)} \frac{w^{\prime}(\gamma)}{l(\gamma)} \tag{114}\] \[=-\sum_{[\gamma]\in C_{X}^{\geq 1}}\frac{w^{\prime}(\gamma)}{t( \gamma)}, \tag{115}\] _where for a closed path \(\gamma\in P_{X}^{k}(v,v)\), \(w^{\prime}(\gamma)=w(\gamma)\cdot(m^{2}+\operatorname{val}(v))\).39_ Footnote 39: Note that this is well-defined on a cycle: we are simply taking the product over all vertices in the path but without repeating the one corresponding to start- and endpoint. Proof.: To see (114), we simply observe \[\log\det\tilde{K}_{X}=\operatorname{tr}\log(1-\Lambda^{-1}A_{X})=-\operatorname{tr}\sum_{k=1}^{\infty}\frac{(\Lambda^{-1}A_{X})^{k}}{k}=-\sum_{ v\in V}\sum_{k=1}^{\infty}\sum_{\gamma\in P_{X}^{k}(v,v)}\frac{w^{\prime}( \gamma)}{k}.\] To see the second formula (115), one sums over orbits of the cyclic group action on closed paths. In particular, for regular graphs we obtain a formula also derived in [9]: **Corollary 5.9**.: _If \(X\) is a regular graph, then_ \[\log\det\tilde{K}_{X}=-\sum_{k=1}^{\infty}\sum_{[\gamma]\in C_{X}^{k}}\frac{( m^{2}+n)^{-k}}{t(\gamma)}. \tag{116}\] Another corollary is the following first quantization formula for the partition function: **Theorem 5.10** (First quantization formula for Gaussian theory on closed graphs).: _The partition function of the Gaussian theory on a closed graph 
Proof.: We have \[\log Z_{X}=-\frac{1}{2}\log\det K_{X}=-\frac{1}{2}(\log\det\tilde{K}_{X}+\log \det\Lambda),\] from where the theorem follows by Proposition 5.8. _Remark 5.11_.: Notice that the weight \(w(\gamma)\) of the resummed formula (109) is not multiplicative: if \(\gamma_{1}\in P_{X}(u,v)\) and \(\gamma_{2}\in P_{X}(v,w)\) then \[\prod_{v_{i}\in\gamma_{1}}\frac{1}{m^{2}+\operatorname{val}(v_{i})}\prod_{v_ {i}\in\gamma_{2}}\frac{1}{m^{2}+\operatorname{val}(v_{i})}=\frac{1}{m^{2}+ \operatorname{val}(v)}\prod_{v_{i}\in\gamma_{1}\star\gamma_{2}}\frac{1}{m^{2} +\operatorname{val}(v_{i})},\] since on the left hand side the vertex \(v\) appears twice. _Remark 5.12_.: The sum over \(k\) in (113), (114) is absolutely convergent for any \(m^{2}>0\). The reason is that the matrix \(a=\Lambda^{-1}A_{X}\) has spectral radius smaller than 1 for \(m^{2}>0\). This in turn follows from Perron-Frobenius theorem: Since \(a\) is a nonnegative matrix, its spectral radius \(\rho(a)\) is equal to its largest eigenvalue (also known as Perron-Frobenius eigenvalue), which in turn is bounded by the maximum of the row sums of \(a\).40 Footnote 40: For any matrix \(A\) with entries \(a_{ij}\), \(\lambda\) an eigenvalue of \(A\) and \(x\) an eigenvector for \(\lambda\), we have \[|\lambda|=\frac{||\lambda x||_{\infty}}{||x||_{\infty}}\leq\sup_{||y||_{ \infty}=1}||Ay||_{\infty}=\max_{i}\sum_{j}|a_{ij}|.\] Here \(||x||_{\infty}=\max_{i}|x_{i}|\) denotes the maximum norm of a vector \(x\). The sum of entries on the \(v\)-th row of \(a\) is \(\frac{\operatorname{val}(v)}{m^{2}+\operatorname{val}(v)}<1\), which implies \(\rho(a)<\max_{v}\frac{\operatorname{val}(v)}{m^{2}+\operatorname{val}(v)}<1\). In particular, resummation from h-path-sum formula to a path-sum formula extends the absolute convergence region from \(m^{2}>\lambda_{max}(\Delta_{X})\) to \(m^{2}>0\). \begin{table} \begin{tabular}{c|c|c} Object & h-path sum & path sum \\ \hline \(\langle u|G_{X}|v\rangle\) & \(m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X}(u,v)}s(\tilde{\gamma})\) (Eq. (104)) & \(\sum_{\gamma\in P_{X}(u,v)}w(\gamma)\) (Eq. (109) \\ \(\log\det m^{-2}K_{X}\) & \(-\sum_{[\tilde{\gamma}]\in\Gamma_{X}^{\geq 1}}\frac{s(\tilde{\gamma})}{t( \tilde{\gamma})}\) (Eq. (105)) & \\ \(\log\det\tilde{K}_{X}\) & & \(-\sum_{[\gamma]\in C_{X}^{\geq 1}}\frac{w^{\prime}(\gamma)}{t(\gamma)}\) (Eq. (115)) \\ \end{tabular} \end{table} Table 4. Summary of path sum formulae, closed case. #### 5.2.4. Aside: path sum formulae for the heat kernel and the propagator - "1d gravity" version There is the following generalization of the path sum formula (100) for the heat kernel for a not necessarily regular graph \(X\). **Proposition 5.13**.: (118) \[\langle u|e^{-t\Delta_{X}}|v\rangle=\sum_{\gamma\in P_{X}(u,v)}W(\gamma;t),\] _where the \(t\)-dependent weight for a path \(\gamma\) of length \(k\) is given by an integral over a standard \(k\)-simplex of size \(t\):_ \[W(\gamma;t)=\int_{\begin{smallmatrix}t_{0},\dots,\,t_{k}>0\\ t_{0}+\dots+t_{k}=t\end{smallmatrix}}dt_{1}\cdots dt_{k}\,e^{-\sum_{i=0}^{k}t_ {i}\mathrm{val}(v_{i})}, \tag{119}\] _where we denoted \(v_{0},\dots,v_{k}\) the vertices along the path._ Proof.: To prove this result, note that the Green's function \(G_{X}\) as a function of \(m^{2}\) is the Laplace transform \(L\) of the heat kernel \(e^{-t\Delta_{X}}\) as a function of \(t\). Thus, one can recover the heat kernel as the inverse Laplace transform \(L^{-1}\) of \(G_{X}\). 
Applying \(L^{-1}\) to (109) termwise, we obtain (118), (119) (note that the product of functions \(\frac{1}{m^{2}+\mathrm{val}(v)}\) is mapped by \(L^{-1}\) to the convolution of functions \(L^{-1}(\frac{1}{m^{2}+\mathrm{val}(v)})=e^{-t\mathrm{val}(v)}\)). As a function of \(t\), the weight (119) is a certain polynomial in \(t\) and \(e^{-t}\) with rational coefficients (depending on the sequence of valences \(\mathrm{val}(v_{i})\)). If all valences along \(\gamma\) are the same (e.g. if \(X\) is regular), then the integral over the simplex evaluates to \(W(\gamma;t)=\frac{t^{k}}{k!}e^{-t\cdot\mathrm{val}}\) - same as the weight of a path in (100). Note also that integrating (119) (multiplied by \(e^{-m^{2}t}\)) in \(t\), we obtain an integral expression for the weight (112) of a path in the path sum formula for the Green's function: \[w(\gamma)=\int_{t_{0},\dots,t_{k}>0}dt_{0}\cdots dt_{k}\,e^{-\sum_{i=0}^{k}t_ {i}(\mathrm{val}(v_{i})+m^{2})}. \tag{120}\] Here unlike (119) the integral is over \(\mathbb{R}_{+}^{k+1}\), not over a \(k\)-simplex. Observe that the resulting formula for the Green's function \[\langle u|G_{X}|v\rangle=\sum_{\gamma\in P_{X}(u,v)}\int_{t_{0},\dots,t_{k}>0 }dt_{0}\cdots dt_{k}\,e^{-\sum_{i=0}^{k}t_{i}(\mathrm{val}(v_{i})+m^{2})} \tag{121}\] bears close resemblance to the first quantization formula (8), where the proper times \(t_{0},\dots,t_{k}\) should be though of as parametrizing the worldline metric field \(\xi\) (and the path \(\gamma\) is the field of the "1d sigma model").41 We imagine the particle moving on \(X\) along \(\gamma\), spending time \(t_{i}\) at the \(i\)-th vertex and making instantaneous jumps between the vertices, with the "action functional" \[\overline{S}^{1q}(\gamma,\{t_{i}\})=\sum_{i=0}^{k}t_{i}(\operatorname{val}(v_{i}) +m^{2}). \tag{122}\] ### Examples #### 5.3.1. Circle graph, \(N=3\) Consider again the circle graph of Example 3 for \(N=3\) (Figure 16). Counting h-paths from \(1\) to \(2\), we see that there are no paths of length \(0\), a unique path \((1,(12),2)\) of length \(1\), and \(5\) paths of length \(2\): \[(1,(13),3,(23),2),\quad(1,(13),1,(12),2),\quad(1,(12),1,(12),2),\\ (1,(12),2,(12),2),\quad(1,(12),2,(23),2).\] The first one comes with a \(+\) sign, since it has no hesitations, the other \(4\) paths hesitate once either at \(1\) or \(2\) and come with a minus sign, the overall count is therefore \(-3\). Counting paths beyond that is already quite hard. Looking at the Greens' function, we have \[\langle 1|G_{X}|2\rangle =\frac{1}{m^{2}(m^{2}+3)}=m^{-4}\left(\frac{1}{1+3m^{-2}}\right)= m^{-4}\sum_{k\geq 0}(-3m^{-2})^{k}\] \[=m^{-2}\left(0\cdot m^{0}+1\cdot m^{-2}-3m^{-4}+9m^{-6}+\ldots \right).\] Since the circle graph is regular, we can count paths from \(u\) to \(v\) by expanding in the parameter \(\alpha^{-1}=\frac{1}{m^{2}+2}\). 
Figure 16. The \(N=3\) circle graph. Here we observe that \[\langle 1|G_{X}|2\rangle =\frac{1}{(\alpha-2)(\alpha+1)}=\frac{1}{3\alpha}\left(\frac{1}{ 1-2\alpha^{-1}}-\frac{1}{1+\alpha^{-1}}\right)\] \[=\frac{1}{3\alpha}\sum_{k\geq 0}(2^{k}-(-1)^{k})\alpha^{-k}\] \[=\alpha^{-1}\left(0\cdot\alpha^{0}+1\cdot\alpha^{-1}+1\alpha^{-2 }+3\alpha^{-3}+5\alpha^{-4}+11\alpha^{-5}+\ldots\right),\] and one can count explicitly that there is no path of length zero, a unique path (12) of length \(1\), a unique path (132) of length \(2\), \(3\) paths (1212),(1312),(1232) of length \(3\), \(5\) paths (12312), (13212), (13132), (12132), (13232) of length \(4\), and so on.42 Similarly, we could expand Footnote 42: For brevity, here we just denote a path by its ordered collection of vertices, which determines the edges that are traversed. \[\langle 1|G_{X}|1\rangle=\frac{m^{2}+1}{m^{2}(m^{2}+3)}=m^{-2}\left(1+m^{-2} \right)\left(\frac{1}{1+3m^{-2}}\right)\] \[=m^{-2}\left(1+(-2)\sum_{k\geq 1}(-3)^{k-1}m^{-2k}\right)\] \[=m^{-2}\left(1\cdot m^{0}+(-2)\cdot m^{-2}+6\cdot m^{-4}+(-18)\cdot m ^{-6}+\ldots\right),\] which counts h-paths from vertex \(1\) to itself: a single path of length \(0\), \(2\) length \(1\) paths which hesitate once at \(1\), two length \(2\) paths with \(0\) hesitations and \(4\) length \(2\) paths with \(2\) hesitations, and so on. In terms of \(\alpha=m^{2}+2\), we get \[\langle 1|G_{X}|1\rangle =\frac{\alpha-1}{(\alpha-2)(\alpha+1)}\] \[=\alpha^{-1}\sum_{k\geq 0}\frac{2^{k}+2(-1)^{k}}{3}\alpha^{-k}\] \[= \alpha^{-1}\left(1\cdot\alpha^{0}+0\cdot\alpha^{-1}+2\cdot\alpha ^{-2}+2\cdot\alpha^{-3}+6\cdot\alpha^{-4}+\ldots\right),\] where we recognize the path counts from \(1\) to itself: A unique path (1) of length \(0\), no paths of length \(1\), two paths (121),(131) of length \(2\), \(2\) paths (1231),(1321) of length \(3\), and so on. The determinant is \(\det K_{X}=m^{2}(m^{2}+3)^{2}\), so we have \[\log\det m^{-2}K_{X} =2\log(1+3m^{-2})=-2\sum_{k\geq 1}\frac{(-3m^{-2})^{k}}{k}=\] \[=6m^{-2}-9m^{-4}+18m^{-6}-\frac{81}{2}m^{-8}+\ldots\] and we can see that rational numbers appear, because we are either counting paths with \(\frac{1}{l(\tilde{\gamma})}\), or cycles with \(\frac{1}{t(\tilde{\gamma})}\). Let us verify the cycle count for the first two powers of \(m^{-2}\). Indeed, there is a total of \(6\) cycles of length \(1\) that hesitate once, of the form \((1,(12),1)\), and similar. At length \(2\), there are \(3\) closed cycles that do not hesitate, of the form \((1,(12),2,(12),1)\). Then, there are three cycles that hesitate twice and are of the form \((1,(12),1,(31),1)\) (they visit both edges starting at a vertex). Moreover, at every vertex we have the cycles of the form \((1,(12),1,(12),1)\). There are a total of \(6\) such cycles, however, they come with a factor of \(1/2\) because those are traversed twice! Overall we obtain \(3+3+\frac{1}{2}\cdot 6=9\) cycles (they all come with the same \(+\) sign). 
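These closed-form manipulations for the circle graph are easy to cross-check numerically. A minimal sketch (an arbitrary test value of \(m^{2}\) is assumed) verifying the quoted entries of \(G_{X}\) and the regular-graph path sum formula (110):

```python
import numpy as np

# Illustrative numerical cross-check of the N = 3 circle-graph computations above.
m2 = 1.3                                              # arbitrary test mass
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
K = 2*np.eye(3) - A + m2*np.eye(3)                    # K_X = Delta_X + m^2, n = 2
G = np.linalg.inv(K)

# Closed formulas quoted above for <1|G_X|2> and <1|G_X|1>.
assert np.isclose(G[0, 1], 1.0/(m2*(m2 + 3)))
assert np.isclose(G[0, 0], (m2 + 1)/(m2*(m2 + 3)))

# Regular-graph path sum (110): G(u,v) = (m^2+n)^{-1} sum_k p^k(u,v) (m^2+n)^{-k}.
alpha = m2 + 2
series = sum(np.linalg.matrix_power(A, k)[0, 1] * alpha**(-k) for k in range(200)) / alpha
assert np.isclose(series, G[0, 1])
print("circle-graph checks passed")
```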
Finally, we can count cycles in \(X\) by expanding the logarithm of the determinant in powers of \(\alpha^{-1}\): \[\log\det\tilde{K}_{X} =\log\frac{(\alpha-2)(\alpha+1)^{2}}{\alpha^{3}}=\log\left(1-2 \alpha^{-1}\right)+2\log\left(1+\alpha^{-1}\right)\] \[=-\sum_{k\geq 1}\frac{(2\alpha^{-1})^{k}}{k}-2\sum_{k\geq 1} \frac{(-\alpha)^{-k}}{k} \tag{123}\] \[=-\sum_{k\geq 1}\frac{2^{k}+2(-1)^{k}}{k}\alpha^{-k}\] \[=-\left(0\cdot\alpha^{-1}+3\cdot\alpha^{-2}+2\alpha^{-3}+\frac{9 }{2}\alpha^{-4}+\ldots\right).\] Counting cycles we see there are 0 cycles of length 1, 3 cycles of length 2, namely (121),(131),(232), 2 cycles of length 3, namely (1231), (1321). There are 3 primitive cycles of length 4 (those of the form (12131) and similar), and 3 cycles which are traversed twice ((12121) and similar), which gives \(3+\frac{3}{2}=\frac{9}{2}\). #### 5.3.2. Line graph, \(N=3.\) Consider again the line graph of example 3.1. For instance, we have \[\langle 1|G_{X}|3\rangle =\frac{1}{m^{2}(1+m^{2})(3+m^{2})}=m^{-2}\frac{1}{2}\left(\frac{1 }{1+m^{2}}-\frac{1}{3+m^{2}}\right)\] \[=\frac{m^{-4}}{2}\sum_{k=0}^{\infty}\left((-m^{-2})^{k}-(-3m^{-2})^{k}\right) =\frac{m^{-4}}{2}\sum_{k=0}^{\infty}((-1)^{k}-(-3)^{k})m^{-2k}\] \[=m^{-2}\left(0\cdot m^{0}+0\cdot m^{-2}+1\cdot m^{-4}-4\cdot m^{- 6}+13m^{-8}+\ldots\right)\] and indeed we can observe there are no h-paths from 1 to 3 of length 0 and 1, and there is a unique path \(\gamma\) of length 2. At length 3, there are 4 different h-paths whose underlying path is \(\gamma\) and which hesitate exactly once: there are a total of 1 + 2 + 1 = 4 possibilities to do so. At the next order, there are a total of 11 possibilities for \(\gamma\) to hesitate twice, and two new paths of length 4 appear, explaining the coefficient 13. The path sum (109) becomes \[\langle 1|G_{X}|3\rangle=\frac{1}{(1+m^{2})^{2}(2+m^{2})}+\frac{2}{(1+m^{2})^{3 }(2+m^{2})^{2}}+\cdots\] Here the numerator 1 corresponds to the single path of length 2, (123); the numerator 2 corresponds to the two paths of length 4, \((12123),(12323)\). In fact, there are exactly \(2^{l-1}\) paths \(1\to 3\) of length \(2l\) for each \(l\geq 1\), and along these paths the 1-valent vertices (endpoints) alternate with the 2-valent (middle) vertex, resulting in \[\langle 1|G_{X}|3\rangle=\sum_{l\geq 1}\frac{2^{l-1}}{(1+m^{2})^{l+1}(2+m^{2})^{ l}}.\] For the determinant \(\det K_{X}=m^{2}(m^{2}+1)(m^{2}+3)\), we can give the hesitant cycles expansion \[\log\det m^{-2}K_{X} =\log(1+m^{-2})+\log(1+3m^{-2})=-\sum_{k\geq 1}\frac{(-1)^{k}+(-3 )^{k}}{k}m^{-2k}\] \[=-\left(-4m^{-2}+5m^{-4}-\frac{28}{3}m^{-6}+\ldots\right).\] Here the first 4 is given by the four hesitant cycles of length 1. At length 2, we have the 4 iterates of the length 1 hesitant cycles, contributing 2, a new hesitant cycle that hesitates twice at 2 (in different directions), and 2 regular cycles of length 2, for a total of \(4\cdot\frac{1}{2}+1+2=5.\) For the path sum we have \[-\log\det\tilde{K}_{X}=-\log\frac{m^{2}(m^{2}+3)}{(m^{2}+1)(m^{2}+2)}\] \[=-\log\left(1-\frac{2}{(m^{2}+1)(m^{2}+2)}\right)=\sum_{k\geq 1}\frac{2^{k}}{k} (m^{2}+1)^{-k}(m^{2}+2)^{-k},\] which means there are \(2^{k}/k\) cycles (counted with \(1/t(\gamma)\)) of length \(2k\). For instance, there are 2 cycles of length 2, namely (121) and (232). There is a unique primitive length 4 cycle, namely (12321), and the two non-primitive cycles (12121),(23232), which contribute \(\frac{1}{2}\) each, so we obtain \(1+2\cdot\frac{1}{2}=2\). 
There are 2 primitive length 6 cycles, namely (1232321) and (1212321), and the two non-primitive cycles (1212121), (2323232), contributing \(\frac{1}{3}\) each, for a total of \(2+\frac{2}{3}=\frac{8}{3}\). At length 8 there are 3 new primitive cycles, the iterate of the length 4 cycle and the iterates of the 2 length 2 cycles for a total of \(3+\frac{1}{2}+2\cdot\frac{1}{4}=4=2^{4}/4\). ### Relative versions In this section we will study path-sum formulae for a graph \(X\) relative to a boundary subgraph \(Y\). We will then give a path-sum proof of the gluing formula (Theorem 3.8) in the case of a closed graph presented as a gluing of subgraphs over \(Y\). The extension to gluing of cobordisms is straightforward but notationally tedious. #### 5.4.1. \(h\)-path formulae for Dirichlet propagator, extension operator, Dirichlet-to-Neumann operator In this section we consider the path sum versions of the objects introduced in Section 3.2. Remember that, for a graph \(X\) and a subgraph \(Y\), we have the notations (48): \[(K_{X})^{-1}=\left(\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right)\] and (52): \[K_{X}=\left(\begin{array}{c|c}\widehat{A}=K_{X,Y}&\widehat{B}\\ \hline\widehat{C}&\widehat{D}\end{array}\right).\] We are interested in the following objects: * The propagator with Dirichlet boundary conditions on \(Y\), \(G_{X,Y}=K_{X,Y}^{-1}\) (cf. Section 3.2.3). * The determinant of the kinetic operator \(K_{X,Y}\) with Dirichlet boundary conditions on \(Y\) (cf. Section 3.2.3). * The combinatorial Dirichlet-to-Neumann operator \(\operatorname{DN}_{Y,X}=D^{-1}\colon F_{Y}\to F_{Y}\) (cf. Section 3.2.2). * The extension operator \(E_{Y,X}=BD^{-1}\colon F_{Y}\to F_{X}\) (cf. equation (51)). **Propagator with Dirichlet boundary conditions.** For \(u,v\) two vertices of \(X\setminus Y\), let us denote by \(\Pi_{X,Y}(u,v)\) the set of h-paths from \(u\) to \(v\) that contain no vertices in \(Y\) (but they may contain edges between \(X\setminus Y\) and \(Y\)), and \(\Pi_{X,Y}^{k}(u,v)\) the subset of such paths that have length \(k\). Then we have the formula ([9]) \[\langle u|\Delta_{X,Y}^{k}|v\rangle=\sum_{\tilde{\gamma}\in\Pi_{X,Y}^{k}(u,v)} (-1)^{\deg(\tilde{\gamma})}. \tag{124}\] In exactly the same manner as in the previous subsection, we can then prove \[\langle u|(1+m^{-2}\Delta_{X,Y})^{-1}|v\rangle=\sum_{\tilde{\gamma}\in\Pi_{X,Y}(u,v )}s(\tilde{\gamma}) \tag{125}\] and therefore \[\langle u|G_{X,Y}|v\rangle=m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X,Y}(u,v)}s( \tilde{\gamma}). \tag{126}\] **Determinant of relative kinetic operator.** In the same fashion, we obtain the formula \[\log\det\left(\frac{K_{X,Y}}{m^{2}}\right)=-\sum_{[\tilde{\gamma}]\in\Gamma_{X,Y}^{ \geq 1}}\frac{s(\tilde{\gamma})}{t(\tilde{\gamma})}, \tag{127}\] where we have introduced the notation \(\Gamma_{X,Y}^{\geq 1}\) for cycles corresponding to closed h-paths in \(X\setminus Y\) that may use edges between \(X\setminus Y\) and \(Y\). **Dirichlet-to-Neumann operator.** Notice also that as a submatrix of \(K_{X}^{-1}\), we have the following path sums for \(D\) (here \(u,v\in Y\)): \[\langle u|D|v\rangle=m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X}(u,v)}s(\tilde{ \gamma}). \tag{128}\] For \(u,v\in Y\), we introduce the notation \(\Pi_{X,Y}^{\prime\prime}(u,v)\) to be those h-paths from \(u\) to \(v\) containing exactly two vertices in \(Y\), i.e. the start- and end-points. 
We define the operator \(D^{\prime}\colon C^{0}(Y)\to C^{0}(Y)\) given by summing over such paths (see Figure 17a) \[\langle u|D^{\prime}|v\rangle\colon\,=\sum_{\tilde{\gamma}\in\Pi_{X,Y}^{\prime \prime}(u,v)}s(\tilde{\gamma}). \tag{129}\] Figure 17. (A) h-paths in \(\Pi_{X,Y}^{\prime\prime}(u,v)\), contributing to \(\langle u|D^{\prime}|v\rangle\). (B) h-paths contributing to \(\langle u|(D^{\prime})^{k}|v\rangle\). Notice that \(\langle u|(D^{\prime})^{k}|v\rangle\) is given by summing over paths which cross the interface \(Y\) exactly \(k-1\) times between the start- and the end-point (see Figure 17b). Since the summand is multiplicative, we can therefore rewrite \(D\) as \[D=m^{-2}\sum_{k\geq 0}(D^{\prime})^{k}=m^{-2}(I-D^{\prime})^{-1}. \tag{130}\] Therefore the Dirichlet-to-Neumann operator is given by the formula \[\text{DN}_{Y,X}=D^{-1}=m^{2}(I-D^{\prime}). \tag{131}\] **Extension operator.** Finally, we give a path sum formula for the extension operator. To do so we introduce the notation \(\Pi^{\prime}_{X,Y}(u,v)\) for h-paths that start at a vertex \(u\in X\setminus Y\), end at a vertex \(v\in Y\), and contain only a single vertex on \(Y\), i.e. the end-point. **Lemma 5.14**.: _The extension operator can be expressed as_ \[BD^{-1}(u,v)=E_{Y,X}(u,v)=\sum_{\tilde{\gamma}\in\Pi^{\prime}_{X,Y}(u,v)}s( \tilde{\gamma}). \tag{132}\] Proof.: We will prove that composing with \(D\) we obtain \(B\). Indeed, denote the right hand side of equation (132) by \(\tilde{B}\). Then, using the h-path sum expression for \(D\) (128) we obtain \[\tilde{B}D=m^{-2}\sum_{v\in Y}\left(\sum_{\tilde{\gamma}\in\Pi^{\prime}_{X,Y} (u,v)}s(\tilde{\gamma})\right)\left(\sum_{\tilde{\gamma}\in\Pi_{X}(v,w)}s( \tilde{\gamma})\right).\] Using multiplicativity, we can rewrite this as \[m^{-2}\sum_{(\tilde{\gamma}_{1},\tilde{\gamma}_{2})\in\sqcup_{v}\Pi^{\prime}_ {X,Y}(u,v)\times\Pi_{X}(v,w)}s(\tilde{\gamma}_{1}*\tilde{\gamma}_{2}).\] Now the argument finishes by observing that any h-path \(\tilde{\gamma}\) from a vertex \(u\) in \(X\setminus Y\) to a vertex \(w\) in \(Y\) can be decomposed as follows. Let \(v\in Y\) be the first vertex of \(Y\) that appears in \(\tilde{\gamma}\) and denote \(\tilde{\gamma}_{1}\) the part of the path before \(v\), and \(\tilde{\gamma}_{2}\) the rest. Then \(\tilde{\gamma}=\tilde{\gamma}_{1}*\tilde{\gamma}_{2}\) and \(\tilde{\gamma}_{1}\in\Pi^{\prime}_{X,Y}(u,v)\). This decomposition is the inverse of the composition map \[\begin{array}{ccc}\sqcup_{v\in Y}\Pi^{\prime}_{X,Y}(u,v)\times\Pi_{X}(v,w) &\rightarrow&\Pi_{X}(u,w)\\ (\tilde{\gamma}_{1},\tilde{\gamma}_{2})&\mapsto&\tilde{\gamma}_{1}*\tilde{ \gamma}_{2}\end{array}\] which is therefore a bijection. In particular, we can rewrite the expression above as \[m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X}(u,w)}s(\tilde{\gamma})=B(u,w).\] We conclude that \(\tilde{B}=BD^{-1}\). #### 5.4.2. Resumming h-paths In the relative case, for any path \(\gamma\) we use the notation \[w_{X,Y}(\gamma)=\prod_{v\in V(\gamma)\setminus V(Y)}\frac{1}{m^{2}+\mathrm{val}_ {X}(v)}, \tag{133}\] where for a vertex \(v\in X\setminus Y\), we put the subscript \(X\) on \(\mathrm{val}_{X}(v)\) to emphasize we are considering its valence in \(X\), i.e. we are counting all edges in \(X\) incident to \(v\) regardless if they end on \(Y\) or not. Then we have the following path sum formulae for the relative objects: **Proposition 5.15**.: _The propagator with Dirichlet boundary condition can be expressed as_ \[\langle u|G_{X,Y}|v\rangle=\sum_{\gamma\in P_{X\setminus Y}(u,v)}w_{X,Y}( \gamma). 
\tag{134}\] _Here the sum is over paths involving only vertices in \(X\setminus Y\). 43_ Footnote 43: Notice that if instead we were using the path weight \(w_{X\setminus Y}(\gamma)\), we would obtain the Green’s function \(G_{X\setminus Y}\) of the closed graph \(X\setminus Y\), not the relative Green’s function \(G_{X,Y}\). _Similarly, for the extension operator we have_ \[\langle u|E_{Y,X}|v\rangle=\sum_{\gamma\in P^{\prime}_{X,Y}(u,v)}w_{X,Y}( \gamma), \tag{135}\] _where \(P^{\prime}_{X,Y}(u,v)\) denotes paths in \(X\) from \(u\) to \(v\) with exactly one vertex (i.e. the endpoint) in \(Y\). Finally, the operator \(D^{\prime}\) appearing in the Dirichlet-to-Neumann operator can be written as_ \[\langle u|D^{\prime}|v\rangle=-m^{-2}\mathrm{val}(v)\delta_{uv}+m^{-2}\sum_{ \gamma\in P^{\prime\prime}_{X,Y}(u,v)}w_{X,Y}(\gamma), \tag{136}\] _where \(P^{\prime\prime}_{X,Y}(u,v)\) denotes paths in \(X\) with exactly two (i.e. start- and endpoint) vertices in \(Y\). In particular, the Dirichlet-to-Neumann operator is_ \[\langle u|\mathrm{DN}_{Y,X}|v\rangle=(m^{2}+\mathrm{val}(v))\delta_{uv}-\sum_{ \gamma\in P^{\prime\prime}_{X,Y}(u,v)}w_{X,Y}(\gamma). \tag{137}\] Figure 18. Paths contributing to \(B\) (left) can be decomposed into paths contributing to \(E_{Y,X}\) (middle) and paths contributing to \(D\) (right), proving that \(B=E_{Y,X}D\). Proof.: Equation (134) is proved with a straightforward generalization of the arguments in the previous section. For equation (135), notice that because of the final jump there is an additional factor of \(m^{-2}\). For the Dirichlet-to-Neumann operator, we have the initial and final jumps contributing a factor of \(-m^{-2}\). In the case \(u=v\), the contribution of the h-paths which simply hesitate once at \(v\) has to be taken into account separately and results in the first term in (136). Finally, (137) follows from (136) and \(\operatorname{DN}_{Y,X}=m^{2}(I-D^{\prime})\). We also have a similar statement for the determinant. For this, we introduce the normalized relative kinetic operator \[\tilde{K}_{X,Y}=\Lambda_{X,Y}^{-1}K_{X,Y}=I-\Lambda_{X,Y}^{-1}A_{X\setminus Y},\] where \(\Lambda_{X,Y}\) is the diagonal matrix whose entries are \(m^{2}+\operatorname{val}_{X}(v)\). For a closed path \(\gamma\in P_{X\setminus Y}(v,v)\), we introduce the notation \[w^{\prime}_{X,Y}(\gamma)=(m^{2}+\operatorname{val}_{X}(v))\prod_{w\in V( \gamma)}\frac{1}{m^{2}+\operatorname{val}_{X}(w)}.\] **Proposition 5.16**.: _The determinant of the normalized relative kinetic operator is_ \[\log\det\tilde{K}_{X,Y}=-\sum_{v\in X\setminus Y}\sum_{k=1}^{\infty}\sum_{ \gamma\in P_{X\setminus Y}^{k}(v,v)}\frac{w^{\prime}_{X,Y}(\gamma)}{k}=-\sum_ {[\gamma]\in C_{X\setminus Y}^{\geq 1}}\frac{w^{\prime}_{X,Y}(\gamma)}{t( \gamma)}. \tag{138}\] Proof.: Again, simply notice that \[\log\det\tilde{K}_{X,Y}=\operatorname{tr}\log(1-\Lambda_{X,Y}^{-1}A_{X \setminus Y}).\] Then, the argument is the same as in the proof of Proposition 5.8 above. In the relative case, we are counting paths in \(X\setminus Y\), but weighted according to the valence of vertices in \(X\). This motivates the following definition. **Definition 5.17**.: We say that a pair \((X,Y)\) of a graph \(X\) and a subgraph \(Y\) is quasi-regular of degree \(n\) if all vertices \(v\in X\setminus Y\) have the same valence \(n\) in \(X\), i.e. \[\operatorname{val}_{X}(v)=n\quad\forall v\in V(X\setminus Y).\] If \(X\) is regular, the pair \((X,Y)\) is quasi-regular for any subgraph \(Y\subset X\). 
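As a quick illustration of Propositions 5.15 and 5.16, here is a minimal numerical sketch (the 4-vertex line graph with \(Y\) the two endpoints, i.e. the quasi-regular pair appearing in Example 5.21 below; \(m^{2}\) is an arbitrary test value). The matrix \((\Lambda_{X,Y}^{-1}A_{X\setminus Y})^{k}\Lambda_{X,Y}^{-1}\) collects the weights \(w_{X,Y}\) of length-\(k\) paths in \(X\setminus Y\), so resumming it must reproduce \(G_{X,Y}\), and the trace form of (138) must reproduce \(\log\det\tilde{K}_{X,Y}\).

```python
import numpy as np

# Illustrative check of (134) and (138) on the 4-vertex line graph 1-2-3-4
# with Y = {1,4}; m^2 is an arbitrary test value.
m2 = 0.9
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
val = A.sum(axis=1)                              # valences in X
K = np.diag(val) - A + m2*np.eye(4)
bulk = [1, 2]                                    # X \ Y (0-indexed)

K_XY = K[np.ix_(bulk, bulk)]                     # Dirichlet kinetic operator K_{X,Y}
Lam = np.diag(m2 + val[bulk])                    # Lambda_{X,Y}
A_bulk = A[np.ix_(bulk, bulk)]

# (134) in matrix form: the (u,v) entry of (Lam^{-1} A_bulk)^k Lam^{-1} is the sum
# of w_{X,Y}(gamma) over length-k paths in X\Y; the resummed series is G_{X,Y}.
W = np.linalg.inv(Lam) @ A_bulk
series = sum(np.linalg.matrix_power(W, k) for k in range(200)) @ np.linalg.inv(Lam)
assert np.allclose(series, np.linalg.inv(K_XY))

# (138): log det(Lam^{-1} K_{X,Y}) = -sum_{k>=1} tr (Lam^{-1} A_bulk)^k / k.
lhs = np.log(np.linalg.det(np.linalg.inv(Lam) @ K_XY))
rhs = -sum(np.trace(np.linalg.matrix_power(W, k)) / k for k in range(1, 200))
assert np.isclose(lhs, rhs)
print("relative path-sum and determinant checks passed")
```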
An important class of examples are the line graphs \(X\) of example 3.5 with \(Y\) both boundary vertices, or more generally rectangular graphs or their higher-dimensional counterparts with \(Y\) given by the collection of boundary vertices. See Figure 19. For quasi-regular graphs, the path sums of Proposition 5.15 simplify to power series in \((m^{2}+n)^{-1}\), with \(n\) the degree of \((X,Y)\): **Corollary 5.18**.: _Suppose \((X,Y)\) is quasi-regular, then we have the following power series expansions for the relative propagator, extension operator, Dirichlet-to-Neumann operator and determinant:_ \[\langle u|G_{X,Y}|v\rangle =\frac{1}{m^{2}+n}\sum_{k=0}^{\infty}p_{X\setminus Y}^{k}(u,v)(m^ {2}+n)^{-k}, \tag{139}\] \[\langle u|E_{Y,X}|v\rangle =\sum_{k=1}^{\infty}({p^{\prime}}_{X,Y}^{k})(u,v)(m^{2}+n)^{-k}, \tag{140}\] \[\langle u|\mathrm{DN}_{Y,X}|v\rangle =(m^{2}+\mathrm{val}(v))\delta_{uv}-\sum_{k=2}^{\infty}({p^{ \prime\prime}}_{X,Y}^{k})(u,v)(m^{2}+n)^{-k+1}, \tag{141}\] \[\log\det\tilde{K}_{X,Y} =-\sum_{k=1}^{\infty}\sum_{[\gamma]\in C_{X\setminus Y}^{k}}\frac{(m^{2}+n) ^{-k}}{t(\gamma)}. \tag{142}\] Again, we can collect our findings in the following first quantization formula for the partition function: **Theorem 5.19**.: _The logarithm of the partition function of the Gaussian theory relative to a subgraph \(Y\) is_ \[\hbar\log Z_{X,Y}(\phi_{Y})=-\frac{1}{2}\sum_{u,v\in Y}\phi_{Y}(u) \phi_{Y}(v)\cdot\\ \cdot\bigg{(}\left(\frac{m^{2}}{2}+\mathrm{val}_{X}(v)-\frac{1}{ 2}\mathrm{val}_{Y}(v)\right)\delta_{uv}+\frac{1}{2}(A_{Y})_{uv}\ -\sum_{\gamma\in P_{X,Y}^{\prime\prime}(u,v)}w_{X,Y}(\gamma)\bigg{)}+\\ +\frac{\hbar}{2}\left(\sum_{[\gamma]\in C_{X\setminus Y}^{\geq 1}}\frac{w_{X,Y}^{\prime}(\gamma)}{t(\gamma)}-\sum_{v\in X}\log(m^{2}+\operatorname{val}(v))\right). \tag{143}\] In (143) we are summing over all connected Feynman diagrams with no bulk vertices: boundary-boundary edges in the last term of the second line of the r.h.s. at order \(\hbar^{0}\) (together with the diagonal terms and \(\frac{1}{2}(A_{Y})_{uv}\), they sum up to \(\operatorname{DN}_{Y,X}-\frac{1}{2}K_{Y}\)) and "1-loop graphs" (cycles) on the third line at order \(\hbar^{1}\). Figure 19. A quasi-regular graph pair \((X,Y)\) with \(n=4\). Figure 20. The 2-vertex line graph. #### 5.4.3. Examples **Example 5.20**.: Consider the graph \(X\) in Figure 20, with \(Y\) the subgraph consisting of the single vertex on the right. Then, the set \(\Pi_{X,Y}\) consists exclusively of iterates of the path which hesitates once along the single edge at \(1\), \(\tilde{\gamma}=(1,(12),1)\). Therefore, we obtain \[\langle 1|G_{X,Y}|1\rangle=m^{-2}\sum_{k=0}^{\infty}(-m^{-2})^{k}=\frac{m^{-2}}{ 1+m^{-2}}=\frac{1}{1+m^{2}}.\] Alternatively, we can obtain this from the path sum formula (134) by noticing there is a single (constant) path from \(1\) to \(1\) in \(X\setminus Y\). For the determinant, we obtain \[\log\det K_{X,Y}/m^{2}=-\sum_{k\geq 1}\frac{(-m^{-2})^{k}}{k}=\log(1+m^{-2})= \log\frac{1+m^{2}}{m^{2}}.\] h-paths in \(\Pi_{X,Y}^{\prime\prime}(2,2)\) are either \((2,(12),2)\) or of the form \((2,(12),1,(12),1,\ldots,1,(12),2)\), i.e. jump from \(2\) to \(1\), hesitate \(k\) times and jump back, and therefore the operator \(D^{\prime}\) is given by \[D^{\prime}=-m^{-2}+\sum_{k\geq 0}(m^{-2})^{k+2}(-1)^{k}=-m^{-2}+\frac{m^{-4}}{1 +m^{-2}}=\frac{-1}{m^{2}+1}.\] Alternatively, one can just notice there is a unique path in \(P_{X,Y}^{\prime\prime}(2,2)\), namely (212), and use formula (137). 
Therefore the Dirichlet-to-Neumann operator is \[\operatorname{DN}_{Y,X}=m^{2}\left(1-\frac{-1}{m^{2}+1}\right)=\frac{m^{2}(2+ m^{2})}{1+m^{2}}.\] Finally, h-paths in \(\Pi_{X,Y}^{\prime}(1,2)\) are only those that hesitate \(k\) times at \(1\) before eventually jumping to \(2\), and therefore the extension operator is \[\langle 1|E_{Y,X}|2\rangle=m^{-2}\sum_{k=0}^{\infty}(-m^{-2})^{k}=m^{-2}\frac{1}{1 +m^{-2}}=\frac{1}{1+m^{2}},\] alternatively, this follows directly from formula (135), because \(P_{X,Y}^{\prime}(1,2)=\{(12)\}\). **Example 5.21**.: Consider \(X\) the \(N=4\) line graph with \(Y\) both endpoints (\(1\) and \(4\)). Then \((X,Y)\) is quasi-regular of degree \(2\) and we can count paths in \(X\setminus Y\) easily, namely, we have \[p^{k}_{X\setminus Y}(2,2)=p^{k}_{X\setminus Y}(3,3)=\begin{cases}1&k\text{ even}\\ 0&k\text{ odd}\end{cases}\] and \[p^{k}_{X\setminus Y}(2,3)=p^{k}_{X\setminus Y}(3,2)=\begin{cases}0&k\text{ even}\\ 1&k\text{ odd}\end{cases}\] Therefore, the relative Green's function is \[G_{X,Y}(2,2)=\frac{1}{m^{2}+2}\sum_{k=0}^{\infty}\frac{1}{(m^{2}+2)^{2k}}= \frac{1}{m^{2}+2}\cdot\frac{1}{1-\frac{1}{(m^{2}+2)^{2}}}=\frac{m^{2}+2}{(m^{2} +1)(m^{2}+3)}\] and \[G_{X,Y}(2,3)=\frac{1}{m^{2}+2}\sum_{k=0}^{\infty}\frac{1}{(m^{2}+2)^{2k+1}}= \frac{1}{(m^{2}+1)(m^{2}+3)},\] in agreement with (58). As for the determinant, notice there is a unique cycle of length \(2\), and all other cycles are iterates of this one; therefore, the logarithm of the normalized determinant is given by \[\log\det\tilde{K}_{X,Y}=-\sum_{k=1}^{\infty}\frac{(m^{2}+2)^{-2k}}{k}=\log(1- (m^{2}+2)^{-2})\] and the determinant is then \[\det K_{X,Y}=(m^{2}+1)(m^{2}+3),\] in agreement with (61). For an example of the extension operator, notice that \((p^{\prime})^{k}_{X,Y}(2,1)\) is \(1\) for odd \(k\) and \(0\) for even \(k\), and therefore \[\langle 2|E_{Y,X}|1\rangle=\sum_{k=0}^{\infty}(m^{2}+2)^{-(2k+1)}=\frac{m^{2}+ 2}{(m^{2}+1)(m^{2}+3)}\] and similarly \((p^{\prime})^{k}_{X,Y}(3,1)=1\) for \(k\geq 2\) even and \(0\) for odd \(k\), and therefore \[\langle 3|E_{Y,X}|1\rangle=\sum_{k=0}^{\infty}(m^{2}+2)^{-(2k+2)}=\frac{1}{(m^{ 2}+1)(m^{2}+3)},\] in agreement with (60). Finally, we can compute the matrix elements of the Dirichlet-to-Neumann operator: we have \((p^{\prime\prime})^{k}(1,1)=1\) for even \(k\geq 2\) and it vanishes for odd \(k\), therefore \[\langle 1|\text{DN}_{Y,X}|1\rangle=m^{2}+1-\sum_{k=1}^{\infty}(m^{2}+2)^{-2k+1} =m^{2}+1-\frac{m^{2}+2}{(m^{2}+1)(m^{2}+3)}.\] Similarly, \((p^{\prime\prime})^{k}(1,4)\) vanishes for even \(k\) and is \(1\) for odd \(k\geq 3\), and therefore \[\langle 1|\text{DN}_{Y,X}|4\rangle=-\sum_{k=1}^{\infty}\frac{1}{(m^{2}+2)^{2k}} =-\frac{1}{(m^{2}+1)(m^{2}+3)}.\] These formulae agree with (59). ### Gluing formulae from path sums In this section we prove Theorem 3.8 from the path sum formulae presented in this chapter. The main observation in this proof is a decomposition of h-paths in \(X\) with respect to a subgraph \(Y\). 
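Before turning to the combinatorial proof, the statements involved are easy to check numerically on a small example. The sketch below (illustrative: the 4-vertex line graph with \(Y=\{1,4\}\) and an arbitrary test value of \(m^{2}\)) verifies the explicit formulas of Example 5.21 and the gluing formulae of Theorem 3.8, with \(G_{X,Y}\), \(E_{Y,X}=BD^{-1}\) and \(\mathrm{DN}_{Y,X}=D^{-1}\) obtained from the blocks of \(K_{X}\) and \(K_{X}^{-1}\).

```python
import numpy as np

# Illustrative cross-check of Example 5.21 and of the gluing formulae of Theorem 3.8
# (G_X = G_{X,Y} + E DN^{-1} E^T on bulk vertices, det K_X = det K_{X,Y} det DN)
# on the 4-vertex line graph with Y = {1,4}; m^2 is an arbitrary test value.
m2 = 0.7
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
K = np.diag(A.sum(axis=1)) - A + m2*np.eye(4)
bulk, bdry = [1, 2], [0, 3]

G = np.linalg.inv(K)
G_XY = np.linalg.inv(K[np.ix_(bulk, bulk)])
B, D = G[np.ix_(bulk, bdry)], G[np.ix_(bdry, bdry)]
DN = np.linalg.inv(D)                       # Dirichlet-to-Neumann operator
E = B @ DN                                  # extension operator E_{Y,X} = B D^{-1}

# Example 5.21: closed formulas for G_{X,Y}, E_{Y,X} and DN_{Y,X}.
den = (m2 + 1) * (m2 + 3)
assert np.isclose(G_XY[0, 0], (m2 + 2) / den)          # <2|G_{X,Y}|2>
assert np.isclose(G_XY[0, 1], 1.0 / den)               # <2|G_{X,Y}|3>
assert np.isclose(E[0, 0], (m2 + 2) / den)             # <2|E_{Y,X}|1>
assert np.isclose(E[1, 0], 1.0 / den)                  # <3|E_{Y,X}|1>
assert np.isclose(DN[0, 0], m2 + 1 - (m2 + 2) / den)   # <1|DN|1>
assert np.isclose(DN[0, 1], -1.0 / den)                # <1|DN|4>

# Gluing formulae (Theorem 3.8).
assert np.allclose(G[np.ix_(bulk, bulk)], G_XY + E @ np.linalg.inv(DN) @ E.T)
assert np.isclose(np.linalg.det(K),
                  np.linalg.det(K[np.ix_(bulk, bulk)]) * np.linalg.det(DN))
print("Example 5.21 and gluing checks passed")
```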
**Lemma 5.22**.: _Let \(u,v\in X\), then we have a bijection_ \[\Pi_{X}(u,v)\leftrightarrow\\ \Pi_{X,Y}(u,v)\bigsqcup_{w_{1},w_{2}\in Y}\Pi^{\prime}_{X,Y}(u,w _{1})\times\Pi_{X}(w_{1},w_{2})\times\Pi^{\prime}_{X,Y}(w_{2},v), \tag{144}\] _where \(\Pi_{X,Y}(u,v)\) denotes h-paths in \(X\) that contain no vertices in \(Y\) (but they may contain edges between \(X\setminus Y\) and \(Y\)) and \(\Pi^{\prime}_{X,Y}(u,w)\), for either \(u\) or \(w\) in \(Y\), denote h-paths containing exactly one vertex in \(Y\), namely the initial or final one.44_ Footnote 44: It is possible to have \(u=w\in Y\), in which case there \(\Pi^{\prime}_{X}(w,w)\) contains only the 1-element path. Proof.: One may decompose \(\Pi_{X}(u,v)\) into paths containing no vertex in \(Y\) and those containing at least one vertex in \(Y\). The former are precisely \(\Pi_{X,Y}(u,v)\). If \(\tilde{\gamma}\) is an element of the latter, let \(w_{1}\) be the first vertex in \(\tilde{\gamma}\) in \(Y\) and \(w_{2}\) the last vertex in \(\tilde{\gamma}\) in \(Y\). Splitting \(\tilde{\gamma}\) at \(w_{1}\) and \(w_{2}\) gives the map from left to right. The inverse map is given by composition of h-paths. See also Figure 21 \begin{table} \begin{tabular}{c|c|c} Object & h-path sum & path sum \\ \hline \(\langle u|G_{X,Y}|v\rangle\) & \(m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X,Y}(u,v)}s(\tilde{\gamma})\) (Eq. (126)) & \(\sum_{\gamma\in P_{X\setminus Y}(u,v)}w_{X,Y}(\gamma)\) (Eq. (134) \\ \(\log\det\frac{K_{X,Y}}{m^{2}}\) & \(-\sum_{[\tilde{\gamma}]\in\Gamma^{\geq 1}_{X,Y}}\frac{s(\tilde{\gamma})}{t( \tilde{\gamma})}\) (Eq. (127)) & \\ \(\log\det\tilde{K}_{X,Y}\) & & \(-\sum_{[\gamma]\in C^{\geq 1}_{X,Y}}\frac{w^{\prime}_{X,Y}(\gamma)}{t( \gamma)}\) (Eq. (138)) \\ \(\langle u|E_{Y,X}|v\rangle\) & \(\sum_{\tilde{\gamma}\in\Pi^{\prime}_{X,Y}(u,v)}s(\tilde{\gamma})\) (Eq. (132) & \(\sum_{\gamma\in P^{\prime}_{X,Y}(u,v)}w_{X,Y}(\gamma)\) (135) \\ \(\langle u|\mathrm{DN}_{Y,X}|v\rangle\) & \(m^{2}\delta_{uv}-\) & \((m^{2}+\mathrm{val}(v))\delta_{uv}-\) \\ & \(-m^{2}\sum_{\tilde{\gamma}\in\Pi^{\prime\prime}_{X,Y}(u,v)}s(\tilde{\gamma})\) (Eq. (131)) & \(-\sum_{P^{\prime\prime}_{X,Y}(u,v)}w_{X,Y}(\gamma)\) (Eq. (137)) \\ \end{tabular} \end{table} Table 5. Summary of path sum formulae, relative case. For the gluing formula for the determinant, we will also require the following observation on counting of closed paths. **Lemma 5.23**.: _Denote \(\Gamma_{X,Y}^{\geq 1,(k)}\) the set of \(h\)-cycles \([\tilde{\gamma}]\) in \(X\) of length \(l(\tilde{\gamma})\geq 1\) that intersect \(Y\) exactly \(k\) times, with \(k\geq 1\). Then concatenation of paths_ \[\bigsqcup_{w_{1},w_{2},\ldots,w_{k}\in Y}\Pi_{X}^{\prime\prime}(w_{1},w_{2}) \times\Pi_{X}^{\prime\prime}(w_{2},w_{3})\times\ldots\times\Pi_{X}^{\prime \prime}(w_{k},w_{1})\to\Gamma_{X,Y}^{\geq 1,(k)} \tag{145}\] _is surjective, and a cycle \([\tilde{\gamma}]\) has precisely \(k/t(\tilde{\gamma})\) preimages._ Proof.: For a cycle \(\tilde{\gamma}\in\Gamma_{X,Y}^{\geq 1,(k)}\), denote \(w_{1},\ldots w_{k}\) the intersection points with \(Y\) and \(\tilde{\gamma}^{(i)}\) the segment of \(\tilde{\gamma}\) between \(w_{i+1}\) and \(w_{i}\) (here we set \(w_{k+1}=w_{1}\)). See Figure 22. Then obviously \(\tilde{\gamma}\) is the concatenation of the \(\tilde{\gamma}^{(i)}\), so concatenation is surjective. 
On the other hand a \(k\)-tuple of paths concatenates to the same closed path if and only if they are related to each other by a cyclic shift (this corresponds to a cyclic shift of the labeling of the intersection points). They are precisely \(k/t(\tilde{\gamma})\) such shifts. Recall that \(D^{\prime}\) is the operator given by summing the weight \(s(\tilde{\gamma})=(m^{-2})^{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}\) over paths starting and ending on \(Y\) without intersecting \(Y\) in between (Eq. (129).) **Corollary 5.24**.: _We have that_ \[\operatorname{tr}(D^{\prime})^{k}=k\sum_{\tilde{\gamma}\in\Gamma_{X,Y}^{\geq 1,(k )}}\frac{(m^{-2})^{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}}{t(\tilde{\gamma})}. \tag{146}\] Proof.: The statement follows by summing the weight \(s(\tilde{\gamma})\) over the l.h.s. and r.h.s. of (145) in Lemma 5.23, using multiplicativity of \(s(\tilde{\gamma})\) in the l.h.s. and with multiplicity \(k/t(\tilde{\gamma})\) in the r.h.s. (corresponding to the count of preimages of the map (145)). ### \(h\)-path sum proof of Theorem 3.8 We first prove the gluing formula \[\langle u|G_{X}|v\rangle=\langle u|G_{X,Y}|v\rangle+\sum_{v_{1},v_{2}\in Y} \langle u|E_{Y,X}|v_{1}\rangle\langle v_{1}|\text{DN}_{Y,X}^{-1}|v_{2}\rangle \langle v_{2}|E_{Y,X}|v\rangle.\] Applying the decomposition of \(\Pi_{X}(u,v)\) (144), and using multiplicativity of the weight \(s(\tilde{\gamma})=(m^{-2})^{l(\tilde{\gamma})}(-1)^{h(\tilde{\gamma})}\), we get \[G_{X}(u,v)=m^{-2}\sum_{\tilde{\gamma}\in\Pi_{X,Y}(u,v)}s(\tilde{ \gamma})+\\ \sum_{w_{1},w_{2}\in Y}\left(\sum_{\tilde{\gamma}_{1}\in\Pi_{X,Y }^{\prime}(u,w_{1})}s(\tilde{\gamma})\right)\left(m^{-2}\sum_{\tilde{\gamma}_{ 2}\in\Pi_{X}(w_{1},w_{2})}s(\tilde{\gamma})\right)\left(\sum_{\tilde{\gamma}_{ 1}\in\Pi_{X,Y}^{\prime}(w_{2},v)}s(\tilde{\gamma})\right). \tag{147}\] Figure 22. Cycles in \(X\) either do not intersect \(Y\) (like \(\tilde{\gamma}_{1}\)) and such that intersect \(Y\)\(k\) times (in the case of \(\tilde{\gamma}_{2}\), \(k=4\). Such paths can be decomposed into \(k\) h-paths \(\tilde{\gamma}^{(i)}\) in \(\Pi_{X,Y}^{\prime\prime}(w_{i},w_{i+1})\) in \(k\) different ways, corresponding to cyclic shift of the labels of \(w_{i}\)’s. The first term is \(G_{X,Y}\) by equation (126). In the second term, we recognize the path sum expressions (132) for the extension operator and (128) for the operator \(D\), which is the inverse of the total Dirichlet-to-Neumann operator. This completes the proof of the gluing formula for the propagator. Next, we prove the gluing formula for the determinant \[\det(K_{X})=\det(K_{X,Y})\det(\operatorname{DN}_{Y,X}).\] Dividing both sides by \(m^{2N}\), where \(N\) is the number of vertices in \(X\), this is equivalent to \[\det(m^{-2}K_{X})=\det(m^{-2}K_{X,Y})\det(m^{-2}\operatorname{DN}_{Y,X}).\] Taking negative logarithms and using \(\log\det=\operatorname{tr}\log\), this is equivalent to \[-\log\det(1+m^{-2}\Delta_{X})=-\log\det(1+m^{-2}\Delta_{X,Y})-\operatorname{ tr}\log(I-D^{\prime}), \tag{148}\] where we have used that \(\operatorname{DN}_{Y,X}=m^{2}(I-D^{\prime})\). We claim that equation (148) can be proven by summing over paths. Indeed, the left hand side is given by summing over closed h-paths in \(X\). We decompose them into paths which do not intersect \(Y\), and those that do. From the former we obtain \(-\log\det K_{X,Y}/m^{2}\) by equation (127). Decompose the latter set into paths that intersect \(Y\) exactly \(k\) times, previously denoted \(C_{X,Y}^{\geq 1,(k)}\). 
By Corollary 5.23, when summing over those paths we obtain precisely \(\operatorname{tr}(D^{\prime})^{k}/k\). Summing over \(k\) we obtain \(\operatorname{tr}\sum_{k\geq 1}(D^{\prime})^{k}/k=-\operatorname{tr}\log(I-D^{ \prime})\), which proves the gluing formula for the determinant. Path sum proof of Theorem 3.8.: In the proof above we used the h-path expansions, but of course one could have equally well used the formulae in terms of paths. To prove the gluing formula for the Green's function \[\langle u|G_{X}|v\rangle=\langle u|G_{X,Y}|v\rangle+\sum_{v_{1},v_{2}\in Y} \langle u|E_{Y,X}|v_{1}\rangle\langle v_{1}|\operatorname{DN}_{Y,X}^{-1}|v_{2 }\rangle\langle v_{2}|E_{Y,X}|v\rangle\] in terms of path counts, notice that a path crossing \(Y\) can again be decomposed into a path from \(X\) to \(Y\), then a path from \(Y\) to \(Y\) and another path from \(Y\) to \(X\). The weight \(w(\gamma)=\prod_{v\in V(\gamma)}(m^{2}+\operatorname{val}(v))^{-1}\) is distributed by among those three paths by taking the vertices on \(Y\) to the \(Y-Y\) path: In this way, when summing over all paths from the \(Y-Y\) paths we obtain precisely the operator \(D=\operatorname{DN}_{Y,X}^{-1}\) (this is a submatrix of \(G_{X}\) and hence the weights of paths _include_ start and end vertices) while from the other parts we obtain the extension operator \(E_{Y,X}\) (where weights of paths _do not_ include the vertex on \(Y\)). Next, we consider the gluing formula for the determinant, \[\det K_{X}=\det K_{X,Y}\det\operatorname{DN}_{Y,X}.\] Dividing both sides by \[\det\Lambda_{X}=\prod_{v\in X}(m^{2}+\operatorname{val}(v))=\prod _{v\in X\setminus Y}(m^{2}+\operatorname{val}_{X}(v))\prod_{v\in Y}(m^{2}+ \operatorname{val}_{X}(v))\\ =\det\Lambda_{X,Y}\det\Lambda_{Y},\] this is equivalent to \[\det\tilde{K}_{X}=\det\tilde{K}_{X,Y}\det\Lambda_{Y}^{-1}\operatorname{DN}_{Y,X}.\] Taking logarithms and using the formulae (114) and (138) for logarithms of determinants of kinetic operators, we get \[\log\det\tilde{K}_{X}-\log\det\tilde{K}_{X,Y}=-\sum_{[\gamma]\in C_{X}^{\geq 1},V( \gamma)\cap Y\neq\varnothing}\frac{w^{\prime}_{X}(\gamma)}{t(\gamma)}, \tag{149}\] where on the right hand side we are summing over cycles in \(X\) that intersect \(Y\). We therefore want to show that the sum on the r.h.s of (149) equals \(\log\det\Lambda_{Y}^{-1}\mathrm{DN}_{Y,X}\). From (137), we have that \(\mathrm{DN}_{Y,X}=\Lambda_{Y}-D^{\prime\prime}\), where we introduced the auxiliary operator \(D^{\prime\prime}\colon C^{0}(Y)\to C^{0}(Y)\) with matrix elements \[\langle u|D^{\prime\prime}|v\rangle=\sum_{\gamma\in P_{X}^{\prime\prime}(u,v) }w_{X,Y}(\gamma).\] Then \[-\log\det\Lambda_{Y}^{-1}\mathrm{DN}_{Y,X}=-\operatorname{tr}\log(I-\Lambda_ {Y}^{-1}D^{\prime\prime})=\sum_{k\geq 1}\operatorname{tr}\frac{(\Lambda_{Y}^{-1 }D^{\prime\prime})^{k}}{k}. \tag{150}\] Notice that \(\operatorname{tr}(\Lambda_{Y}^{-1}D^{\prime\prime})^{k}\) is given by summing over closed paths \(\gamma\) that intersect \(Y\) exactly \(k\) times, with the weight \(w^{\prime}(\gamma)\): the factor \((m^{2}+\operatorname{val}(v))^{-1}\) in \(w^{\prime}(\gamma)\) for vertices not on \(Y\) comes from \(D^{\prime\prime}\) (recall that \(w_{X,Y}(\gamma)\) does not contain factors for vertices on \(Y\)), and from \(\Lambda_{Y}^{-1}\), for \(v\in Y\). By a combinatorial argument analogous to Lemma 5.23, every cycle appears in this way exactly \(\frac{k}{t(\gamma)}\) times. Therefore the sum on the r.h.s. 
of (149) equals the sum on the r.h.s of equation (150), which finishes the proof. ## 6. Interacting theory: first quantization formalism In this section, we extend the path sum formulae to the interacting theory. In this language, weights of Feynman graphs are given by summing over all possible maps from a Feynman graph to a spacetime graph where edges are mapped to paths. We also analyze the gluing formula in terms of path sums. ### Closed graphs We first consider the case of closed graphs. #### 6.1.1. Edge-to-path maps Let \(\Gamma\) and \(X\) be graphs. Recall that by \(P_{X}\) we denote the set of all paths in \(X\), and by \(\Pi_{X}\) the set of h-paths in \(X\). **Definition 6.1**.: An _edge-to-path map_\(F=(F_{V},F_{P})\) from \(\Gamma\) to \(X\) is a pair of maps \(F_{V}\colon V_{\Gamma}\to V_{X}\) and \(F_{P}\colon E(\Gamma)\to P_{X}\) such that for every edge \(e=(\mathsf{u},\mathsf{v})\) in \(\Gamma\) we have \[F_{P}(e)\in P_{X}(F_{V}(\mathsf{u}),F_{V}(\mathsf{v})).\] The set of edge-to-path maps is denoted \(P_{X}^{\Gamma}\). Equivalently, an edge-to-path map is a lift of a map \(F_{V}\colon V_{\Gamma}\to V_{X}\) to the fibrations \(E_{\Gamma}\to V_{\Gamma}\times V_{\Gamma}\) and \(P_{X}\to V_{X}\times V_{X}\). Similarly, we define an _edge-to-h-path map_ as a lift of a map \(F_{V}\colon V_{\Gamma}\to V_{X}\) to the fibrations \(E_{\Gamma}\to V_{\Gamma}\times V_{\Gamma}\), \(\Pi_{X}\to V_{X}\times V_{X}\). The set of such maps is denoted \(\Pi_{X}^{\Gamma}\). Alternatively, an edge-to-path map can be thought of as labeling of \(\Gamma\) where we label vertices in \(\Gamma\) by vertices of \(X\) and edges in \(\Gamma\) by _paths_ in \(X\). #### 6.1.2. Feynman weights Suppose that \(\Gamma\) is a Feynman graph appearing in the perturbative partition function on a closed graph, with weight given by (87). By the results of the previous section, we have the following first quantization formula, a combinatorial analog of the first quantization formula (6): **Proposition 6.2**.: _The weight of the Feynman graph \(\Gamma\) has the path sum expression_ \[\Phi_{\Gamma,X} =\prod_{\mathsf{v}\in V_{\Gamma}}(-p_{\operatorname{val(v)}}) \sum_{F\in P_{X}^{\Gamma}}\prod_{e\in E_{\Gamma}}w(F_{P}(e)) \tag{152}\] \[=\prod_{\mathsf{v}\in V_{\Gamma}}(-p_{\operatorname{val(v)}}) \sum_{F\in\Pi_{X}^{\Gamma}}\prod_{e\in E_{\Gamma}}s(F_{\Pi}(e)), \tag{151}\] _where in (151) we are summing over all edge-to-path maps from \(\Gamma\) to \(X\), and in (152) we are summing over all edge-to-h-path maps._ Figure 23 contains an example of an edge-to-path map from \(\Gamma\) the \(\Theta\)-graph to a grid \(X\). We then have the following expression of the perturbative partition function: Figure 23. An example of an edge-to-path map. **Corollary 6.3**.: _The perturbative partition function of \(X\) is given in terms of edge-to-paths maps as_ \[Z_{X}^{\rm pert}=\det(K_{X})^{-\frac{1}{2}}\sum_{\Gamma}\frac{\hbar^{-\chi( \Gamma)}}{\operatorname{Aut}(\Gamma)}\sum_{F\in P_{X}^{\Gamma}}\prod_{\nu\in V _{\Gamma}}(-p_{\operatorname{val}(\nu)})\prod_{e\in E_{\Gamma}}w(F_{P}(e)). \tag{153}\] We can reformulate this as the following "first quantization formula." 
**Corollary 6.4**.: _The logarithm of the perturbative partition function has the expression_ \[\log\,Z_{X}^{\rm pert}=\\ =\frac{1}{2}\sum_{[\gamma]\in C_{X}^{\geq 1}}\frac{w^{\prime}( \gamma)}{t(\gamma)}+\sum_{\Gamma^{\rm conn}}\frac{\hbar^{-\chi(\Gamma)}}{ \operatorname{Aut}(\Gamma)}\sum_{F\in P_{X}^{\Gamma}}\prod_{\nu\in V_{\Gamma} }(-p_{\operatorname{val}(\nu)})\prod_{e\in E_{\Gamma}}w(F_{P}(e))\\ -\frac{1}{2}\sum_{v\in X}\log(m^{2}+\operatorname{val}(v)). \tag{154}\] _Here \(\Gamma^{\rm conn}\) stands for connected Feynman graphs._ We remark that in the second line of (154), one can interpret the first term as coming from an analog of edge-to-path maps for the circle, divided by automorphisms of such maps (the factor of 2 comes from orientation reversal). In this sense, the second line can be interpreted as the partition function of a 1d sigma model with target \(X\). The term in the third line should be interpreted as a normalizing constant. ### Relative version Now we let \(X\) be a graph and \(Y\) a subgraph, and consider the interacting theory on \(X\) relative to \(Y\). Recall that in the relative case, Feynman graphs \(\Gamma\) have vertices split into bulk and boundary vertices, with Feynman weight given by (89). Bulk vertices have valence at least 3, while boundary vertices are univalent. Again, we do not want to allow boundary-boundary edges. Edge-to-path maps now additionally have to respect the type of edge: bulk-bulk edges are mapped to paths in \(P_{X\setminus Y}\) and bulk-boundary edges are mapped to paths in \(P_{X,Y}^{\prime}\). We collect this in following technical definition: **Definition 6.5**.: Let \(\Gamma\) be a graph with \(V(\Gamma)=V_{\Gamma}^{\rm bulk}\sqcup V_{\Gamma}^{\partial}\), such that \(\operatorname{val}(v)\geq 3\) for all \(v\in V_{\Gamma}^{\rm bulk}\) and \(\operatorname{val}(v^{\partial})=1\) for all \(v^{\partial}\in V_{\Gamma}^{\partial}\). Denote by the induced decomposition of edges by \[E(\Gamma)=E_{\Gamma}^{\rm bulk-bulk}\sqcup E_{\Gamma}^{\rm bulk-bdry} \sqcup E_{\Gamma}^{\rm bdry-bdry}.\] Let \(X\) be a graph and \(Y\subset X\) be a subgraph. Then a _relative edge-to-path map_ (resp. _relative edge-to-h-path map_) is a pair \(F=(F_{V},F_{P})\) (resp. \(F=(F_{V},F_{\Pi})\)) where \(F_{V}\colon V(\Gamma)\to V(X)\) and \(F_{P}\colon E(\Gamma)\to P_{X}\) (resp. \(F_{\Pi}\colon E(\Gamma)\to\Pi_{X}\)) such that * \(F_{V}\) respects the vertex decompositions, i.e. \(F_{V}(V_{\Gamma}^{\rm bulk})\subset V(X)\setminus V(Y)\) and \(F_{V}(V_{\Gamma}^{\rm bdry})\subset V(Y)\), * \(F_{E}\) (resp. \(F_{\Pi}\)) is a lift of \(F_{V}\) i.e. for all edges \(e=(\mathfrak{u},\mathfrak{v})\in E(\Gamma)\) we have \(F_{P}(e)\in P_{X}(F_{V}(\mathfrak{u}),F_{V}(\mathfrak{v}))\) (resp. \(F_{\Pi}(e)\in\Pi_{X}(F_{V}(\mathfrak{u}),F_{V}(\mathfrak{v}))\)), * \(F_{P}\) (resp. \(F_{\Pi}\)) respects the edge decompositions, i.e. \(F_{P}(E_{\Gamma}^{\mathrm{bulk-bulk}})\subset P_{X\setminus Y}\), \(F_{P}(E_{\Gamma}^{\mathrm{bulk-bdry}})\subset P_{X,Y}^{\prime}\), and similarly for \(F_{\Pi}\). The set of relative edge-to-(h-)path maps is denoted \(P_{X,Y}^{\Gamma}\) (resp. \(\Pi_{X,Y}^{\Gamma}\)). Figure 24 contains an example of a relative edge-to-path map from \(\Gamma\) a Feynman graph with boundary vertices to a grid \(X\) relative to a subgraph \(Y\). 
We can now express the weight of a Feynman graph with boundary vertices as a sum over relative edge-to-path maps - the combinatorial analog of the first quantization formula (19): **Proposition 6.6**.: _Suppose that \(\Gamma\) is a Feynman graph with boundary vertices and \(\phi\in C^{0}(Y)\). Then, the Feynman weight \(\Phi_{\Gamma,(X,Y)}(\phi_{Y})\) can be expressed by summing over relative edge-to-path maps as_ \[\Phi_{\Gamma,(X,Y)}(\phi_{Y})=\\ =\sum_{F\in P_{X,Y}^{\Gamma}}\prod_{\mathsf{v}\in V_{\Gamma}^{ \mathrm{bulk}}}(-p_{\mathrm{val(v)}})\prod_{\mathsf{v}^{\partial}\in V_{ \Gamma}^{\partial}}\phi_{Y}(F_{V}(\mathsf{v}^{\partial}))\cdot\prod_{e\in E_{ \Gamma}}w_{X,Y}(F_{P}(e)). \tag{155}\] _In terms of h-paths, the expression is_ \[\Phi_{\Gamma,(X,Y)}(\phi_{Y})=\\ =(m^{2})^{-\#E^{\mathrm{bulk-bulk}}}\sum_{F\in\Pi_{X,Y}^{\Gamma}} \prod_{\mathsf{v}\in V_{\Gamma}^{\mathrm{bulk}}}(-p_{\mathrm{val(v)}})\prod_{ \mathsf{v}^{\partial}\in V_{\Gamma}^{\partial}}\phi_{Y}(F_{V}(\mathsf{v}^{ \partial}))\cdot\prod_{e\in E_{\Gamma}}s(F_{\Pi}(e)). \tag{156}\] Proof.: In (155) we are using the path sum formulae (134), (135)Similarly, to see (156) we are using the relative h-path sums (126), (132) and notice that every bulk-bulk Green's function comes with an additional power of \(m^{-2}\). We immediately obtain the following formula for the partition function: Figure 24. An example of a relative edge-to-path map. **Proposition 6.7**.: _The relative perturbative partition function can be expressed as_ \[Z_{X,Y}^{\rm pert}(\phi)=\det(K_{X,Y})^{-\frac{1}{2}}\cdot e^{- \frac{1}{2\hbar}((\phi_{Y,({\rm DN}_{Y,X}-\frac{1}{2}K_{Y})}\phi_{Y})-S_{Y}^{ \rm int}(\phi_{Y}))}.\\ \cdot\sum_{\Gamma}\frac{\hbar^{-\chi(\Gamma)}}{|{\rm Aut}(\Gamma) |}\sum_{F\in P_{X,Y}^{\Gamma}}\prod_{{\mathsf{v}}\in V_{\Gamma}^{\rm bulk}}(-p_ {\rm val({\mathsf{v}})})\prod_{\mathsf{v}^{\partial}\in V_{\Gamma}^{\partial}} \phi_{Y}(F_{V}({\mathsf{v}}^{\partial}))\cdot\prod_{e\in E_{\Gamma}}w_{X,Y}(F_{ P}(e)). \tag{157}\] _Remark 6.8_.: As in Remark 4.2, the Dirichlet-to-Neumann operator in the exponent of (157) could be expanded in terms of Feynman diagrams with boundary-boundary edges. An edge-to-path map \(F\) should map such a boundary-boundary edge \(e=({\mathsf{u}}^{\partial},{\mathsf{v}}^{\partial})\) either to a path \(\gamma\in P_{X}^{\prime\prime}\) (which is weighted with \(w_{X,Y}(\gamma)\)) or, in the case where \(F_{V}({\mathsf{u}}^{\partial})=F_{V}({\mathsf{v}}^{\partial})\), possibly to the constant path \(({\mathsf{v}}^{\partial})\) (which is then weighted with \(-(m^{2}+{\rm val}({\mathsf{v}}^{\partial}))\). Equivalently, one has the following expression for the logarithm of the relative perturbative partition function: \[\log\,Z_{X,Y}^{\rm pert}(\phi)=\\ =\frac{1}{2\hbar}\Bigg{(}\sum_{u,v\in Y}\phi_{Y}(u)\phi_{Y}(v) \cdot\bigg{(}-\big{(}m^{2}+{\rm val}_{X}(v)\big{)}\,\delta_{uv}+\sum_{\gamma \in P_{X,Y}^{\prime\prime}(u,v)}w_{X,Y}(\gamma)\bigg{)}+\\ +\frac{1}{2}(\phi_{Y},K_{Y}\phi_{Y})+\sum_{v\in Y}p(\phi_{Y}(v)) \Bigg{)}\\ +\frac{1}{2}\left(\sum_{[\gamma]\in C_{X\setminus Y}^{\geq 1}} \frac{w_{X,Y}^{\prime}(\gamma)}{t(\gamma)}-\sum_{v\in X}\log(m^{2}+{\rm val}(v ))\right)\\ +\sum_{\Gamma^{\rm conn}}\frac{\hbar^{-\chi(\Gamma)}}{|{\rm Aut}( \Gamma)|}\sum_{F\in P_{X,Y}^{\Gamma}}\prod_{{\mathsf{v}}\in V_{\Gamma}^{\rm bulk }}(-p_{\rm val({\mathsf{v}})})\prod_{\mathsf{v}^{\partial}\in V_{\Gamma}^{ \partial}}\phi_{Y}(F_{V}({\mathsf{v}}^{\partial}))\cdot\prod_{e\in E_{\Gamma}} w_{X,Y}(F_{P}(e)). 
\tag{158}\] This generalizes the results (154) and (143) to relative interacting case. ### Cutting and gluing The goal of this section is to provide a sketch of a proof of the gluing of perturbative partition functions (91) by counting paths. Suppose that \(X=X^{\prime}\cup_{Y}X^{\prime\prime}\) and \(F\in P_{X}^{\Gamma}\) is an edge-to-path map from a Feynman graph \(\Gamma\) to \(X\).45 Then, the decomposition \(X=X^{\prime}\cup_{Y}X^{\prime\prime}\) induces a decoration of \(\Gamma\), as in Section 4.2. Namely, we decorate a vertex \({\mathsf{v}}\in V_{\Gamma}\) with \(\alpha\in\{X^{\prime},Y,X^{\prime\prime}\}\) if \(F_{V}({\mathsf{v}})\in\alpha\), and we decorate an edge \(e\) with \(c\) if and only if the path \(F_{P}(e)\) contains a vertex in \(Y\). See Figure 25. Footnote 45: Again, for notational simplicity we consider only the case where \(X\) is closed, with the generalization to cobordisms notationally tedious but straightforward. Recall that from a decorated graph, we can form two new graphs \(X^{\prime}\) and \(X^{\prime\prime}\) with boundary vertices. Given and edge-to-path map \(F\) and its induced decoration of \(\Gamma\), we can define two new relative edge-to-path maps \((F^{\prime}_{V},F^{\prime}_{P})\) and \((F^{\prime\prime}_{V},F^{\prime\prime}_{P})\) for the new graphs \(X^{\prime}\) and \(X^{\prime\prime}\) as follows. The map \(F^{\prime}_{V}\) is simply the restriction of \(F_{V}\) to vertices colored \(X^{\prime}\). For edges labeled \(u\), \(F^{\prime}_{P}(e)=F_{P}(e)\). For a bulk-boundary edge in \(X^{\prime}\), \(F^{\prime}_{P}(e)\) is the segment of the path \(F_{P}(\tilde{e})\) of the corresponding edge \(\tilde{e}\) in \(\Gamma\) (that was necessarily labeled \(c\)) up to (and including) the first vertex in \(Y\). The construction of \((F^{\prime\prime}_{V},F^{\prime\prime}_{P})\) is similar, as is the extension to edge-to-hesitant-path maps. The definition of \(\Gamma^{\prime},\Gamma^{\prime\prime}\) ensures that \((F^{\prime}_{V},F^{\prime}_{P})\) and \((F^{\prime\prime}_{V},F^{\prime\prime}_{P})\) are well-defined relative edge-to-path maps. For example, from the edge-to-path map in Figure 25, one obtains the two edge-to-path maps in Figure 26. Notice that in the process of creating the cut edge-to-path-maps we are forgetting about the parts of the paths between the first and the last crossing of \(Y\), as well as the vertices labelled with \(Y\). This information is encoded in the Dirichlet-to-Neumann operator and the interacting term \(S_{Y}^{\text{int}}\) respectively. Integrating the product of a pair of relative edge-to-path maps appearing in the product \(Z_{X^{\prime},Y}^{\text{pert}}(\phi_{Y})Z_{X^{\prime\prime},Y}^{\text{pert}}( \phi_{Y})\) over \(\phi_{Y}\), two things happen: * An arbitrary number of vertices on \(Y\) is created (due to the factor of \(e^{-\frac{1}{\hbar}S_{Y}^{\text{int}}}(\phi_{Y})\)). * All vertices on \(Y\) (the new boundary vertices and those coming from the relative edge-to-path maps) are connected by the inverse \(D\) of total Dirichlet-to-Neumann operators. In this way, we obtain all edge-to-path maps that give rise to this pair of relative edge-to-path maps. This provides a sketch of an alternative proof of the gluing formula for perturbative partition functions using the first quantization formalism, i.e. path sums. ## 7. Conclusion and outlook In this paper we analyzed a combinatorial toy model for massive scalar QFT, where the spacetime manifold is a graph. 
We focused on incarnations of locality - the behaviour under cutting and gluing - and the interplay with the first quantization formalism. In particular, we showed that the convergent functional integrals naturally define a functor with source a graph cobordism category and target the category of Hilbert spaces, and we proposed an extended version with values in commutative unital algebras. We discussed the perturbative theory - the \(\hbar\to 0\) limit - and its behaviour under cutting and gluing. Finally, we analyzed the theory in the first quantization formalism, where all objects have expressions in terms of sums over paths (or h-paths) in the spacetime graph. We showed that cutting and gluing interacts naturally with those path sums. Below we outline several promising directions for future research. * see [16]. Footnote 47: One approach to constructing a QFT with corners involves geometric quantization of the BV-BFV phase spaces attached to corners, [6, 14], see also [24]. Another interesting problem is the study of the limit \(m\to 0\). In this limit, the kinetic operator becomes degenerate if no boundary conditions are imposed, and extra work is needed to make sense of theory.48 This will be particularly interesting in the case of two-dimensional lattice graphs, where the massless limit of the continuum theory is a conformal field theory (in the free case, \(p(\phi)=0\)), thus the massless limit of our toy model is a discrete model for this CFT. We also remark that while the h-path formulae do not interact well with the \(m\to 0\) limit, since they are expansions in \(m^{-2}\), for path sums the weight of a path at \(m=0\) is Footnote 48: In this case it is natural to formulate the perturbative quantum answers in terms of effective actions of the zero-mode of the field \(\phi\); it might be natural here to employ the BV-BFV formalism [7] combining effective actions with cutting-gluing. \[w(\gamma)=\prod_{v\in V(\Gamma)}\frac{1}{\operatorname{val}(v)},\] i.e. the weight of the path is the probability of a random walk on the graph where at every vertex, the walk can continue along all adjacent edges with probability \(1/\operatorname{val}(v)\). * _Gauge theories on cell complexes._ Finally, it will be interesting to study in a similar fashion (including first quantization formalism) gauge theories (e.g. \(p\)-form electrodynamics, Yang-Mills or AKSZ theories) on a cell complex, with gauge fields (and ghosts, higher ghosts and antifields) becoming cellular cochains.49 Footnote 49: Second quantization formalism for (abelian and non-abelian) \(BF\) theory on a cell complex is developed in [8]. What we propose here is a generalization to other models, possibly involving metric on cochains, and a focus on the path-sum approach.
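As a small numerical illustration of the determinant gluing formula \(\det K_{X}=\det K_{X,Y}\,\det\mathrm{DN}_{Y,X}\) discussed above, the following sketch checks it on a path graph. It assumes the conventions of the text, \(K_{X}=m^{2}+\Delta_{X}\) with \(\Delta_{X}\) the graph Laplacian, and identifies \(\mathrm{DN}_{Y,X}\) with the Schur complement of the Dirichlet block, which is how it enters the gluing formula:

```python
import numpy as np

m2, n = 1.0, 6                       # mass squared and number of vertices
A = np.zeros((n, n))                 # adjacency matrix of the path graph
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
Delta = np.diag(A.sum(axis=1)) - A   # graph Laplacian
K = m2 * np.eye(n) + Delta           # kinetic operator K_X

Y = [2]                                        # interface vertices
B = [i for i in range(n) if i not in Y]        # bulk vertices X \ Y

K_BB = K[np.ix_(B, B)]                         # Dirichlet operator K_{X,Y}
K_YY = K[np.ix_(Y, Y)]
K_YB = K[np.ix_(Y, B)]
DN = K_YY - K_YB @ np.linalg.inv(K_BB) @ K_YB.T  # Dirichlet-to-Neumann operator

print(np.linalg.det(K))                          # det K_X
print(np.linalg.det(K_BB) * np.linalg.det(DN))   # det K_{X,Y} * det DN_{Y,X}; the two agree
```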
2303.12818
An Empirical Analysis of the Shift and Scale Parameters in BatchNorm
Batch Normalization (BatchNorm) is a technique that improves the training of deep neural networks, especially Convolutional Neural Networks (CNN). It has been empirically demonstrated that BatchNorm increases performance, stability, and accuracy, although the reasons for such improvements are unclear. BatchNorm includes a normalization step as well as trainable shift and scale parameters. In this paper, we empirically examine the relative contribution to the success of BatchNorm of the normalization step, as compared to the re-parameterization via shifting and scaling. To conduct our experiments, we implement two new optimizers in PyTorch, namely, a version of BatchNorm that we refer to as AffineLayer, which includes the re-parameterization step without normalization, and a version with just the normalization step, that we call BatchNorm-minus. We compare the performance of our AffineLayer and BatchNorm-minus implementations to standard BatchNorm, and we also compare these to the case where no batch normalization is used. We experiment with four ResNet architectures (ResNet18, ResNet34, ResNet50, and ResNet101) over a standard image dataset and multiple batch sizes. Among other findings, we provide empirical evidence that the success of BatchNorm may derive primarily from improved weight initialization.
Yashna Peerthum, Mark Stamp
2023-03-22T12:41:12Z
http://arxiv.org/abs/2303.12818v1
# An Empirical Analysis of the Shift and Scale Parameters in BatchNorm ###### Abstract Batch Normalization (BatchNorm) is a technique that improves the training of deep neural networks, especially Convolutional Neural Networks (CNN). It has been empirically demonstrated that BatchNorm increases performance, stability, and accuracy, although the reasons for such improvements are unclear. BatchNorm includes a normalization step as well as trainable shift and scale parameters. In this paper, we empirically examine the relative contribution to the success of BatchNorm of the normalization step, as compared to the re-parameterization via shifting and scaling. To conduct our experiments, we implement two new optimizers in PyTorch, namely, a version of BatchNorm that we refer to as AffineLayer, which includes the re-parameterization step without normalization, and a version with just the normalization step, that we call BatchNorm-minus. We compare the performance of our AffineLayer and BatchNorm-minus implementations to standard BatchNorm, and we also compare these to the case where no batch normalization is used. We experiment with four ResNet architectures (ResNet18, ResNet34, ResNet50, and ResNet101) over a standard image dataset and multiple batch sizes. Among other findings, we provide empirical evidence that the success of BatchNorm may derive primarily from improved weight initialization. ## 1 Introduction In recent years, computational advances have resulted in the ability to train deeper networks, which has led to increases in classification accuracy on many tasks [2]. This has allowed for the use of deep learning for problems that had previously been considered extremely difficult. One such example is the use of deep Convolutional Neural Networks (CNN) for image classification. CNNs apply filters--also called kernels--to images to extract high-level features such as edges. Modern CNNs can use many such layers, which can increase their classification accuracy. However, deeper networks come with a set of challenges [5]. For example, in deep networks, overfitting is often problematic, and convergence can become harder to achieve. In addition, deeper networks are more likely to result in so-called vanishing and exploding gradients. These gradient issues arise because backpropagated gradients involve products of weights taken across many layers. In order to avoid these pitfalls, a number of approaches have been suggested with, arguably, the most notable being Batch Normalization (BatchNorm). In a CNN, BatchNorm can be viewed as a layer that we insert between convolutional layers. In effect, BatchNorm is a statistical regularization process, where we obtain a standardized output for each node in a layer. The mean \(\mu\) and standard deviation \(\sigma\) for a given neuron in a given layer are determined over an entire batch. Then we normalize each input so that \((\mu,\sigma)=(0,1)\); that is, we compute the \(z\)-score. This \(z\)-score measures how far--in terms of standard deviation units--the input to the neuron is from the norm. The \(z\)-score is then multiplied by a parameter \(\gamma\), added to another parameter \(\beta\), and the result of this affine transformation is passed on to the next layer. The parameters \(\gamma\) and \(\beta\) are learned via training through backpropagation, along with the CNN weights. 
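To make the preceding description concrete, here is a minimal sketch of the training-time BatchNorm computation for one mini-batch, written in PyTorch; the function name, shapes, and the value of \(\varepsilon\) are illustrative rather than taken from any particular implementation:

```python
import torch

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time BatchNorm for a mini-batch x of shape (batch_size, num_features)."""
    mu = x.mean(dim=0)                        # per-feature mean over the mini-batch
    var = x.var(dim=0, unbiased=False)        # per-feature variance over the mini-batch
    x_hat = (x - mu) / torch.sqrt(var + eps)  # z-score (normalization step)
    return gamma * x_hat + beta               # trainable scale and shift (re-parameterization)

x = torch.randn(32, 8)                        # 32 samples, 8 features
gamma = torch.ones(8, requires_grad=True)     # learned via backpropagation
beta = torch.zeros(8, requires_grad=True)
y = batchnorm_forward(x, gamma, beta)
```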
When BatchNorm was proposed in 2015, it was hailed as a breakthrough that unlocked the ability to develop networks with much greater complexity that did not experience the degradation in performance that had previously been observed [10]. The use of BatchNorm yields an empirical increase in training accuracy, a reduction in the number of training steps, and it improves overfitting, in the sense of decreasing the number of dropout layers that are required. However, it is not clear why BatchNorm produces such improvements. The original authors of BatchNorm plausibly suggested that the reason for the success of the technique was a reduction in the Internal Covariate Shift (ICS) [10]. ICS is defined as "the change in the distribution of network activations due to the change in network parameters during training" [10]. In effect, each layer "sees" its input as statistically differently, and the more layers, the greater this effect is likely to be. In backpropagation, such differences can lead to inconsistent updates--when gradient descent is used to modify the weights of a layer, there is an implicit assumption that the other layers have remained static [6]. In its original formulation, a BatchNorm layer was placed before an activation function, which would serve to make the input to each layer statistically similar, thus reducing ICS and thereby improving the representational power of the overall network. However, there is empirical evidence that placing the BatchNorm layer after the activation function achieves better results. This contradicts the original justification for BatchNorm, since the statistics of the inputs are not normalized. One attempt at analyzing BatchNorm injected noise into the data before the activation layers to skew the statistics of the layer--and thereby induce ICS--and found that there was no significant reduction in training accuracy as compared to standard BatchNorm [21]. The authors of [21] argue that what BatchNorm is really doing is re-parameterizing the model in a way that serves to smooth the loss surface and thereby accelerate convergence. They further posit that ICS may not even be an issue that needs to be addressed when training a neural network. Another study claimed that the speed and stability of BatchNorm are independent effects [4]. There are many alternatives to BatchNorm, with the normalization procedure taking place over differing attributes, e.g., normalization over layers, weights, and color channels each provide improvement, as compared to a standard model with no normalization. Many of these BatchNorm alternatives include shift and scale parameters, which serve to re-parameterize the data [16, 17, 20, 23, 27]. In this paper, we investigate the relationship between the re-parameterization in BatchNorm, and the normalization itself. The shift and scale parameters provide two additional trainable parameters--and hence two additional degrees of freedom--for each layer, which could account for some of the improvements observed in BatchNorm. To isolate the role of these parameters, we have produced a package that we call AffineLayer, which foregoes the normalization in BatchNorm and only performs the shift and scale re-parametrization step. We have also produced a package that we call BatchNorm-minus that includes the normalization step of BatchNorm, but not the shift and scale re-parameterization. 
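As a rough illustration of the two variants, the following PyTorch sketch shows one way such layers could be written; the class and function names mirror ours, but the details shown here are assumptions, and our actual implementations are described in Section 3:

```python
import torch
import torch.nn as nn

class AffineLayer(nn.Module):
    """Shift-and-scale re-parameterization only; no normalization is performed."""
    def __init__(self, num_features):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))   # scale, initialized to 1
        self.beta = nn.Parameter(torch.zeros(num_features))   # shift, initialized to 0

    def forward(self, x):
        # x has shape (N, C, H, W); broadcast the per-channel parameters
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

def batchnorm_minus(num_features):
    """Normalization only: BatchNorm with the learnable affine parameters disabled."""
    return nn.BatchNorm2d(num_features, affine=False)
```

The affine=False option keeps the normalization step while omitting the trainable \(\gamma\) and \(\beta\); freezing those parameters at their initial values, as described later in Section 3, has the same effect.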
By comparing our AffineLayer and BatchNorm-minus results to BatchNorm, we hope to obtain insight into the relative contribution of the normalization step, as compared to the re-parameterization step, with the ultimate goal of shedding further light on the mechanism by which BatchNorm improves the training of models. The remainder of this paper is organized as follows. In Section 2, we discuss relevant background topics, including the development of BatchNorm and previous attempts that have been made to understand why it works. We also discuss several alternatives to BatchNorm that have been previously developed. Section 3 focuses on our experimental design and provides implementation details. In Section 4, we give our experimental results and discuss our findings. We conclude the paper in Section 5, where we also consider directions for possible future work. ## 2 Background This section deals with the details of BatchNorm, the principles behind it, some potential reasons for why it works, and alternatives to BatchNorm. We also discuss why ResNet was chosen for this research and the contributions of this research paper. ### How BatchNorm Works BatchNorm is an optimization and re-parametrization procedure performed per mini-batch. It consists of a normalization step that is based on the statistics of each mini-batch, along with per-layer shift and scale parameters. During the validation phase, the shift and scale parameters from the training phase are used to evaluate inputs. #### 2.1.1 Mathematics of BatchNorm BatchNorm operates on every individual feature per mini-batch. For each feature \(x_{j}\), we calculate the mean and variance as \[\mu_{j}=\frac{1}{m}\sum_{i=1}^{m}x_{ij}\ \ \text{and}\ \ {\sigma_{j}}^{2}=\frac{1}{m}\sum_{i=1}^{m}\left(x_{ij}-\mu_{j}\right)^{2}\] respectively. We then use these values to determine the \(z\)-score \[\widehat{x}_{j}=\frac{x_{ij}-\mu_{j}}{\sqrt{{\sigma_{j}}^{2}+\varepsilon}}\] where \(\varepsilon\) is a constant for stabilizing the output. Finally, we calculate the output \[y_{j}=\gamma_{\ell}\widehat{x}_{j}+\beta_{\ell}\] where \(\gamma_{\ell}\) and \(\beta_{\ell}\) are per-layer parameters that are updated through backpropagation, alongside the weights. According to the original BatchNorm paper [10], the purpose of the scale and shift parameters \(\gamma\) and \(\beta\) is to restore information that may be lost through the normalization process of zeroing the mean. This scale and shift serves to re-parametrize the activations in a way that allows for the same family of functions to be expressed, but with trainable parameters that may make it easier to learn via gradient descent [6]. #### 2.1.2 Internal Covariate Shift Internal Covariate Shift (ICS) is a known issue for deep neural nets. The more layers the neural network has, the more the statistics of each layer change as a result of the preceding layers and, more specifically, the distributions of the activations change as more updates are performed. In backpropagation, when parameters are updated, all layers are updated simultaneously, under the implicit assumption that the statistics of previous layers are static [6]. The "covariate" refers to the way that the inputs to the neural network vary relative to each other, while the "shift" refers to the change in the distributions of the outputs across different layers. The effect of ICS on statistical predictions predates neural networks [22]. 
In the context of neural networking, ICS has also been discussed with respect to distribution changes between the training and validation domains [11]. #### What is BatchNorm Trying to Achieve? There is no rigorous proof as to why BatchNorm improves the performance of deep neural networks, nor is there a consensus on why the technique is so successful. BatchNorm has been shown empirically to regulate overfitting and reduce or eliminate the need for dropout layers. A large learning rate can cause erratic training behavior, but a learning rate that is too small can cause a model to fail to converge or to require more training epochs. BatchNorm allows for a smaller learning rate with no loss in performance, and it seems to perform better with a batch size of 30 or more. The creators of BatchNorm posited that it could be reducing the ICS and hence it works by reducing the "randomness" that occurs in each batch of data as a consequence of simultaneous updates over many layers [10]. Where to place the BatchNorm layers within a CNN has been a topic of much debate. The original paper places BatchNorm layers before the activation function, which is supposed to improve ICS and thereby reduce vanishing or exploding gradient issues during training. However, in practice it has been observed that BatchNorm performs as well--or even better--when it is applied to the output of the activation function. This would seem to contradict the stated purpose of the parameters \(\beta\) and \(\gamma\), which are supposed to control the statistics of the layer, since that ability is reduced when the output is passed through the activation function prior to BatchNorm. There are published research papers, including [9, 10, 11], that argue that BatchNorm works because it reduces ICS and acts like a normalizer. However, there are other papers, such as [8, 21], that provide evidence that ICS is not even a real issue for neural networks, and that BatchNorm is just smoothing the gradient landscape and thereby accelerating convergence. Part of the evidence for this latter perspective is that we can place the BatchNorm layer anywhere and obtain improvements, as compared to no such normalization. Others make the argument that BatchNorm smoothes out the loss surface, which makes it easier to reach a global maximum [21]. In this paper, we conduct experiments that attempt to separate out the relative effect of the two components of BatchNorm, namely, the normalization step and the re-parameterization via the trainable parameters \(\gamma\) and \(\beta\). By considering these components separately, as detailed in Section 3.2 below, we hope to gain insight into how and why BatchNorm is so effective. As far as the authors are aware, this is a novel approach to analyzing BatchNorm. ### Datasets The Canadian Institute for Advanced Research 10-class dataset, commonly referred to as CIFAR10 [15], is comprised of 60,000 images, each of which is a \(32\times 32\) pixel image. Alongside ImageNet, it is one of the most widely used datasets for machine learning and deep learning research. The CIFAR10 dataset consists of 50,000 training images (5000 from each of the 10 classes), and 10,000 validation images (1000 from each class). The 10 mutually exclusive classes are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. Also, each image only contains the specified object. This dataset is available as part of the PyTorch [19] package. An example of a CIFAR10 image is given in Figure 1. 
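In practice, CIFAR10 is distributed with the torchvision companion package; a minimal loading sketch is shown below, where the root directory and batch size are illustrative:

```python
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.ToTensor()  # converts 32x32 RGB images to tensors in [0, 1]
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                          download=True, transform=transform)
val_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                       download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=20, shuffle=True)
val_loader = DataLoader(val_set, batch_size=20, shuffle=False)
```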
In this paper, we use the CIFAR10 dataset for our experiments. CIFAR10 has been considered in several other research papers that attempt to evaluate BatchNorm. Figure 1: Example of CIFAR10 image of truck ### ResNet We use Residual Networks (ResNet) for our experiments, in part because ResNet was developed with BatchNorm built into it. Papers evaluating BatchNorm or its variants typically choose ResNet for this reason, as well as it being one of the current best classifiers [24]. ResNet was developed in 2015 and it can be viewed as a type of Convolutional Neural Network (CNN). As alluded to above, ResNet is considered the current state of the art in computer vision [7], as it consistently outperforms its competitors in competitions involving image tasks. A ResNet architecture uses a series of repeating blocks made from convolution layers that are designed to model the "residual function" instead of the output. Here, the residual function is defined as the difference between the input and the output of a residual block. If the input is \(x\) and the output is \(F(x)\) then the residual being modeled is \(F(x)-x\). The motivation behind ResNet was the observation that deeper networks sometimes performed worse than shallower networks, which is counterintuitive, as "excess" layers should simply model the identity function. By modeling the residual, a ResNet block can represent the identity function simply by outputting zero, which may be easier to learn via backpropagation. Each residual block in a ResNet has two pathways, namely, the residual mapping or an identity mapping, where the identity mapping implies that the block is, in effect, being skipped over, thus improving gradient flow through the remainder of the network. This makes deeper networks more feasible--ResNets can be trained with vastly more layers as compared to traditional neural networks. Residual blocks also serve to decrease overfitting. In practice, ResNet appears to act as a collection of shallow neural nets--in effect, an ensemble that is trained simultaneously [24]. ResNet uses Rectified Linear Unit (ReLU) activations, which have been empirically shown to decrease the incidence of vanishing gradients. In addition, ResNet makes use of BatchNorm before each activation function. Each specific ResNet architecture is specified as "ResNet\(n\)", where \(n\) denotes the number of convolution layers in the ResNet. Popular ResNet architectures include ResNet18, ResNet34, ResNet50, ResNet101, ResNet110, ResNet152, ResNet164, and ResNet1202. Note that, in general, the residual block structure in different ResNet architectures differs. As mentioned above, ResNet architectures utilize BatchNorm within residual blocks, and the contribution of BatchNorm to the success of these architectures is well documented. These factors make ResNet an ideal candidate for our experiments, where we attempt to understand the relative contributions of the components of BatchNorm (i.e., normalization and re-parameterization). In this research, we consider ResNet18, ResNet34, ResNet50, and ResNet101. All of these achieve reasonably high accuracies on the CIFAR10 dataset. These architectures use different residual blocks: ResNet18 and ResNet34 use smaller two-convolution layer blocks called _basic blocks_, while ResNet50 and ResNet101 use three-convolution layer blocks called _bottleneck blocks_. 
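To illustrate the two block types, the following simplified PyTorch sketch shows their layer arrangement; strides, downsampling shortcuts, and the exact channel counts of the real architectures are omitted, so this is a structural illustration only:

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut (ResNet18/34 style)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual F(x) plus the identity shortcut

class BottleneckBlock(nn.Module):
    """1x1 reduce, 3x3 convolve, 1x1 expand (ResNet50/101 style)."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, reduced, 1, bias=False)   # e.g. 256 -> 64
        self.bn1 = nn.BatchNorm2d(reduced)
        self.conv2 = nn.Conv2d(reduced, reduced, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(reduced)
        self.conv3 = nn.Conv2d(reduced, channels, 1, bias=False)   # e.g. 64 -> 256
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + x)
```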
The bottleneck blocks in ResNet50 and ResNet101 reduce the dimensionality of the 256-dimensional input before applying \(3\times 3\) convolution filters, the result of which is then projected back into 256 dimensions. Basic blocks and bottleneck blocks are illustrated in Figures 2(a) and (b), respectively. Figure 2: Residual blocks in selected ResNet architectures Since these four architectures perform comparably well on the CIFAR datasets, they will help us to determine whether our empirical results are dependent on the depth of the ResNet. Also, since there are two different block structures among the four architectures under consideration, we can observe the effect of the block type, relative to the components of BatchNorm. ### Related Work In this section, we consider relevant related research that has attempted to demystify BatchNorm, summarizing the methods used and the results that were obtained. We also consider a few BatchNorm alternatives and discuss how they achieve their results. #### Refuting ICS In the 2018 paper [21], evidence is provided that the success of BatchNorm is not due to ICS at all, and that it might not even be reducing ICS. To support this hypothesis, based on a VGG classifier and CIFAR10, they injected noise into the activations following each BatchNorm layer so that any mitigation of ICS that BatchNorm might have provided was no longer valid. This noise was randomized at each time step so that none of the distributions were identical. These experiments showed that there was no drop in performance between the noise-injected network and the one with standard BatchNorm. In addition, both networks significantly outperformed a VGG neural net that did not use BatchNorm. This contradicts the original BatchNorm paper; if ICS reduction was the main benefit of BatchNorm, then adding ICS to a network would degrade its performance. In contrast, these experiments show that BatchNorm improves the performance of VGG, even when ICS is increasing. In the paper [21], the authors also proposed a more precise definition for ICS in the context of neural networking. ICS should be reflected in how much a neural networking layer needs to adapt to changes in its inputs. Therefore, they quantify ICS as the difference between \(G\) and \(G^{\prime}\), where \(G\) consists of the gradient parameters before updates and \(G^{\prime}\) consists of the same set of parameters after the updates. They measured this difference in terms of the \(L_{2}\) norm and cosine similarity. If BatchNorm were indeed reducing ICS, as defined by the metric in the previous paragraph, then the use of BatchNorm layers would decrease the difference between \(G\) and \(G^{\prime}\), since BatchNorm would cause less cross-layer dependency. However, the authors of [21] obtained the surprising result that a network with BatchNorm was increasing the difference between these variables. The authors believe that this occurs because BatchNorm effectively re-parameterizes the loss function, and hence its impact is likely due to an improvement in this surface. That is, BatchNorm could be smoothing the loss surface, making it easier for gradient descent to reach the desired outcome. To verify this, they considered the "Lipschitzness" of the loss function with and without BatchNorm. A function \(f\) is \(K\)-Lipschitz provided that \[\|f(x_{1})-f(x_{2})\|\leq K\|x_{1}-x_{2}\|\] for all choices of \(x_{1}\) and \(x_{2}\), where \(K\) is real-valued, with \(K>0\). 
Lipschitz continuity is a strong form of uniform continuity, and \(K\) provides a limit on how rapidly the function can change. A smaller \(K\) implies a smoother loss function, which is a good thing with respect to convergence via gradient descent. The authors of [21] discovered that BatchNorm did not just improve the Lipschitzness of the loss function but also the Lipschitzness of its gradients, which implies increased convexity. This makes a strong case for the underlying mechanics of BatchNorm, but it does not settle which aspects of BatchNorm lead to the improvements. It could simply be the case that the addition of more trainable parameters (i.e., shift and scale) helps, since it is theorized that having more parameters might explain some of the performance disparity between shallow and deep networks [2]. #### 2.4.2 Further Experiments on the ICS In the 2020 paper [9], the authors claim to improve BatchNorm by using an alternative metric for ICS, and they determine upper and lower bounds for the ICS. They use the so-called earth mover's distance, a measure that quantifies the distance between two probability distributions. In this case, the distributions being compared are based on the statistics of the gradient values before and after updates. The paper claims to obtain an improvement over BatchNorm, albeit a small one. Interestingly, their normalization step involves an additional parameter \(\alpha\), which is trained alongside the \(\beta\) and \(\gamma\) parameters. They tested their algorithm on various ResNets [1]. #### Regularization and BatchNorm In the 2018 paper [18], population statistics were used instead of batch statistics, and a regularization term was included for the \(\gamma\) parameter. In these experiments, it was noted that for batches of size larger than 32, the population statistics function as well as the batch statistics. It was also noted that BatchNorm introduces Gaussian noise into the mean and variance parameters. The algorithm in [18] produces accuracies that are comparable to BatchNorm, but unlike the case with BatchNorm, they found that introducing dropout layers improved performance further. They also state that BatchNorm has very similar effects to \(L^{p}\) norm regularization, where the \(p\)-norm is defined as \[\|x\|_{p}=\Big{(}\sum_{i=1}^{n}|x_{i}|^{p}\Big{)}^{1/p}.\] If \(L^{p}\) norm regularization were sufficient, there would be no need for an optimization such as BatchNorm, since regularization is less computationally intensive. However, this regularization claim is not consistent with other empirical studies, such as that presented in the 2018 paper [25]. #### Weight Normalization and BatchNorm While BatchNorm deals with the input to activation functions in a layer, it would be reasonable to attempt to normalize the weights of a layer directly [20]. This WeightNorm approach works per batch, similar to BatchNorm, but claims to be less noisy and more computationally efficient, especially for shallower networks. In WeightNorm, the weight vector is re-parametrized as \[w=\frac{g}{\|v\|}v\] where \(g\) is a trainable parameter and \(\|v\|\) is the Euclidean norm of \(v\). The authors of [20] combine this with a form of BatchNorm, meant only to center the gradients. This is referred to as "mean-only BatchNorm", where only the mean of the neuron inputs is calculated. 
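A minimal sketch of the WeightNorm re-parameterization is shown below; the shapes and names are illustrative (PyTorch also provides a torch.nn.utils.weight_norm wrapper that applies the same idea to an existing layer):

```python
import torch

def weight_norm(v, g):
    """Reparameterize weights as w = g * v / ||v||, one norm per output unit."""
    return g * v / v.norm(dim=1, keepdim=True)

v = torch.randn(16, 64, requires_grad=True)  # direction parameters (16 output units)
g = torch.ones(16, 1, requires_grad=True)    # trainable magnitudes
w = weight_norm(v, g)                        # effective weights used by the layer
```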
The authors of WeightNorm emphasize that one advantage of their normalization is that it decouples the direction of the weight vector from its magnitude, and this has led to speculation that the performance of BatchNorm is also due to this property [14]. #### Decoupling the Length and Direction of the Weights In [14], it is shown that the transformations that BatchNorm imposes result in the magnitude of the weight vector being independent of the direction of the vector. The authors hypothesize that this allows BatchNorm to use properties of the optimization landscape in a way that other regularization methods cannot. Using this property in their optimization step, they were able to achieve linear convergence on a non-convex problem. #### 2.4.6 Residual Learning without Normalization Another way in which BatchNorm could be improving deep neural nets is by making the weight initializations more consistent at the start of each epoch. The weight initialization problem had been discussed before the inception of BatchNorm [5]. In 2019, a paper based on research done at Facebook developed a method called Fixup Initialization [28]. This method is another attempt to solve the exploding and vanishing gradient issue, which is related to the fact that the deeper the neural net, the larger the variance of its output will tend to be. Fixup introduces a rescaling of the standard weight initializations and it also includes trainable shift and scale parameters similar to BatchNorm. The authors claim that, using Fixup, they can obtain results that are superior to BatchNorm on the CIFAR10 dataset, based on a ResNet architecture. Given that they also use shift and scale parameters, this does not have clear implications for the effective mechanism behind BatchNorm. #### 2.4.7 Decorrelated Batch Normalization One of the motivations behind the development of BatchNorm was the idea that inputs to the activations should be whitened, which requires scaling, standardizing, and decorrelating. BatchNorm, however, only implements the first two, because it is computationally intensive to decorrelate--this would require computing the inverse square root of the covariance matrix during backpropagation [13]. The 2018 paper [8] implements what the authors call "Decorrelated BatchNorm" through a process called Zero Phase Component Analysis. This process involves scaling along eigenvectors, and is similar to Principal Component Analysis, except that it does not rotate the coordinate axes. The authors use the whitening transformation \[\widehat{x}_{i}=\Sigma^{-1/2}\big{(}x_{i}-\mu\big{)}\] where \(\mu\) is the mean of the mini-batch and \(\Sigma\) is the covariance matrix of the mini-batch. Testing on CIFAR10 and the more challenging CIFAR100 dataset using ResNet, the authors of [8] note that the whitening process creates an improvement in performance over vanilla BatchNorm. They also recommend including shift and scale parameters, since these also improved performance. However, the computational cost of whitening is non-trivial. #### Adaptive Batch Normalization Adaptive Batch Normalization (AdaBN) improves on BatchNorm in transfer learning applications [16]. One of the issues with BatchNorm is that there is a disconnect between source and target domains, in the sense that the statistics used for each differ. Here, the source is the data the weights are derived from, while the target is the new data that we are classifying. 
AdaBN uses the BatchNorm statistics and combines them with weight statistics, with the rationale being that BatchNorm statistics provide information about the source, while the weight statistics provide information about the target. For neuron \(j\) and for an image \(m\) in the dataset, they calculate \[y_{j}(m)=\gamma_{j}\frac{x_{j}(m)-\mu_{j}^{t}}{\alpha_{j}^{t}}+\beta_{j}\] where \(\mu_{j}^{t}\) and \(\alpha_{j}^{t}\) are, respectively, the mean and variance of the outputs of the neuron in the target domain. #### AutoDIAL: Automatic DomaIn Alignment Layers Another transfer learning algorithm is AutoDIAL [3], which attempts to maximize classification accuracy by aligning the source and target domains. They do so by looking at statistics from both domains in advance and designing a parameter \(\theta\) that represents the shared weights. They still use BatchNorm layers to bring the two domain together but they do so via a parameter \(\alpha\) that quantifies the degree of mixing of both sets of statistics. If \(\alpha=1\), then the domains are not aligned while \(\alpha=0.5\) indicates that they are partially aligned. #### Layer Normalization LayerNorm functions within a mini-batch, where it is trying to normalize the inputs with respect to the other features in the same layer of the neural network [27]. This approach uses the same statistics as BatchNorm but whereas BatchNorm is based on the same feature, LayerNorm is computed across different features. This works best when the features are similar to each other in scale. LayerNorm has been used successfully in Recurrent Neural Networks (RNN) and transformers-based machine learning models. Since it functions per layer, unlike BatchNorm, there are no dependencies between layers and hence LayerNorm would not be expected to result in any decrease of ICS within a network. #### Instance Normalization InstanceNorm is a variation of LayerNorm that works across RGB channels instead of features [27]. This is an attempt at maximizing contrast within images and it has been applied with success to GANs. #### Group Normalization GroupNorm was created to allow for smaller batch sizes, as compared to standard BatchNorm [26]. For high-resolution images, smaller batches of size one or two are preferred, whereas BatchNorm requires larger batch sizes to perform well. GroupNorm, does not normalize in batches, but instead normalizes along the feature dimension by considering groups of features. GroupNorm has been shown to work well for batches of size two, and it may enhance object segmentation and detection. #### SwitchBlade Normalization SwitchBlade Normalization (SN) combines three approaches that we have discussed above [17]. Specifically, SN combines InstanceNorm (to normalize across each feature), LayerNorm (to normalize across each layer), and BatchNorm (to normalize across each batch). The algorithm learns which of the three types of normalizations works best with the data and can "switch" between any combination of the three that achieves the best result. The authors of SN note that of the three normalizations, BatchNorm is assigned the highest weight during image classification tasks. ## 3 Experimental Design In this section, we present our experimental process from a high-level perspective. Specifically, we discuss the design of our BatchNorm variants, our PyTorch implementations, and the hyperparameters selected for the experiments presented in Section 4. 
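The normalization variants surveyed above differ mainly in the axes over which statistics are computed. As a point of reference, the following sketch applies PyTorch's built-in versions of these layers to the same input; the channel and group counts are illustrative, and these are not the configurations used in our experiments:

```python
import torch
import torch.nn as nn

x = torch.randn(20, 64, 32, 32)  # (batch, channels, height, width)

batch_norm = nn.BatchNorm2d(64)          # statistics per channel, over (batch, H, W)
layer_norm = nn.LayerNorm([64, 32, 32])  # statistics per sample, over (C, H, W)
inst_norm = nn.InstanceNorm2d(64)        # statistics per sample and channel, over (H, W)
group_norm = nn.GroupNorm(8, 64)         # statistics per sample, over groups of 8 channels

for norm in (batch_norm, layer_norm, inst_norm, group_norm):
    print(type(norm).__name__, norm(x).shape)
```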
### Architecture Selection Above, we explained that ResNet was chosen for our experiments because it is the current best image classifier and it is heavily dependent on BatchNorm. ResNet also comes in multiple variants, enabling us to easily experiment with different depths and different residual block structures. Based on preliminary tests, we chose to focus on ResNet18, ResNet34, ResNet50, and ResNet101, since these models are fast to train, and they are sufficient to illustrate the key points of our research. Recall that ResNet18 and ResNet34 use basic residual blocks, while ResNet50 and ResNet101 use bottleneck blocks, as illustrated in Figure 2, above. Hence, ResNet34 can be viewed as a deeper version of ResNet18, and the same can be said of the pair ResNet101 and ResNet50. However, ResNet50 is not just a deeper version of ResNet18, for example. ### BatchNorm Variants As mentioned above, we have implemented two BatchNorm variants that are designed to help us determine the relative contributions of the normalization step, as compared to the re-parameterization step. The first of these, which we refer to as AffineLayer, includes only the affine transformation part of BatchNorm. That is, AffineLayer does not normalize the output, but does include trainable shift and scale parameters (\(\gamma\) and \(\beta\), respectively). These parameters offer two additional degrees of freedom for each neuron, which may allowing for more expressive models. We also develop and analyze a variant that includes the normalization step of BatchNorm, but not the re-parameterization, which we refer to as BatchNorm-minus. We compare these techniques to standard BatchNorm and to the case where no normalization is use, which we denote as "none" in our tables and graphs. Table 1 summarizes the four variations that we test on the ResNet18, ResNet34, ResNet50, and ResNet101 networks. Tables 2 and 3 summarize the hyperparameters tested (via grid search) for the ResNet architectures under consideration. In both of these tables, boldface is used to indicate the hyperparameter value that yields the best result. Note that in every case, the Adam optimizer is best, as is a learning rate of 0.001, while the best choice for batch size varies considerably. Therefore, for all experiments in Section 4, we use the Adam optimizer and a learning rate of 0.001, and all models are trained for 15 epochs. To reduce the number of potential confounding variables, we only consider batch sizes of 20 and 50. Thus, our experiments will not necessarily yield the best possible accuracies, but that is not our purpose. Instead, our goal to highlight differences between the ResNet models, relative to the four normalization schemes under consideration. ### Implementation We conduct our experiments using PyTorch, an open source machine learning library, which is itself based on the Torch package. For our purposes, the main benefit of PyTorch comes from its use of tensors, which are auto-differentiable numerical arrays that allow GPU parallelized Basic Linear Algebra Subprograms (BLAS) operations [19]. 
This enables us to implement our AffineLayer as a tensor \begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{Normalization} & Re-parameterize & \multirow{2}{*}{Re-normalize} \\ & (\(\beta\) and \(\gamma\)) & \\ \hline BatchNorm & ✓ & ✓ \\ AffineLayer & ✓ & ✗ \\ BatchNorm-minus & ✗ & ✓ \\ None & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Normalizations tested layer in the form of a custom PyTorch nn.module, which is one of the base classes. The two parameters, \(\beta\) and \(\gamma\), are tensors and can be multiplied and added to the input layer. Defining \(\beta\) and \(\gamma\) as nn.Parameter, which is a type of array, makes them part of the computational graph which enables them to be trained via backpropagation. PyTorch is also convenient for our purposes because it offers a ResNet builder function that lets the user select which optimization layer to pass to the builder as an argument. To implement BatchNorm-minus, we modified the existing BatchNorm layer in PyTorch to make the shift and scale no longer trainable, which leaves them at their initial values of \((\beta,\gamma)=(0,1)\). To train our models without any type of normalization (which is denoted as "none" in our tables and graphs), we simply use an identity layer in place of BatchNorm. \begin{table} \begin{tabular}{c|c c c} \hline \hline Normalization & Hyperparameter & Values tested & Best validation accuracy \\ \hline \multirow{3}{*}{BatchNorm} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.7665} \\ & Optimizer & **Adam**, SGD & \\ & Batch size & \(20,30,\ldots,100\) (**30**) & \\ \hline \multirow{3}{*}{AffineLayer} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.6904} \\ & Optimizer & **Adam**, SGD & \\ & Batch size & \(20,30,\ldots,100\) (**80**) & \\ \hline \multirow{3}{*}{BatchNorm-minus} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.7730} \\ & Optimizer & **Adam**, SGD & \\ & Batch size & \(20,30,\ldots,100\) (**20**) & \\ \hline \multirow{3}{*}{None} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.6877} \\ & Optimizer & **Adam**, SGD & \\ \cline{1-1} & Batch size & \(20,30,\ldots,100\) (**80**) & \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameters for ResNet18 and ResNet34 \begin{table} \begin{tabular}{c|c c c} \hline \hline Normalization & Hyperparameter & Values tested & Best validation accuracy \\ \hline \multirow{3}{*}{BatchNorm} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.7469} \\ & Optimizer & **Adam**, SGD & \\ & Batch size & \(20,30,\ldots,100\) (**70**) & \\ \hline \multirow{3}{*}{AffineLayer} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.6986} \\ & Optimizer & **Adam**, SGD & \\ \cline{1-1} & Optimizer & **Adam**, SGD & \\ \cline{1-1} & Batch size & \(20,30,\ldots,100\) (**100**) & \\ \hline \multirow{3}{*}{None} & Learning rate & 0.01,0.005,**0.001** & \multirow{3}{*}{0.6540} \\ & Optimizer & **Adam**, SGD & \\ \cline{1-1} & Batch size & \(20,30,\ldots,100\) (**70**) & \\ \hline \hline \end{tabular} \end{table} Table 3: Hyperparameters for ResNet50 and ResNet101 Finally, all of our experiments have been run on an RTX3080Ti GPU and take on average two minutes per epoch to complete. Since our hardware enables fast training, we are able to conduct a large number of experiments. ## 4 Experimental Results In this section, we compare the four different normalizations discussed above, namely, BatchNorm, AffineLayer, BatchNorm-minus, and "none" (i.e., no normalization). 
First, we give results for the ResNet18 and ResNet50 architectures, then we consider ResNet34 and ResNet101. We also provide an in-depth analysis of gradient and weight statistics for our models, and we conclude this section with a discussion of our experimental results.

### ResNet18 and ResNet50 Experiments As with all of our experiments, for the ResNet18 and ResNet50 models, we use the CIFAR10 dataset. These two architectures are the shallowest of their respective types, with ResNet18 using basic residual blocks and ResNet50 using bottleneck residual blocks. Our experiments with these models provide a point of comparison between the two types of blocks and also enable us to see how depth affects the results. Following our experiments with these models, we further experiment with the ResNet34 and ResNet101 architectures.

#### 4.1.1 ResNet18 Results Table 4 compares the four normalizations--in terms of validation accuracy--based on the average of four different runs of each. The differences between batch sizes of 20 and 50 are marginal. From Table 4 we observe that BatchNorm-minus achieves slightly better results than BatchNorm. Recall that for BatchNorm-minus we found its optimal batch size to be 20, while 30 was optimal for BatchNorm. This suggests that for ResNet18, it may be better to use BatchNorm-minus, since smaller batch sizes have been empirically linked to improved convergence properties [12]. Also, for ResNet18, our BatchNorm-minus results provide evidence that the shift and scale parameters are not as important as the normalization step.

\begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{Normalization} & \multicolumn{2}{c}{Validation accuracy} \\ & Batch size 20 & Batch size 50 \\ \hline BatchNorm & 0.7665 & 0.7569 \\ AffineLayer & 0.6794 & 0.6904 \\ BatchNorm-minus & 0.7730 & 0.7644 \\ None & 0.6643 & 0.6877 \\ \hline \hline \end{tabular} \end{table} Table 4: ResNet18 experimental results

#### 4.1.2 ResNet50 Results As can be observed by comparing Table 5 to Table 4, we obtain markedly different results with ResNet50 as compared to ResNet18. Specifically, the performance of BatchNorm-minus drops dramatically, while BatchNorm is the best performer in the ResNet50 case. This could be due to the bottleneck architecture of ResNet50, which projects the convolutional feature maps into a lower dimension before re-projecting them back to their original dimension. In contrast to BatchNorm-minus, the performance of AffineLayer increases for ResNet50, as compared to ResNet18. The parameters \(\beta\) and \(\gamma\) may be adding expressiveness back to the model. Lacking those additional training parameters, BatchNorm-minus has trouble converging, and even under-performs the case where no normalization is used. We conjecture that the \(1\times 1\) convolution of the ResNet50 bottleneck architecture is stripping the model of information, while the additional shift and scale parameters enable the model to recover some of that information. This would explain why the performance of BatchNorm-minus falls below AffineLayer. Whatever the reason, it appears that for ResNet50, the affine parameters play a much more significant role than in ResNet18. Another takeaway is that AffineLayer, which has no normalization step, clearly outperforms BatchNorm-minus, regardless of the batch size. These results indicate that the normalization step is not the primary driver of improved performance in this case. 
Note that the hypotheses that the success of BatchNorm is due to smoothing the loss surface [21] or decoupling the length and direction of the weight vectors [14] both are related to the normalization step. Since BatchNorm-minus performs poorly on ResNet50, our results indicate that these hypotheses fail to paint a complete picture, at least with respect to ResNet50. In the next section, we shall see that the same comment holds for ResNet101. Since both of these architectures use bottleneck blocks--whereas ResNet18 and ResNet34 do not--it is likely that this is a key factor. \begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{Normalization} & \multicolumn{2}{c}{Validation accuracy} \\ & Batch size 20 & Batch size 50 \\ \hline BatchNorm & 0.7469 & 0.7424 \\ AffineLayer & 0.6957 & 0.6986 \\ BatchNorm-minus & 0.5597 & 0.6540 \\ None & 0.6786 & 0.6939 \\ \hline \hline \end{tabular} \end{table} Table 5: ResNet50 experimental results ### ResNet34 and ResNet101 Experiments In the previous section, we observed that BatchNorm-minus performs well with ResNet18, but poorly with ResNet50. One possible explanation for this difference is that the bottleneck block of ResNet50--which is not present in ResNet18--benefits from BatchNorm. To test this hypothesis, in this section we consider a series of experiments involving ResNet34 and ResNet101. ResNet34 has the same general structure as ResNet50 and the same number of residual blocks but uses basic blocks instead of bottleneck blocks. Thus, other than the difference in depth, the primary difference between ResNet34 and ResNet50 is the block type. ResNet101 is a deeper version of ResNet50 with the same bottleneck block structure. #### 4.2.1 ResNet34 Results Comparing the results for ResNet34 in Table 6 to those for ResNet18 in Table 4, we see more similarities than differences. In both cases, BatchNorm-minus marginally outperforms BatchNorm, with the other two cases trailing far behind This provides additional evidence that the bottleneck block affects the way that BatchNorm and its variants work. #### 4.2.2 ResNet101 Results The results in Table 7 for ResNet101 are analogous to what we observed for ResNet50. In this case, the performance for each normalization is worse than ResNet18 or ResNet34 and, crucially, the drop-off for BatchNorm-minus is large. AffineLayer is the best performer in this case. Since AffineLayer includes the two parameters that bring ICS back into the model, this is additional evidence that ICS reduction is not the reason for the success of BatchNorm. In fact, BatchNorm-minus would provide the tightest ICS control, and it gives us the worst results in this case. \begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{Normalization} & \multicolumn{2}{c}{Validation accuracy} \\ & Batch size 20 & Batch size 50 \\ \hline BatchNorm & 0.7717 & 0.7554 \\ AffineLayer & 0.6856 & 0.6837 \\ BatchNorm-minus & 0.7719 & 0.7557 \\ None & 0.6661 & 0.6782 \\ \hline \hline \end{tabular} \end{table} Table 6: ResNet34 experimental results ### Analysis of Weights and Gradients For ResNet18, we found that BatchNorm clearly outperformed AffineLayer; see Table 4, above. In an effort to better understand the reasons for the differing performance of these two normalization schemes, we extract the weights and gradients for the first epoch of ResNet18. Specifically, we consider the first and last layers of the ResNet18 model, based on the first 20 updates and the last 20 updates. 
We refer to these as "input" (first layer), "final" (last layer), "early" (first 20 updates), and "late" (last 20 updates). Here, we extract the weights and gradients for each of input-early, input-late, final-early, and final-late, and we plot various histograms of these distributions. In Figure 3(a), we see that the weight distribution changes from input-early to final-late are fairly modest for BatchNorm, while Figure 3(b) yields the same conclusion for AffineLayer. In Figure 3(c), we have overlayed the final-late weights of BatchNorm and AffineLayer, while Figure 3(d) gives the analogous result for the gradients. We see that the weights and gradients behave similarly, and hence whatever is causing BatchNorm to outperform AffineLayer does not appear to be distinguishable through this weight and gradient distribution comparison. The bottom line here is that AffineLayer has a similar effect on the gradients as BatchNorm, at least in the all-important first epoch. Since AffineLayer performs significantly worse than BatchNorm, these results cast doubt on the claim that the success of BatchNorm is due to it stabilizing the gradients [10]. The results in Figure 3 show minimal differences between BatchNorm and AffineLayer. A significant difference between AffineLayer and BatchNorm can be observed by comparing the input-early and input-late updates. Specifically, in Figure 4(a), we compare the input-early gradient with the input-late gradient for BatchNorm, while Figure 4(b) provides the analogous comparison for AffineLayer. We observe from Figure 4(a) that for BatchNorm, the gradient starts spread out and become more tightly focused, whereas Figure 4(b) shows the opposite behavior for the AffineLayer gradient. Figure 4 provides additional evidence that it is the weight normalization at the beginning of each epoch that is causing the observed performance differences between BatchNorm and AffineLayer, at least in the case of ResNet18. We discuss this further in the next section. \begin{table} \begin{tabular}{c|c c} \hline \hline \multirow{2}{*}{Normalization} & \multicolumn{2}{c}{Validation accuracy} \\ & Batch size 20 & Batch size 50 \\ \hline BatchNorm & 0.6971 & 0.6746 \\ AffineLayer & 0.7032 & 0.6959 \\ BatchNorm-minus & 0.4412 & 0.4128 \\ None & 0.6819 & 0.6845 \\ \hline \hline \end{tabular} \end{table} Table 7: ResNet101 experimental results ### Discussion In Figure 5, we summarize the results of our ResNet image classification experiments. Since we find minimal differences between the results for batch sizes of 20 and 50, for each model and normalization scheme, we have graphed the better of the results for batch size 20 or 50. Figure 4: Gradients for input-early vs input-late Figure 3: Weight and gradient comparisons From Figure 5, we note that for the models tested that employ basic blocks, namely, ResNet18 and ResNet34, BatchNorm-minus performs as well as standard BatchNorm. Since BatchNorm-minus normalizes the weights similarly to BatchNorm, this is consistent with the results in Figure 4, above. Also, since BatchNorm-minus lacks the trainable shift and scale parameters of BatchNorm, it appears that the additional degrees of freedom provided by these parameters are not particularly useful when training ResNet18 or ResNet34, and we conjecture that the same is true of any ResNet model that uses basic blocks. If this is the case, a simpler and more efficient normalization scheme can be used with such models, without any appreciable loss in performance. 
On the other hand, for models tested that include bottleneck blocks, namely, ResNet50 and ResNet101, BatchNorm-minus performs relatively poorly. Hence, we conclude that the additional degrees of freedom provided by the shift and scale parameters are critical for these models, and we conjecture that the same is true of any ResNet that utilizes bottleneck blocks. Additional evidence that such is the case is provided by the fact that AffineLayer--which includes trainable shift and scale parameters--performs better on the bottleneck block architectures as compared to the basic block architectures. ## 5 Conclusion A considerable body of previous research has focused on BatchNorm, but to the best of our knowledge, none has followed the approach in this paper, where the normalization and re-parameterization steps are separated and analyzed. We applied four distinct normalization schemes to each of ResNet18, ResNet34, Figure 5: Summary of Results ResNet50, and ResNet101, and presented a brief analysis of weights and gradients for ResNet18. Our main results involve the relative contribution of the normalization and re-parameterization steps for the various ResNet architectures tested. Specifically, we found that for ResNet50 and ResNet101, the trainable shift and scale parameters appear to increase the expressiveness of the model, allowing it to recover more information after the dimensionality reduction step that occurs inside bottleneck residual blocks. In contrast, for ResNet18 and ResNet34 which use basic residual blocks, we found that normalization was beneficial, but that the additional degrees of freedom provided by shift and scale parameters did not improve the accuracy. We believe that these results for BatchNorm are new and novel, and provide additional insight into the technique. From a practical perspective, our results indicate that BatchNorm should be used with ResNet architectures that employ bottleneck blocks. However, we also found that a simpler and slightly more efficient technique, BatchNorm-minus, can perform as well as BatchNorm on ResNet architectures that use basic residual blocks. When appropriate, the use of the simpler BatchNorm-minus normalization could allow for smaller batch sizes without sacrificing the speed of convergence. For future work, it would be interesting to develop more customized optimizers for the two types of residual blocks, namely, basic blocks and bottleneck blocks. Given that default implementations of ResNet come with BatchNorm built in, it is possible that there is some reduction in performance caused by assuming that it is optimal for all types of residual blocks. It would also be interesting to consider classifiers where BatchNorm is commonly used that do not rely on residual blocks, and perform similar experiments as presented in this paper.
2310.12920
Generative Marginalization Models
We introduce marginalization models (MAMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling by explicitly modeling all induced marginal distributions. Marginalization models enable fast approximation of arbitrary marginal probabilities with a single forward pass of the neural network, which overcomes a major limitation of arbitrary marginal inference models, such as any-order autoregressive models. MAMs also address the scalability bottleneck encountered in training any-order generative models for high-dimensional problems under the context of energy-based training, where the goal is to match the learned distribution to a given desired probability (specified by an unnormalized log-probability function such as energy or reward function). We propose scalable methods for learning the marginals, grounded in the concept of "marginalization self-consistency". We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including images, text, physical systems, and molecules, for maximum likelihood and energy-based training settings. MAMs achieve orders of magnitude speedup in evaluating the marginal probabilities on both settings. For energy-based training tasks, MAMs enable any-order generative modeling of high-dimensional problems beyond the scale of previous methods. Code is available at https://github.com/PrincetonLIPS/MaM.
Sulin Liu, Peter J. Ramadge, Ryan P. Adams
2023-10-19T17:14:29Z
http://arxiv.org/abs/2310.12920v2
# Generative Marginalization Models ###### Abstract We introduce _marginalization models_ (MaMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling with tractable likelihoods by explicitly modeling all induced marginal distributions. Marginalization models enable fast evaluation of arbitrary marginal probabilities with a single forward pass of the neural network, which overcomes a major limitation of methods with exact marginal inference, such as autoregressive models (ARMs). We propose scalable methods for learning the marginals, grounded in the concept of "_marginalization self-consistency_". Unlike previous methods, MaMs also support scalable training of any-order generative models for high-dimensional problems under the setting of _energy-based training_, where the goal is to match the learned distribution to a given desired probability (specified by an unnormalized (log) probability function such as energy or reward function). We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including binary images, language, physical systems, and molecules, for _maximum likelihood_ and _energy-based training_ settings. MaMs achieve orders of magnitude speedup in evaluating the marginal probabilities on both settings. For energy-based training tasks, MaMs enable any-order generative modeling of high-dimensional problems beyond the capability of previous methods. Code is at [https://github.com/PrincetonLIPS/MaM](https://github.com/PrincetonLIPS/MaM). ## 1 Introduction Deep generative models have enabled remarkable progress across diverse fields, including image generation, audio synthesis, natural language modeling, and scientific discovery. However, there remains a pressing need to better support efficient probabilistic inference for key questions involving marginal probabilities \(p(\mathbf{x}_{\mathcal{S}})\) and conditional probabilities \(p(\mathbf{x}_{t}|\mathbf{x}_{\mathcal{V}})\), for appropriate subsets \(\mathcal{S},\mathcal{U},\mathcal{V}\) of the variables. The ability to directly address such quantities is critical in applications such as outlier detection [50, 40], masked language modeling [11, 73], image inpainting [74], and constrained protein/molecule design [69, 55]. Furthermore, the capacity to conduct such inferences for arbitrary subsets of variables empowers users to leverage the model according to their specific needs and preferences. For instance, in protein design, scientists may want to manually guide the generation of a protein from a user-defined substructure under a particular path over the relevant variables. This requires the generative model to perform arbitrary marginal inferences. Towards this end, neural autoregressive models (ARMs) [3, 30] have been developed to facilitate conditional/marginal inference based on the idea of modeling a high-dimensional joint distribution as a factorization of univariate conditionals using the chain rule of probability. Many efforts have been made to scale up ARMs and enable any-order generative modeling under the setting of maximum likelihood estimation (MLE) [30, 66, 20], and great progress has been made in applications such as masked language modeling [73] and image inpainting [20]. 
However, marginal likelihood evaluation in the most widely-used modern neural network architectures (e.g., Transformers [68] and U-Nets [53]) is limited by \(\mathcal{O}(D)\) neural network passes, where \(D\) is the length of the sequence. This scaling makes it difficult to evaluate likelihoods on long sequences arising in data such as natural language and proteins. In contrast to MLE, in the setting of _energy-based training_ (EB), instead of empirical data samples, we only have access to an unnormalized (log) probability function (specified by a reward or energy function) that can be evaluated pointwise for the generative model to match. In such settings, ARMs are limited to fixed-order generative modeling and lack scalability in training. The subsampling techniques developed to scale the training of conditionals for MLE are no longer applicable when matching log probabilities in energy-based training (see Section 4.3 for details). To enhance scalability and flexibility in the generative modeling of discrete data, we propose a new family of generative models, **marginalization models** (MoMs), that directly model the marginal distribution \(p(\mathbf{x}_{S})\) for any subset of variables \(\mathbf{x}_{S}\) in \(\mathbf{x}\). Direct access to marginals has two important advantages: 1) _significantly speeding up inference for any marginal_, and 2) _enabling scalable training of any-order generative models under both MLE and EB settings_. The unique structure of the model allows it to simultaneously represent the coupled collection of all marginal distributions of a given discrete joint probability mass function. For the model to be valid, it must be consistent with the sum rule of probability, a condition we refer to as "_marginalization self-consistency_" (see Figure 1); learning to enforce this with scalable training objectives is one of the key contributions of this work. We show that MoMs can be trained under both maximum likelihood and energy-based training settings with scalable learning objectives. We demonstrate the effectiveness of MoMs in both settings on a variety of discrete data distributions, including binary images, text, physical systems, and molecules. We empirically show that MoMs achieve orders of magnitude speedup in marginal likelihood evaluation. For energy-based training, MoMs are able to scale training of any-order generative models to high-dimensional problems that previous methods fail to achieve. ## 2 Background We first review two prevalent settings for training a generative model: _maximum likelihood estimation_ and _energy-based training_. Then we introduce autoregressive models. Maximum likelihood (MLE)Given a dataset \(\mathcal{D}=\{\mathbf{x}^{(i)}\}_{i=1}^{N}\) drawn i.i.d. from a data distribution \(p=p_{\text{data}}\), we aim to learn the distribution \(p_{\theta}(\mathbf{x})\) via maximum likelihood estimation: \[\max_{\theta}\ \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\left[\log p_{ \theta}(\mathbf{x})\right]\approx\max_{\theta}\ \frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}(\mathbf{x}^{(i)}) \tag{1}\] which is equivalent to minimizing the Kullback-Leibler divergence under the empirical distribution, i.e., minimizing \(D_{\text{KL}}(p_{\text{data}}(\mathbf{x})\|p_{\theta}(\mathbf{x}))\). This is the setting that is most commonly used in generation of images (e.g., diffusion models [59, 18, 60]) and language (e.g. GPT [49]) where we can easily draw observed data from the distribution. 
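In code, this estimator is simply the average negative log-likelihood of a minibatch; the following minimal sketch is illustrative, with `log_p_theta` standing in for whatever network evaluates \(\log p_{\theta}(\mathbf{x})\).

```python
import torch

def mle_loss(log_p_theta, batch):
    # Eq. (1): maximizing the average log-likelihood is the same as minimizing the
    # average negative log-likelihood (and, up to a constant, KL(p_data || p_theta)).
    return -torch.stack([log_p_theta(x) for x in batch]).mean()
```
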
Figure 1: Marginalization models (MoMs) enable estimation of any marginal probability with a neural network \(\theta\) that learns to “marginalize out” variables. The figure illustrates marginalization of a single variable on bit strings (representing molecules) with two alternatives (versus \(K\) in general) for clarity. The bars represent probability masses. Figure 2: Scalability of sequential discrete generative models. The y-axis unit is # of NN forward passes required. Energy-based training (EB)In other cases, data from the distribution are not always available. Instead, we have access to an unnormalized probability distribution \(f(\cdot)\) typically specified as \(f(\mathbf{x})=\exp(r(\mathbf{x})/\tau)\) where \(r(\mathbf{x})\) is an energy (or reward) function and \(\tau>0\) is a temperature parameter. In this setting, the objective is to match \(p_{\theta}(\mathbf{x})\) to \(f(\mathbf{x})/Z\), where \(Z\) is the normalization constant of \(f\). This can be done by minimizing the KL divergence [41, 72, 9], \[\min_{\theta}D_{\text{KL}}\left(p_{\theta}(\mathbf{x})\|f(\mathbf{x})/Z \right)=\mathbb{E}_{\mathbf{x}\sim p_{\theta}(\mathbf{x})}\left[\log p_{ \theta}\left(\mathbf{x}\right)-\log\left(f\left(\mathbf{x}\right)/Z\right) \right], \tag{2}\] The reward function \(r(\mathbf{x})\) can be defined either by human preferences or by the physical system from first principles. For example, (a) In aligning large language models, \(r(\mathbf{x})\) can represent human preferences [43, 42]; (b) In molecular/material design, it can specify the proximity of a sample's measured or calculated properties to some functional desiderata [2]; and (c) In modeling the thermodynamic equilibrium ensemble of physical systems, it is the (negative) energy function of a given state [41, 72, 9]. The training objective in Equation (2) can be optimized with Monte Carlo estimate of the gradient using the REINFORCE algorithm [71]. The learned generative model \(\theta\) allows us to efficiently generate samples approximately from the distribution of interest, which would otherwise be much more expensive via running MCMC with the energy function \(f(\cdot)\). Autoregressive modelsAutoregressive models (ARMs) [3, 30] model a complex high-dimensional distribution \(p(\mathbf{x})\) by factorizing it into univariate conditionals using the chain rule: \[\log p(\mathbf{x})=\sum\nolimits_{d=1}^{D}\log p\left(x_{d}\mid\mathbf{x}_{<d }\right), \tag{3}\] where \(\mathbf{x}_{<d}=\{x_{1},\dots,x_{d-1}\}\). ARMs generate examples by sequentially drawing \(x_{1}\) under \(p(x_{1}),\) then \(x_{2}\) under \(p(x_{2}|x_{1})\), and so on. The ARM approach has produced successful discrete-data neural models for natural language, proteins [58, 32, 36], and molecules [56, 15]. However, a key drawback of ARM is that evaluation of \(p(\mathbf{x})\) or \(p(\mathbf{x}_{s})\) requires \(\mathcal{O}(D)\) neural network passes, making it costly for problems with high dimensions. This hinders ARM scalability for marginal inference during test time. Furthermore, in energy-based training, taking the gradient of Equation (2) that matches \(p_{\theta}(\mathbf{x})\) to \(f(\mathbf{x})/Z\) requires \(\mathcal{O}(D)\) network passes per data sample. As a result, this significantly limits ARM's training scalability under the EB setting for high-dimensional problems. Any-order ARMs (AO-ARMs)Uria et al. [66] propose to learn the conditionals of ARMs for arbitrary orderings that include all permutations of \(\{1,\dots,D\}\). 
Under the MLE setting, the model \(\phi\) is trained by maximizing a lower-bound objective [66, 20] using an expectation under the uniform distribution of orderings. This objective allows scalable training of AO-ARMs, leveraging efficient parallel evaluation of multiple one-step conditionals for each token in one forward pass with architectures such as the U-Net [53] and Transformers [68]. However, modeling any-order conditionals alone presents training challenges in the EB setting. We discuss this issue in greater detail in Section 4.3.

## 3 Marginalization Models We propose _marginalization models_ (MaMs), a new type of generative model that enables scalable any-order generative modeling on high-dimensional problems as well as efficient marginal evaluation, for both maximum likelihood and energy-based training. The flexibility and scalability of marginalization models are enabled by the explicit modeling of the marginal distribution and by enforcing _marginalization self-consistency_. In this paper, we focus on generative modeling of discrete structures using vectors of discrete variables. The vector representation encompasses various real-world problems with discrete structures, including language sequence modeling, protein design, and molecules with string-based representations (e.g., SMILES [70] and SELFIES [29]). Moreover, vector representations are inherently applicable to any discrete problem, since it is feasible to encode any discrete object into a vector of discrete variables.

**Definition.** Let \(p(\mathbf{x})\) be a discrete probability distribution, where \(\mathbf{x}=(x_{1},\dots,x_{D})\) is a \(D\)-dimensional vector and each \(x_{d}\) takes \(K\) possible values, i.e. \(x_{d}\in\mathcal{X}\triangleq\{1,\dots,K\}\).

**Marginalization.** Let \(\mathbf{x}_{\mathcal{S}}\) be a subset of variables of \(\mathbf{x}\) and \(\mathbf{x}_{\mathcal{S}^{c}}\) be the complement set, i.e. \(\mathbf{x}_{\mathcal{S}}\subseteq\{x_{1},\dots,x_{D}\}\) and \(\mathbf{x}_{\mathcal{S}^{c}}=\{x_{1},\dots,x_{D}\}\setminus\mathbf{x}_{\mathcal{S}}\). The marginal of \(\mathbf{x}_{\mathcal{S}}\) is obtained by summing over all values of \(\mathbf{x}_{\mathcal{S}^{c}}\): \[p(\mathbf{x}_{\mathcal{S}})=\sum\nolimits_{\mathbf{x}_{\mathcal{S}^{c}}}p(\mathbf{x}_{\mathcal{S}},\mathbf{x}_{\mathcal{S}^{c}}) \tag{4}\] We refer to (4) as the "_marginalization self-consistency_" that any valid distribution should follow. The goal of a marginalization model \(\theta\) is to estimate the marginals \(p(\mathbf{x}_{\mathcal{S}})\) for any subset of variables \(\mathbf{x}_{\mathcal{S}}\) as closely as possible. To achieve this, we train a deep neural network \(p_{\theta}\) that minimizes the distance between \(p_{\theta}(\mathbf{x})\) and \(p(\mathbf{x})\) on the full joint distribution1 while enforcing the marginalization self-consistency. In other words, a MaM learns to perform marginal inference over arbitrary subsets of variables with a single forward pass.2 Footnote 1: An alternative is to consider minimizing the distance over some marginal distribution of interest if we only care about a specific marginal. Note this is impractical under the energy-based training setting, when the true marginal \(p(\mathbf{x}_{\mathcal{S}})\) is intractable to evaluate in general. Footnote 2: Estimating \(p(\mathbf{x})\) is a special case of marginal inference where there are no variables to be marginalized. 
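For intuition, the sum rule in Equation (4) can be checked by brute force on a toy joint distribution. The sketch below is purely illustrative: it enumerates all \(K^{D}\) configurations, which is exactly the exponential cost MaMs are designed to avoid, and the helper names are ours.

```python
import itertools
import numpy as np

D, K = 4, 2                                  # four binary variables
rng = np.random.default_rng(0)
p = rng.random([K] * D)
p /= p.sum()                                 # an arbitrary joint distribution p(x)

def marginal(assignment):
    """p(x_S) for assignment = {dim: value}, by summing out the remaining dims."""
    total = 0.0
    free = [d for d in range(D) if d not in assignment]
    for values in itertools.product(range(K), repeat=len(free)):
        x_free = dict(zip(free, values))
        x = tuple(assignment.get(d, x_free.get(d)) for d in range(D))
        total += p[x]
    return total

# Marginalization self-consistency: p(x_0=1) == sum_k p(x_0=1, x_1=k).
lhs = marginal({0: 1})
rhs = sum(marginal({0: 1, 1: k}) for k in range(K))
assert abs(lhs - rhs) < 1e-12
```
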
ParameterizationA marginalization model parameterized by a neural network \(\theta\) takes in \(\mathbf{x}_{\mathcal{S}}\) and outputs the marginal log probability \(f_{\theta}(\mathbf{x}_{\mathcal{S}})=\log p_{\theta}(\mathbf{x}_{\mathcal{S}})\). Note that for different subsets \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\), \(\mathbf{x}_{\mathcal{S}}\) and \(\mathbf{x}_{\mathcal{S}}^{\prime}\) lie in different vector spaces. To unify the vector space that is fed into the NN, we introduce an augmented vector space that additionally includes the "marginalized out" variables \(\mathbf{x}_{\mathcal{S}^{c}}\) for an input \(\mathbf{x}_{\mathcal{S}}\). By defining a special symbol "\(\Box\)" to denote the missing values of the "marginalized out" variables, the augmented vector representation is \(D\)-dimensional and is defined to be: \[\mathbf{x}_{\mathcal{S}}^{\text{aug}}(i)=\begin{cases}x_{i},&\text{if }i\in \mathcal{S}\\ \Box,&\text{otherwise}\end{cases}\] Now, the augmented vector representation \(\mathbf{x}_{\mathcal{S}}^{\text{aug}}\) of all possible \(\mathbf{x}_{\mathcal{S}}\)'s have the same dimension \(D\), and for any \(i\)-th dimension \(\mathbf{x}_{\mathcal{S}}^{\text{aug}}(i)\in\mathcal{X}^{\text{aug}}\triangleq \{1,\cdots,K,\Box\}\). For example, let \(D=4\) and \(\mathcal{X}=\{0,1\}\), for \(\mathbf{x}_{\mathcal{S}}=\{x_{1},x_{3}\}\) with \(x_{1}=0\) and \(x_{3}=1,\)\(\mathbf{x}_{\mathcal{S}}^{\text{aug}}=(0,\Box,1,\Box)\), and \(p(\mathbf{x}_{\mathcal{S}}^{\text{aug}})=\sum_{x_{2}}\sum_{x_{4}}p(0,x_{2},1,x _{4})\). From here onwards we will use \(\mathbf{x}_{\mathcal{S}}^{\text{aug}}\) and \(\mathbf{x}_{\mathcal{S}}\) interchangeably. SamplingWith the marginalization model, one can sample from the learned distribution by picking an arbitrary order and sampling one variable or multiple variables at a time. In this paper, we focus on the sampling procedure that generates one variable at a time. To get the conditionals at each step for generation, we can use the product rule of probability: \[p_{\theta}(x_{\sigma(d)}|\mathbf{x}_{\sigma(<d)})=\frac{p_{\theta}(\mathbf{x }_{\sigma(\leq d)})}{p_{\theta}(\mathbf{x}_{\sigma(<d)})}\,,\] However, the above sampling is not a valid conditional distribution if the following single-step marginalization consistency in (5) is not strictly enforced, \[p_{\theta}(\mathbf{x}_{\sigma(<d)})=\sum\nolimits_{\mathbf{x}_{\sigma(d)}}p_ {\theta}(\mathbf{x}_{\sigma(\leq d)}),\quad\forall\sigma\in S_{D},\mathbf{x} \in\{1,\cdots,K\}^{D},d\in[1:D]\,, \tag{5}\] since it might not sum up exactly to one. Hence we use following normalized conditional: \[p_{\theta}(x_{\sigma(d)}|\mathbf{x}_{\sigma(<d)})=\frac{p_{\theta}([\mathbf{x }_{\sigma(<d)},x_{\sigma(d)}])}{\sum_{x_{\sigma(d)}}p_{\theta}([\mathbf{x}_{ \sigma(<d)},x_{\sigma(d)}])}\,. \tag{6}\] Scalable learning of marginalization self-consistencyIn training, we impose the marginalization self-consistency by minimizing the _squared error_ of the constraints in (5) in log-space. Evaluation of each marginalization constraint in (5) requires \(K\) NN forward passes, where \(K\) is the number of discrete values \(x_{d}\) can take. This makes mini-batch training challenging to scale when \(K\) is large. To address this issue, we augment the marginalization models with learnable conditionals parameterized by another neural network \(\phi\). The marginalization constraints in (5) can be further decomposed into \(K\) parallel marginalization constraints3. 
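To make the preceding procedures concrete, the sketch below shows (i) any-order sampling with the normalized conditional of Equation (6) and (ii) the squared log-space error of one randomly sampled constraint of the kind just described (Eq. (7) below). Here `log_marginal` and `log_conditional` are stand-ins for the networks \(\theta\) and \(\phi\); the masking convention (a reserved integer for \(\Box\)) and all helper names are illustrative rather than taken from our implementation.

```python
import torch

MASK = -1  # reserved integer encoding the "marginalized out" symbol

def mask_except(x, visible):
    """Augmented representation: keep the entries indexed by `visible`, mask the rest."""
    out = torch.full_like(x, MASK)
    out[visible] = x[visible]
    return out

def sample_any_order(log_marginal, order, D, K):
    """Generate one variable at a time along `order`, using Eq. (6)."""
    x = torch.full((D,), MASK, dtype=torch.long)
    for d in order:
        logps = []
        for k in range(K):                       # one forward pass per candidate value
            x_try = x.clone()
            x_try[d] = k
            logps.append(log_marginal(x_try))
        probs = torch.softmax(torch.stack(logps), dim=0)   # normalization in Eq. (6)
        x[d] = int(torch.multinomial(probs, 1))
    return x

def one_step_consistency_loss(log_marginal, log_conditional, x):
    """Squared error of a single sampled marginalization constraint."""
    D = x.shape[0]
    sigma = torch.randperm(D)                    # random ordering
    d = int(torch.randint(1, D + 1, (1,)))       # random prefix length
    x_lt = mask_except(x, sigma[:d - 1])         # x_{sigma(<d)} visible
    x_le = mask_except(x, sigma[:d])             # x_{sigma(<=d)} visible
    lhs = log_marginal(x_lt) + log_conditional(x_lt, sigma[d - 1], x[sigma[d - 1]])
    return (lhs - log_marginal(x_le)) ** 2
```
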
Footnote 3: To make sure \(p_{\theta}\) is normalized, we can either additionally enforce \(p_{\theta}\left((\Box,\cdots,\Box)\right)=1\) or let \(Z_{\theta}=p_{\theta}\left((\Box,\cdots,\Box)\right)\) be the normalization constant. \[p_{\theta}(\mathbf{x}_{\sigma(<d)})p_{\phi}(\mathbf{x}_{\sigma(d)}|\mathbf{x}_{ \sigma(<d)})=p_{\theta}(\mathbf{x}_{\sigma(\leq d)}),\qquad\forall\sigma\in S_{D },\mathbf{x}\in\{1,\cdots,K\}^{D},d\in[1:D]. \tag{7}\] By breaking the original marginalization self-consistency in Equation (4) into highly parallel marginalization self-consistency in Equation (7), we have arrived at \(\mathcal{O}(K^{D}.D.D!)\) constraints. Although the number of constraints increases, it becomes _highly scalable_ to train on the marginalization self-consistency via sampling the constraints. During training, we specify a distribution \(q(\mathbf{x})\) for sampling the marginalization constraints. In practice, it can be set to the distribution of interest to perform marginal inference on, such as \(p_{\text{data}}\) or the distribution of the generative model \(p_{\theta,\phi}\). In empirical experiments, we found that training with objectives that decompose to highly parallel self-consistency errors is a key ingredient to learning marginals with scalability. ## 4 Training the Marginalization Models ### Maximum Likelihood Estimation Training In this setting, we train MaMs with the maximum likelihood objective while additionally enforcing the marginalization constraints in Equation (5): \[\max_{\theta,\phi} \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\log p_{\theta}( \mathbf{x})\] (8) s.t. \[p_{\theta}(\mathbf{x}_{\sigma(<d)})p_{\phi}(\mathbf{x}_{\sigma( d)}|\mathbf{x}_{\sigma(<d)})=p_{\theta}(\mathbf{x}_{\sigma(\leq d)}),\ \forall\sigma\in S_{D},\mathbf{x}\in\{1,\cdots,K\}^{D},d\in[1:D].\] Two-stage trainingA typical way to solve the above optimization problem is to convert the constraints into a penalty term and optimize the penalized objective, but we empirically found the learning to be slow and unstable. Instead, we identify an alternative two-stage optimization formulation that is theoretically equivalent to Equation (8), but leads to more efficient training: **Proposition 1**.: _Solving the optimization problem in (8) is equivalent to the following two-stage optimization procedure, under mild assumption about the neural networks used being universal approximators:_ **Stage 1:**: \[\max_{\phi}\ \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\mathbb{E}_{ \sigma\sim\mathcal{U}(S_{D})}\sum\nolimits_{d=1}^{D}\log p_{\phi}\left(x_{ \sigma(d)}\mid\mathbf{x}_{\sigma(<d)}\right)\] **Stage 2:**: \[\min_{\theta}\ \mathbb{E}_{\mathbf{x}\sim q(\mathbf{x})}\mathbb{E}_{ \sigma\sim\mathcal{U}(S_{D})}\mathbb{E}_{d\sim\mathcal{U}(1,\cdots,D)}\left( \log[p_{\theta}(\mathbf{x}_{\sigma(<d)})p_{\phi}(\mathbf{x}_{\sigma(d)}| \mathbf{x}_{\sigma(<d)})]-\log p_{\theta}(\mathbf{x}_{\sigma(\leq d)})\right)^ {2}.\] The first stage can be interpreted as _fitting the conditionals_ in the same way as AO-ARMs [66, 20] and the second stage acts as _distilling the marginals_ from conditionals. The intuition comes from the chain rule of probability: there is a one-to-one correspondence between optimal conditionals \(\phi\) and marginals \(\theta\), i.e. \(\log p_{\theta}(\mathbf{x})=\sum_{d=1}^{D}\log p_{\phi}\left(x_{\sigma(d)}| \mathbf{x}_{\sigma(<d)}\right)\) for any \(\sigma\) and \(\mathbf{x}\). 
By assuming neural networks are universal approximators, we can first optimize for the optimal conditionals, and then optimize for the corresponding optimal marginals. We provide proof details in Appendix A.1. ### Energy-based Training In this setting, we train MaMs using the energy-based training objective in Equation (2) with a penalty term to enforce the marginalization constraints in Equation (5): \[\min_{\theta,\phi}D_{\text{KL}}(p_{\theta}\left(\mathbf{x}\right)\|p( \mathbf{x}))+\lambda\,\mathbb{E}_{\mathbf{x}\sim q(\mathbf{x})}\mathbb{E}_{ \sigma}\mathbb{E}_{d}\big{(}\log\big{[}p_{\theta}\big{(}\mathbf{x}_{\sigma(<d) }\big{)}\,p_{\theta}\big{(}\mathbf{x}_{\sigma(d)}|\mathbf{x}_{\sigma(<d)}\big{)} \big{]}-\log p_{\theta}\big{(}\mathbf{x}_{\sigma(\leq d)}\big{)}^{2},\] where \(\sigma\sim\mathcal{U}(S_{D})\), \(d\sim\mathcal{U}(1,\cdots,D)\) and \(q(\mathbf{x})\) is the distribution of interest for evaluating marginals. Scalable trainingWe use REINFORCE [71] to estimate the gradient of the KL divergence term: \[\nabla_{\theta}D_{\text{KL}}(p_{\theta}(\mathbf{x})||p(\mathbf{x })) =\mathbb{E}_{\mathbf{x}\sim p_{\theta}(\mathbf{x})}\left[\nabla_{ \theta}\log p_{\theta}\left(\mathbf{x}\right)\left(\log p_{\theta}\left( \mathbf{x}\right)-\log f\left(\mathbf{x}\right)\right)\right]\] \[\approx\nicefrac{{1}}{{N}}\sum_{i=1}^{N}\nabla_{\theta}\log p_{ \theta}(\mathbf{x}^{(i)})\left(\log p_{\theta}(\mathbf{x}^{(i)})-\log f( \mathbf{x}^{(i)})\right) \tag{9}\] For the self-consistency penalty term, we sample data \(\mathbf{x}\) from the specified data distribution of interest and sample the ordering \(\sigma\), step \(d\) from uniform distributions. Efficient sampling with persistent MCMCWe need cheap and effective samples from \(p_{\theta}\) in order to perform REINFORCE, so a persistent set of Markov chains are maintained by randomly picking an ordering and taking block Gibbs sampling steps using the conditional distribution \(p_{\phi}(\mathbf{x}_{\sigma(d)}|\mathbf{x}_{\sigma(<d)})\) (full algorithm in Appendix A.4), in similar fashion to persistent contrastive divergence [64]. The samples from the conditional network \(p_{\phi}\) serve as approximate samples from the marginal network \(p_{\theta}\) when they are close to each other. Otherwise, we can additionally use importance sampling to get an unbiased estimate. ### Addressing limitations of ARMs We discuss in more detail about how MaMs address some limitations of ARMs. The first one is general to both training settings, while the latter two are specific to energy-based training. 1) **Slow marginal inference of likelihoods** Due to sequential conditional modeling, evaluation of a marginal \(p_{\phi}(\mathbf{x}_{\mathbf{o}})\) with ARMs (or an arbitrary marginal with AO-ARMs) requires applying the NN \(\phi\) up to \(D\) times, which is inefficient in time and memory for high-dimensional data. In comparison, MaMs are able to estimate any arbitrary marginal with one NN forward pass. 2) **Lack of support for any-order training** In energy-based training, the objective in Equation (2) aims to minimize the distance between \(\log p_{\phi}(\mathbf{x})\) and \(\log p(x)\), where \(\phi\) is the NN parameters of an ARM. However, unless the ARM is perfectly self-consistent over all orderings, it will not be the case that \(\log p_{\phi}(\mathbf{x})=\mathbb{E}_{\sigma}\log p_{\phi}(\mathbf{x}|\sigma)\). 
Therefore, the expected \(D_{\text{KL}}\) objective over the orderings \(\sigma\) would not be equivalent to the original \(D_{\text{KL}}\) objective, i.e., \(\mathbb{E}_{\sigma}\,\mathbb{E}_{p_{\phi}(\cdot|\sigma)}D_{\text{KL}}(p_{\phi }(\cdot|\sigma)\|p)\neq\mathbb{E}_{p_{\phi}}D_{\text{KL}}(p_{\phi}\|p)\). As a result, ARMs cannot be trained with the expected \(D_{\text{KL}}\) objective over all orderings simultaneously, but instead need to resort to a preset order and minimize the KL divergence between \(\log p_{\phi}(\mathbf{x}|\sigma)\) and the target density \(\log p(\mathbf{x})\). The self-consistency constraints imposed by MaMs address this issue. MaMs are not limited to fixed ordering because marginals are order-agnostic and we can optimize over expectation of orderings for the marginalization self-consistency constraints. 3) **Training not scalable on high-dimensional problems** When minimizing the difference between \(\log p_{\phi}(\mathbf{x}|\sigma)\) and the target \(\log p(\mathbf{x})\), ARMs need to sum conditionals to evaluate \(\log p_{\phi}(\mathbf{x}|\sigma)\). One might consider subsampling one-step conditionals \(p_{\phi}(x_{\sigma(d)}|\mathbf{x}_{\sigma(<d)}\) to estimate \(p_{\phi}(\mathbf{x})\), but this leads to high variance of the REINFORCE gradient in Equation (9) due to the product of the score function and distance terms, which are both high variance (We validate this in experiments, see Figure 3). Consequently, training ARMs for energy-based training necessitates a sequence of \(D\) conditional evaluations to compute the gradient of the objective function. This constraint leads to an effective batch size of \(B\times D\) for batch of \(B\) samples, significantly limiting the scalability of ARMs to high-dimensional problems. Furthermore, obtaining Monte Carlo samples from ARMs for the REINFORCE gradient estimator is slow when the dimension is high. Due to the fixed input ordering, this process requires \(D\) sequential sampling steps, making more cost-effective sampling approaches like persistent MCMC infeasible. Marginalization models circumvent this challenge by directly estimating the log-likelihood with the marginal neural network. Additionally, the support for any-order training enables efficient sampling through the utilization of persistent MCMC methods. ## 5 Related Work Autoregressive modelsDevelopments in deep learning have greatly advanced the performance of ARMs across different modalities, including images, audio, and text. Any-order (Order-agnostic) ARMs were first introduced in [66] by training with the any-order lower-bound objective for the maximum likelihood setting. Recent work, ARDM [20], demonstrates state-of-the-art performance for any-order discrete modeling of image/text/audio. Germain et al. [16] train an auto-encoder with masking that outputs the sequence of all one-step conditionals for a given ordering, but does not generate as well as methods [73, 20, 6] that predict one-step conditionals under the given masking. Douglas et al. [14] trains an AO-ARM as a proposal distribution and uses importance sampling to estimate arbitrary conditional probabilities in a DAG-structured Bayesian network, but with limited experiment validation on a synthetic dataset. Shih et al. [57] utilizes a modified training objective of ARMs for better marginal inference performance but loses any-order generation capability. 
In the Figure 3: Approximating \(\log p_{\phi}(\mathbf{x})\) with one-step conditional (ARM-MC) results in extremely high gradient variance during energy-based training. energy-based training setting, ARMs are applied to science problems [9, 72], but suffer in scaling to when \(D\) is large. MAMs and ARMs are compared in detail in Section 4.3. Arbitrary conditional/marginal modelsFor continuous data, VAEAC [25] and ACFlow [31] extends the idea of conditional variational encoder and normalizing flow to model arbitrary conditionals. ACE [62] improves the expressiveness of arbitrary conditional models through directly modeling the energy function, which reduces the constraints on parameterization but comes with the additional computation cost of to approximating the normalizing constant. Instead of using neural networks as function approximators, probabilistic circuits (PCs) [6, 45] offer tractable probabilistic models for both conditionals and marginals by building a computation graph with sum and product operations following specific structural constraints. Examples of PCs include Chow-Liu trees [7], arithmetic circuits [10], sum-product networks [47], etc. Peharz et al. [45] improved the scalability of PCs by combining arithmetic operations into a single monolithic esnum-operation and automatic differentiation. [33, 34] demonstrated the potential of PCs with distilling latent variables from trained deep generative models on continuous image data. However, expressiveness is still limited by the structural constraints. All methods mentioned above focus on MLE settings. GFlowNetsGFlowNets [2, 4] formulate the problem of generation as matching the probability flow at terminal states to the target normalized density. Compared to ARMs, GFlowNets allow flexible modeling of the generation process by assuming learnable generation paths through a directed acyclic graph (DAG). The advantages of learnable generation paths come with the trade-off of sacrificing the flexibility of any-order generation and exact likelihood evaluation. Under a fixed generation path, GFlowNets reduce to fixed-order ARMs [75]. In Appendix A.3, we further discuss the connections and differences between GFlowNets and AO-ARMs/MAMs. For discrete problems, Zhang et al. [76] train GFlowNets on the squared distance loss with the trajectory balance objective [38]. This is not scalable for large \(D\) (for the same reason as ARMs in Section 4.3) and renders direct access to marginals unavailable. In the MLE setting, an energy function is additionally learned from data so that the model can be trained with energy-based training. ## 6 Experiments We conduct experiments with marginalization models (MAM) on both MLE and EB settings for discrete problems including binary images, text, molecules and phyiscal systems. We consider the following baselines for comparison: Any-order ARM (AO-ARM) [20], ARM [30], GFlowNet [39, 76], Discrete Flow4[65] and Probabilistic Circuit (PC)5[45]. MAM, PC and (AO-)ARM support arbitrary marginal inference. Discrete flow allows exact likelihood evaluation while GFlowNet needs to approximate the likelihood with sum using importance samples. For evaluating AO-ARM's marginal inference, we can either use an ensemble model by averaging over several random orderings (AO-ARM-E) or use a single random ordering (AO-ARM-S). In general, AO-ARM-E should always be better than AO-ARM-S but at a much higher cost. Neural network architecture and training hyperparameter details can be found in Appendix B. 
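For reference, marginal inference with the (AO-)ARM baselines chains one-step conditionals, which is why it requires \(\mathcal{O}(D)\) network calls per query; the schematic sketch below uses illustrative names only, with `log_conditional(prefix, d, value)` standing in for the conditional network.

```python
import torch

def arm_log_marginal(log_conditional, x_S, orderings):
    """AO-ARM estimate of log p(x_S): sum one-step conditionals along an ordering of
    the observed variables (D sequential calls).  Passing several random orderings and
    averaging the resulting likelihoods corresponds to the ensemble estimate (AO-ARM-E);
    a single ordering corresponds to AO-ARM-S."""
    estimates = []
    for sigma in orderings:
        logp, prefix = torch.tensor(0.0), {}
        for d in sigma:                          # one forward pass per observed variable
            logp = logp + log_conditional(prefix, d, x_S[d])
            prefix[d] = x_S[d]
        estimates.append(logp)
    # Average the probabilities over orderings via log-mean-exp.
    n = torch.tensor(float(len(orderings)))
    return torch.logsumexp(torch.stack(estimates), dim=0) - torch.log(n)
```
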
Footnote 4: Results are only reported on text8 for discrete flow since there is no public code implementation. ### Maximum Likelihood Estimation Training Binary MNISTWe report the negative test likelihood (bits/digit), marginal estimate quality and marginal inference time per minibatch (of size \(16\)) in Table (1). To keep GPU memory usage the same, we sequentially evaluate the likelihood for ARMs. Both MaM and AO-ARM use a U-Net architecture with 4 ResNet Blocks interleaved with attention layers (see Appendix B). GFlowNets fail to scale to large architectures as U-Net, hence we report GFlowNet results using an MLP from Zhang et al. [76]. For MAM, we use the conditional network to evaluate test likelihood (since this is also how MAM generates data). The marginal network is used for evaluating marginal inference. The quality of the marginal estimates will be compared to the best performing model. In order to evaluate the quality of marginal likelihood estimates, we employ a controlled experiment where we randomly mask out portions of a test image and generate multiple samples with varying levels of masking (refer to Figure 4). This process allows us to obtain a set of distinct yet comparable samples, each associated with a different likelihood value. For each model, we evaluate the likelihood of the generated samples and compare that with AO-ARM-E's estimate since it achieves the best likelihood on test data. We repeat this controlled experiment on a random set of test images. The mean Spearman's and Pearson correlation are reported to measure the strength of correlation in marginal inference likelihoods between the given model and AO-ARM-E. MaM achieves close to \(4\)_order of magnitude speed-up_ in marginal inference while at _comparable quality_ to that from AO-ARM-S. PCs are also very fast in marginal inference but there remains a gap in terms of quality. Generated samples and additional marginal inference on partial images are in Appendix B. Molecular sets (MOSES)We test generative modeling of MaM on a benchmarking molecular dataset [46] refined from the ZINC database [61]. Same metrics are reported as Binary-MNIST. Likelihood quality is measured similarly but on random groups of test molecules instead of generated ones. The generated molecules from MaM and AO-ARM are comparable to standard state-of-the-art molecular generative models, such as CharRNN [56], JTN-VAE [26], and LatentGAN [48] (see Appendix B), with additional controllability and flexibility in any-order generation. MaM supports much faster marginal inference, which is useful for domain scientists to reason about likelihood of (sub)structures. Generated molecules and property histogram plots of are available in Appendix B. Text8Text8 [37] is a widely used character level natural language modeling dataset. The dataset comprises of 100M characters from Wikipedia, split into chunks of 250 character. We follow the same testing procedure as Binary-MNIST and report the same metrics. The test NLL of discrete flow is from [65], for which there are no open-source implementations to evaluate additional metrics. ### Energy-based training We compare with ARM that uses sum of conditionals to evaluate \(\log p_{\phi}\) with fixed forward ordering and ARM-MC that uses a one-step conditional to estimate \(\log p_{\phi}\). ARM can be regarded as the golden standard of learning autoregressive conditionals, since its gradient needs to be evaluated on the full generation trajectory, which is the most informative and costly. 
MaM uses marginal network to evaluate \(\log p_{\theta}\) and subsamples a one-step marginalization constraint for each data point in the batch. The effective batch size for ARM and GFlowNet is \(B\times\mathcal{O}(D)\) for batch of size \(B\), and \(B\times\mathcal{O}(1)\) for ARM-MC and MaM. MaM and ARM optimizes KL divergence using REINFORCE gradient estimator with baseline. GFlowNet is trained on per-sample gradient of squared distance [76]. Ising modelIsing models [24] model interacting spins and are widely studied in mathematics and physics (see MacKay [35]). We study Ising model on a square lattice. The spins of the \(D\) sites are represented a \(D\)-dimensional binary vector and its distribution is \(p^{*}(\mathbf{x})\propto f^{*}(\mathbf{x})=\exp\left(-\mathcal{E}_{J}(\mathbf{ x})\right)\) \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **NLL (bpd)** \(\downarrow\) & **Spearman’s**\(\uparrow\) & **Pearson**\(\uparrow\) & **Marg. inf. time (s)**\(\downarrow\) \\ \hline AO-ARM-E-U-Net & **0.148** & **1.0** & **1.0** & 661.98 \(\pm\) 0.49 \\ AO-ARM-S-U-Net & 0.149 & 0.996 & 0.993 & 132.40 \(\pm\) 0.03 \\ GflowNet-MLP & 0.189 & \(-\) & \(-\) & \(-\) \\ PC-Image (iNets) & 0.187 & 0.716 & 0.752 & **0.015 \(\pm\) 0.00** \\ MAM-U-Net & 0.149 & 0.992 & 0.993 & 0.018 \(\pm\) 0.00 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance Comparison on Binary-MNIST Figure 4: An example of the data generated (with \(100/400/700\) pixels masked) for comparing the quality of likelihood estimate. Numbers below the images are LL estimates from MAM’s marginal network (left) and AO-ARM-E’s ensemble estimate (right). where \(\mathcal{E}_{\mathbf{J}}(\mathbf{x})\triangleq-\mathbf{x}^{\top}\mathbf{J}\mathbf{x}- \boldsymbol{\theta}^{\top}\mathbf{x}\), with \(\mathbf{J}\) the binary adjacency matrix. These models, although simplistic, bear analogies to the complex behavior of high-entropy alloys [9]. We compare MAM with ARM, ARM-MC, and GFlowNet on a \(10\times 10\) (\(D\!=\!100\)) and a larger \(30\times 30\) (\(D\!=\!900\)) Ising model where ARMs and GFlowNets fail to scale. \(2000\) ground truth samples are generated following Grathwohl et al. [17] and we measure test negative log-likelihood on those samples. We also measure \(D_{\text{KL}}(p_{\theta}(\mathbf{x})||p^{*})\) by sampling from the learned model and evaluating \(\sum_{i=1}^{M}(\log p_{\theta}(\mathbf{x}_{i})-\log f^{*}(\mathbf{x}_{i}))\). Figure 5 contains KDE plots of \(-\mathcal{E}_{\mathbf{J}}(\mathbf{x})\) for the generated samples. As described in Section 4.3, the ARM-MC gradient suffers from high variance and fails to converge. It also tends to collapse and converge to a single sample. MAM has significant speedup in marginal inference and is the only model that supports any-order generative modeling. The performance in terms of KL divergence and likelihood are only slightly worse than models with fixed/learned order, which is expected since any-order modeling is harder than fixed-order modeling, and MAM is solving a more complicated task of jointly learning conditionals and marginals. On a \(30\times 30\) (\(D=900\)) Ising model, MAM achieves a bpd of \(0.835\) on ground-truth samples while ARM and GFlowNet fails to scale. Distribution of generated samples is shown in Figure 5. performs just marginally below ARM. However, only MAM supports any-order modeling and scales to high-dimensional problems. Figure 6 (right) shows molecular generation with MAM for \(D=500\). 
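To make the energy-based training setup of these experiments concrete, the sketch below writes out the unnormalized log-probability of the Ising model and a REINFORCE surrogate loss with a mean baseline, corresponding to Eq. (9). The function names are illustrative, the spins are encoded as \(\pm 1\) (one common convention), and details of the actual training loop (persistent MCMC sampling, the self-consistency penalty term, importance weighting) are omitted.

```python
import torch

def log_f_ising(x, J, theta):
    # log f*(x) = -E_J(x) = x^T J x + theta^T x, with spin vector x in {-1, +1}^D.
    return x @ (J @ x) + theta @ x

def reinforce_kl_loss(log_p_theta, log_f, samples):
    """Surrogate loss whose gradient is the Monte Carlo REINFORCE estimate of
    grad KL(p_theta || f/Z) in Eq. (9), with a mean baseline for variance reduction."""
    logp = torch.stack([log_p_theta(x) for x in samples])
    with torch.no_grad():
        gap = logp - torch.stack([log_f(x) for x in samples])   # log p_theta - log f
        gap = gap - gap.mean()                                   # baseline subtraction
    return (gap * logp).mean()   # call .backward() to obtain the REINFORCE gradient
```
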
## 7 Conclusion In conclusion, marginalization models are a novel family of generative models for high-dimensional discrete data that offer scalable and flexible generative modeling with tractable likelihoods. These models explicitly model all induced marginal distributions, allowing for fast evaluation of arbitrary marginal probabilities with a single forward pass of the neural network. MAMs also support scalable training objectives for any-order generative modeling, which previous methods struggle to achieve under the energy-based training setting. Potential future work includes designing new neural network architectures that automatically satisfy the marginalization self-consistency. Figure 5: Ising model: \(2000\) samples are generated for each method. Figure 6: Target property matching: \(2000\) samples are generated for each method. Figure 7: Conditionally generate towards low lipophilicity from a user-defined substructure in any given order Left: Masking out the left 4 SELFIES characters. Right: Masking the right 4-20 SELFIES characters. #### Acknowledgments We thank members of the Princeton Laboratory for Intelligent Probabilistic Systems and anonymous reviewers for valuable discussions and feedback. We also want to thank Andrew Novick and Eric Toberer for valuable discussions on energy-based training in scientific applications. This work is supported in part by NSF grants IIS-2007278 and OAC-2118201.
2304.10070
SBFT Tool Competition 2023 -- Fuzzing Track
This report outlines the objectives, methodology, challenges, and results of the first Fuzzing Competition held at SBFT 2023. The competition utilized FuzzBench to assess the code-coverage performance and bug-finding efficacy of eight participating fuzzers over 23 hours. The competition was organized in three phases. In the first phase, participants were asked to integrate their fuzzers into FuzzBench and allowed them to privately run local experiments against the publicly available benchmarks. In the second phase, we publicly ran all submitted fuzzers on the publicly available benchmarks and allowed participants to fix any remaining bugs in their fuzzers. In the third phase, we publicly ran all submitted fuzzers plus three widely-used baseline fuzzers on a hidden set and the publicly available set of benchmark programs to establish the final results.
Dongge Liu, Jonathan Metzman, Marcel Böhme, Oliver Chang, Abhishek Arya
2023-04-20T03:41:01Z
http://arxiv.org/abs/2304.10070v2
# SBFT Tool Competition 2023 - Fuzzing Track ###### Abstract This report outlines the objectives, methodology, challenges, and results of the first Fuzzing Competition held at SBFT 2023. The competition utilized FuzzBench to assess the code-coverage performance and bug-finding efficacy of eight participating fuzzers over 23 hours. The competition was organized in three phases. In the first phase, participants were asked to integrate their fuzzers into FuzzBench and allowed them to privately run local experiments against the publicly available benchmarks. In the second phase, we publicly ran all submitted fuzzers on the publicly available benchmarks and allowed participants to fix any remaining bugs in their fuzzers. In the third phase, we publicly ran all submitted fuzzers plus three widely-used baseline fuzzers on a hidden set and the publicly available set of benchmark programs to establish the final results. fuzzing, evaluation, open-source. ## I Introduction We report on the organization of the first fuzzing competition at the 16th International Workshop on Search-Based and Fuzz Testing (SBFT) held on the 14th of May 2023 in Melbourne, Australia. The objectives of this competition were (i) to evaluate the performance of the fuzzers submitted to this competition in terms of coverage and bug finding ability, (ii) to gather experience and feedback on the sound benchmarking of fuzzing tools, and (iii) to stress test the FuzzBench benchmarking platform which has been built particularly for this purpose. Throughout the competition we paid particular attention to the mitigation of different forms of bias. For instance, in order to avoid overfitting to a particular set of benchmarks (confirmation bias), we allowed participants to develop, integrate, and evaluate their fuzzers privately on a publically available set of benchmarks while conducting the actual competition on a set of benchmarks that included a large number of hidden benchmarks. In order to avoid survivorship bias, we do _not_ evaluate their bug finding ability on a _given_ set of bugs that we already know how to find. Instead, we evaluate their bug finding ability in terms bugs found by any fuzzer. We make sure to use the same AddressSanitizer (ASAN) instrumented binaries across all fuzzers. In summary, we found that the AFLRrustrust fuzzer performed well in terms of both, the coverage achieved and bugs found. The fuzzers libAFLlibFuzzer, HasteFuzz, and AFLL+++ excelled on coverage-based benchmarks, while Pasts and AFLSmart++ found more bugs than the average fuzzer. We present the final ranking and more concrete results live at the tool competition. ## II FuzzBench: Fuzzer Benchmarking Platform FuzzBench[1] is a free, open source fuzzer benchmarking service built to make fuzzer benchmarking easy and rigorous. It allows researchers, who are interested in evaluating their fuzzers against other state-of-the-art fuzzers, to launch large-scale experiments in a free and reproducible manner. The FuzzBench infrastructure consists of a large number of publicly available benchmark programs taken from OSS-Fuzz1. The benchmark programs are open source C/C++ programs carefully integrated by their maintainers, and include programs like Curl2, OpenSSL3, PHP4, and systemd5. Because the source code for most FuzzBench experiments is made public and the specific FuzzBench version can be pinned, reproducing FuzzBench experiments is often much easier than reproducing bespoke experiments used in other research. 
Footnote 1: [https://google.github.io/oss-fuzz/](https://google.github.io/oss-fuzz/) Footnote 2: [https://github.com/curl/curl](https://github.com/curl/curl) Footnote 3: [https://github.com/openssl/openssl](https://github.com/openssl/openssl) Footnote 4: [https://github.com/php/php-src](https://github.com/php/php-src) Footnote 5: [https://github.com/systemd/systemd](https://github.com/systemd/systemd) FuzzBench can conduct bug-based or code coverage-based experiments [2]. Throughout the course of an experiment, and upon its completion, FuzzBench generates a report detailing the performance of each fuzzer. The report compares fuzzers based on their performance across all benchmarks as well as on individual benchmarks and shows effect size (Vargha-Delaney \(\hat{A}_{12}\)) and statistical significance (Mann-Whitney \(U\) test). The comparison across all benchmarks contains two rankings, one based on their average rank on each individual benchmark and one based on their performance relative to the best performing fuzzer on each individual benchmark. FuzzBench reports include a critical difference diagram so that users can see if differences between fuzzers based on average rank are statistically significant. The report's comparison on individual benchmarks consists of graphs and data showing the number of crashes found and the growth of code coverage throughout the experiment. To request an experiment, the interested researcher submits a pull request to the Github repository where the fuzzer is integrated or privately emails [email protected]. A typical experiment in FuzzBench involves about 20 trials of 10 fuzzers running on 20 benchmarks for 23 hours. This is about 10 CPU-years, which is cost prohibitive for most researchers. Researchers can use FuzzBench by integrating with a simple Python and Docker based API. This integration is usually less than 100 lines of code. FuzzBench has had an enormous impact on fuzzer development and research. Over 900 experiments have been conducted using the FuzzBench service. FuzzBench has been discussed in over 100 academic papers. FuzzBench has also been used to guide the development of popular fuzzers such as AFL++, Honggfuzz, and libFuzzer. FuzzBench experiments have most of the desirable qualities that Klees et al. [3] described most evaluations as lacking, including statistically sound comparisons and statistical tests, long timeouts, and real-world programs. ## III Competition Setup **Phases**. The competition was organized in three phases. In the first phase, participants were asked to integrate their fuzzers into FuzzBench and were allowed to privately run local experiments against the publicly available benchmarks. In the second phase, we publicly ran all submitted fuzzers on the publicly available benchmarks and allowed participants to fix any remaining bugs in their fuzzers. In the third phase, we publicly ran all submitted fuzzers plus three widely-used baseline fuzzers on a hidden set and the publicly available set of benchmark programs to establish the final results. **Performance metrics**. In our competition, we measure both the code coverage achieved and the bug-finding capacity to compare the performance of the submitted fuzzers [3, 4]. As the benchmarking platform, we use FuzzBench, which measures line coverage across all coverage-based benchmarks and the time it takes to generate the first crashing input across all bug-based benchmarks. 
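The effect-size and significance measures shown in FuzzBench reports can be reproduced for any pair of per-trial coverage samples along the following lines. This is only an illustrative sketch (it assumes SciPy is available and uses made-up coverage numbers), not the FuzzBench reporting code itself:

```python
from itertools import product
from scipy.stats import mannwhitneyu

def vargha_delaney_a12(x, y):
    """A12: probability that a random trial of fuzzer X beats one of fuzzer Y."""
    greater = sum(1 for a, b in product(x, y) if a > b)
    ties = sum(1 for a, b in product(x, y) if a == b)
    return (greater + 0.5 * ties) / (len(x) * len(y))

# Made-up final line coverage of two fuzzers over a few trials:
fuzzer_x = [1010, 1022, 1018, 1030, 1025]
fuzzer_y = [ 990, 1005, 1001,  998, 1012]
print(vargha_delaney_a12(fuzzer_x, fuzzer_y))                       # effect size in [0, 1]
print(mannwhitneyu(fuzzer_x, fuzzer_y, alternative='two-sided').pvalue)
```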
To facilitate a more intuitive comparison of fuzzer performance in both categories, we present a _relative median score_ for each fuzzer. We compute the coverage-based score for each fuzzer as follows. As it is impractical to determine the total number of reachable lines in each coverage-based benchmark \(bc\)[5], we compute the relative coverage score \(score(bc,f)\) for a fuzzer \(f\) by dividing the median value of its line coverage over \(20\) trials (i.e., \(\mathrm{cov}(bc,f,n)\) where \(n=1..20\)) by the maximum line coverage attained by all fuzzers \(F\) on that specific benchmark: \[score(bc,f)=\frac{\mathrm{cov}(bc,f)}{\max\limits_{i\in F}\max\limits_{n=1..20}\mathrm{cov}(bc,i,n)} \tag{1}\] \[\mathrm{cov}(bc,f)=\operatorname*{Med}_{n=1..20}\left(\mathrm{cov}(bc,f,n)\right) \tag{2}\] We compute the bug-based score for each fuzzer as follows. Many fuzzer-generated crashing inputs may expose the same bug, and the same bug may yield different stack traces [6, 7]. In order to circumvent the challenges of bug deduplication, we include only one reproducible bug in each benchmark and measure the time it takes to generate the first input that causes the benchmark binary to crash. Therefore, considering that each bug-based benchmark \(bb\) comprises only one bug, we calculate the relative score \(score(bb,f)\) of a fuzzer \(f\) using the following method: \[score(bb,f)=\operatorname*{Med}_{n=1..20}\left(\mathrm{bug}(bb,f,n)\right) \tag{3}\] \[\mathrm{bug}(bb,f,n)=\begin{cases}1&\text{if $f$ finds a bug in $bb$ in trial $n$}\\ 0&\text{otherwise}\end{cases} \tag{4}\] In instances where multiple fuzzers detect an equal number of bugs across all benchmarks, we additionally provide their average time required for bug discovery as an auxiliary metric. **Benchmarks.** The \(53\) benchmarks employed in this study were selected from a diverse range of real-world open-source projects integrated into OSS-Fuzz. This approach ensures that researchers can evaluate their fuzzers on the latest, popular, and actively maintained real-world open-source programs. Meanwhile, project maintainers can benefit from state-of-the-art fuzzers. To guarantee the reproducibility of fuzzer performance, each benchmark is anchored to a specific commit. In particular, the commit for each bug-based benchmark is carefully chosen such that the bug it contains has been fixed or published within one year. This approach prevents security vulnerability leakage while keeping the benchmarks up-to-date for research evaluation purposes. Benchmarks are divided into public and private sets. The _public benchmark set_, consisting of 5 bug-based and 24 coverage-based benchmarks, is made available to participants upon joining the competition so that they can identify build and runtime errors. In contrast, the _private benchmark set_, comprising 10 bug-based and 14 coverage-based benchmarks, is withheld until the final evaluation to mitigate overfitting. Preventing overfitting in fuzzing competitions is typically challenging since participants usually require access to the benchmark source code to identify and resolve compatibility issues. However, FuzzBench's design effectively addresses this issue by separating the benchmarks and fuzzers. This allows fuzzers to be built and run on private benchmarks using the same code that was tested on the public ones, contributing to a fair and impartial evaluation of fuzzer performance. 
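The relative median scores of Eqs. (1)-(4) can be computed from raw trial data roughly as follows. This is a minimal sketch with hypothetical dictionaries of per-trial results (the benchmark and fuzzer names are placeholders), not the actual FuzzBench scoring code:

```python
import statistics

def coverage_score(cov, benchmark, fuzzer):
    """Relative coverage score per Eqs. (1)-(2).

    cov[benchmark][fuzzer] is assumed to hold one line-coverage value per trial.
    """
    med = statistics.median(cov[benchmark][fuzzer])
    best = max(max(trials) for trials in cov[benchmark].values())
    return med / best

def bug_score(bug, benchmark, fuzzer):
    """Relative bug score per Eqs. (3)-(4): median of the 0/1 trial outcomes."""
    return statistics.median(bug[benchmark][fuzzer])

# Example with made-up numbers for two fuzzers on one benchmark each:
cov = {"curl":   {"fuzzerA": [1000, 1100, 1050], "fuzzerB": [900, 950, 980]}}
bug = {"assimp": {"fuzzerA": [1, 1, 0],          "fuzzerB": [0, 0, 1]}}
print(coverage_score(cov, "curl", "fuzzerA"))   # median 1050 divided by best 1100
print(bug_score(bug, "assimp", "fuzzerA"))      # 1 (bug found in the median trial)
```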
**Fuzzers.** The competition evaluates a total of 12 fuzzers, including 8 fuzzers submitted by participants and 4 fuzzers used as baselines. The participant-submitted fuzzers are AFL++, AFLrustrust, AFLSmart++, HasteFuzz, LearnPerfFuzz, libAFLlibFuzzer, Pastis, and Symsan. The four baseline fuzzers encompass AFL, AFL++, Honggfuzz, and libFuzzer. We selected AFL and AFL++ as baselines, as most participants extended them to construct their own. The fuzzers Honggfuzz and libFuzzer were chosen due to their contribution to the discovery of bugs in the bug-based benchmarks under the OSS-Fuzz production environment. ... patterns; AFLrustrust covered a comparatively rare bug in file but missed a more commonly found bug in ffmpeg. While half of the fuzzers found more than \(4\) bugs overall, the symbolic-based fuzzer Symsan discovered only \(1\) bug in assimp. Interestingly, libAFLlibFuzzer, which performed well across coverage-based benchmarks and found bugs in \(5\) benchmarks, was the only fuzzer that missed the bug in assimp. This result could be attributed to its relatively low coverage on this specific benchmark. We also examined the average time required for fuzzers to discover a bug. Pastis proved to be the fastest in detecting bugs on average, with AFLrustrust and AFLSmart++ following closely behind. Notably, the cosine similarity between AFL++ and AFL+++ exceeds \(0.98\), suggesting that they frequently identify bugs at approximately the same time. Likewise, the cosine similarity between Honggfuzz and Pastis surpasses \(0.9\), indicating a comparable speed in causing crashes within the benchmark. libAFLlibFuzzer appears to possess a distinct design, resulting in the lowest similarity score when compared to any other fuzzers. The bug-based benchmarks in this competition also underscore the "asymmetry" between coverage-based and bug-based rankings, as highlighted by Böhme et al. [4]. For instance, HasteFuzz excelled in coverage-based benchmarks yet discovered fewer bugs. Conversely, AFL identified more bugs than AFL++, despite covering less code. Although code coverage is a well-established and easily measurable benchmarking metric, these findings stress the significance of taking bug-finding capabilities into consideration when optimizing for higher coverage and evaluating fuzzers. Essentially, fuzzers are intended to detect bugs, with coverage serving as a heuristic to estimate their bug-finding potential. Bug-based benchmarking presents several challenges that we tackled in different ways. Firstly, acquiring the source code of real-world bugs is arduous, and the performance measured by artificial bugs might not accurately reflect reality. FuzzBench addresses this issue by using bugs filed by OSS-Fuzz when fuzzing actual open-source projects, providing a ground truth for bugs that had been, and need to be, discovered in production. Secondly, a systematic approach for selecting appropriate bug benchmarks for evaluation remains absent. For instance, if all fuzzers exhibit similar performance on certain benchmarks, those bugs offer limited value for fuzzer assessment. 
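The cosine-similarity comparison mentioned above can be reproduced from per-benchmark bug-discovery data roughly as follows. The vectors below are hypothetical per-benchmark discovery rates invented for illustration, not the competition's actual data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two bug-discovery vectors.

    Each vector holds one entry per bug-based benchmark, e.g. the fraction of
    trials in which the fuzzer crashed that benchmark (or a time-based score).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical per-benchmark discovery rates for three fuzzers:
fuzzer_a = [1.0, 0.8, 0.0, 0.6, 1.0]
fuzzer_b = [1.0, 0.9, 0.0, 0.5, 1.0]
fuzzer_c = [0.2, 0.0, 1.0, 0.0, 0.1]
print(cosine_similarity(fuzzer_a, fuzzer_b))   # close to 1: similar behaviour
print(cosine_similarity(fuzzer_a, fuzzer_c))   # much lower: distinct behaviour
```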
To mitigate this concern, we incorporated benchmarks that were hidden during development and only revealed during the final evaluation, culminating in nine benchmarks that demonstrate varying bug-discovery performances among fuzzers in this competition. Thirdly, determining the superior fuzzer becomes difficult when multiple fuzzers can discover the same bug. To address this, we employ an auxiliary metric, i.e., measuring the average time required by each fuzzer to discover a bug. While FuzzBench evaluates this metric at 15-minute intervals, which may occasionally compromise accuracy, we highlight that this potential risk does not unfairly benefit any specific fuzzer. Finally, ascertaining whether multiple crashes correspond to the same bug by grouping backtraces poses a considerable challenge. To tackle this issue, the competition restricts each benchmark to include only one known bug. Each associated open-source project is subjected to rigorous testing using multiple fuzzers over an extended period to minimize the likelihood of multiple reproducible bugs coexisting within a single benchmark. ## V Conclusion and Future Work In this competition, FuzzBench evaluates participant fuzzers and common baselines, comparing them using a variety of statistical tools. The assessment encompasses two key metrics: code coverage and bug-finding. Benchmarks for both metrics are derived from real-world open-source projects, and all fuzzers are tested under a uniform, production-like environment. Moving forward, FuzzBench aims to enhance the statistical analysis by providing more detailed information, particularly concerning lines or bugs that fuzzers failed to cover. Additionally, FuzzBench plans to incorporate a larger collection of bug-based benchmarks to facilitate more comprehensive statistical reasoning.
2306.10609
Realizations of the Extended Snyder Model
We present the exact realization of the extended Snyder model. Using similarity transformations, we construct realizations of the original Snyder and the extended Snyder models. Finally, we present the exact new realization of the $\kappa$-deformed extended Snyder model.
Tea Martinić Bilać, Stjepan Meljanac
2023-06-18T18:01:29Z
http://arxiv.org/abs/2306.10609v3
# Realizations of the extended Snyder model ###### Abstract We present the exact realization of the extended Snyder model. Using similarity transformations we construct realizations of the original Snyder and the extended Snyder models. Finally, we present the exact new realization of the \(\kappa\)-deformed extended Snyder model. ## 1 Introduction The first example of NC geometry was presented in [1]. A fundamental length scale can be identified in a natural way with the Planck length \(L_{p}=\sqrt{G\hbar/c^{3}}\approx 1.62\times 10^{-35}m\)[2]. The length scale enters the theory through commutators of spacetime coordinates in Refs. [3]-[4]. The coproduct and star product in the Snyder model were calculated in [5]-[6] using ideas from the development of NC geometry [7]. However, in the Snyder model, the algebra generated by position operators is not closed and the bialgebra resulting from the implementation of the coproduct is not a Hopf algebra. In particular, the coproduct is noncoassociative and the star product is nonassociative as well [5]. A closed Lie algebra can be obtained if one adds the generators of the Lorentz algebra [6] to the position generators. In this way one can define a Hopf algebra with a coassociative coproduct. If the Lorentz generators are added as extended coordinates, we call this algebra the extended Snyder algebra, and the theory based on it the extended Snyder model [8]. Some recent advances in the Snyder model are presented in Refs. [5], [6], [9] and [10]. The construction of field theory was addressed in [5], [6] and [11], and different applications to phenomenology were considered in [12]-[13]. Extensions to curved backgrounds were given in [14]-[21]. The Snyder model is defined as a Lie algebra generated by noncommutative coordinates \(\hat{x}_{\mu}\) and Lorentz generators \(M_{\mu\nu}\), \(\left(M_{\mu\nu}=-M_{\nu\mu}\right),\;\) satisfying the commutation relations \[[\hat{x}_{\mu},\hat{x}_{\nu}] =i\beta^{2}M_{\mu\nu},\quad\mu,\nu=0,1,2,3,\;\beta\in\mathbb{R} \tag{1}\] \[\left[M_{\mu\nu},\hat{x}_{\lambda}\right] =-i\left(\hat{x}_{\mu}\eta_{\nu\lambda}-\hat{x}_{\nu}\eta_{\mu\lambda}\right)\] (2) \[\left[M_{\mu\nu},M_{\rho\sigma}\right] =i\left(\eta_{\mu\rho}M_{\nu\sigma}-\eta_{\mu\sigma}M_{\nu\rho}-\eta_{\nu\rho}M_{\mu\sigma}+\eta_{\nu\sigma}M_{\mu\rho}\right) \tag{3}\] where \(\eta=diag(-1,1,1,1)\) is the Minkowski metric. Our goal is to construct realizations of the Snyder algebra (1)-(3) in terms of the Heisenberg algebra generated by coordinates \(x_{\mu}\) and momenta \(p_{\mu}\) satisfying the commutation relations \[[x_{\mu},x_{\nu}]=[p_{\mu},p_{\nu}]=0,\qquad[x_{\mu},p_{\nu}]=i\eta_{\mu\nu}. \tag{4}\] In Section (2) we start with the original Snyder realization with \(M_{\mu\nu}=x_{\mu}p_{\nu}-x_{\nu}p_{\mu}\) and use similarity transformations to construct a family of realizations of the Snyder model. In Section (3) we apply this method to construct realizations of the extended Snyder model in which the Lorentz generators are realized by \(M_{\mu\nu}=\hat{x}_{\mu\nu}+x_{\mu}p_{\nu}-x_{\nu}p_{\mu}\), where \(\hat{x}_{\mu\nu}\) are additional tensorial generators. Finally, in Section (4) we present the exact new realization of the \(\kappa\)-deformed extended Snyder model. ## 2 Realizations of the Snyder model The original Snyder realization in terms of \(x_{\mu}\) and \(p_{\mu}\) is given by \[\hat{x}_{\mu} =x_{\mu}+\beta^{2}(x\cdot p)p_{\mu} \tag{5}\] \[M_{\mu\nu} =x_{\mu}p_{\nu}-x_{\nu}p_{\mu} \tag{6}\] where \(x\cdot p=x_{\alpha}p_{\alpha}\). 
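As a quick cross-check of the realization (5)-(6), the defining relation (1) can be verified symbolically by representing \(p_{\mu}\) as differential operators acting on a test function. The following is a minimal SymPy sketch (not part of the original paper); it assumes the convention \(p_{\nu}=-i\eta_{\nu\alpha}\partial_{\alpha}\), which reproduces \([x_{\mu},p_{\nu}]=i\eta_{\mu\nu}\) of Eq. (4):

```python
import sympy as sp

beta = sp.symbols('beta', real=True)
xs = sp.symbols('x0 x1 x2 x3', real=True)
eta = sp.diag(-1, 1, 1, 1)                  # Minkowski metric
f = sp.Function('f')(*xs)                   # generic test function

def p(nu, expr):                            # p_nu = -i eta_{nu alpha} d/dx_alpha
    return -sp.I * sum(eta[nu, a] * sp.diff(expr, xs[a]) for a in range(4))

def xdotp(expr):                            # x.p = eta_{alpha beta} x_alpha p_beta
    return sum(eta[a, b] * xs[a] * p(b, expr) for a in range(4) for b in range(4))

def xhat(mu, expr):                         # Eq. (5)
    return xs[mu] * expr + beta**2 * xdotp(p(mu, expr))

def M(mu, nu, expr):                        # Eq. (6)
    return xs[mu] * p(nu, expr) - xs[nu] * p(mu, expr)

# Check [xhat_mu, xhat_nu] f = i beta^2 M_{mu nu} f for one pair of indices:
mu, nu = 0, 1
lhs = xhat(mu, xhat(nu, f)) - xhat(nu, xhat(mu, f))
rhs = sp.I * beta**2 * M(mu, nu, f)
print(sp.simplify(sp.expand(lhs - rhs)))    # -> 0
```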
Further realizations of the Snyder model can be obtained by similarity transformations with the operator \(S=e^{iG}\), where Footnote 1: We denote \(x_{\alpha}p_{\alpha}=\sum_{\alpha,\beta=0}^{3}\eta_{\alpha\beta}x_{\alpha}p_{\beta}\), and generally summation over pairs of repeated indices is assumed. \[G=F_{0}(u)+(x\cdot p)F(u),\quad u=\beta^{2}p^{2},\quad F_{0}(0)=0,\quad F(0)=0,\quad\beta\in\mathbb{R},\quad p^{2}=p_{\alpha}p_{\alpha}. \tag{7}\] Note that for \(\beta^{2}=0\) we have \(G=0\) and \(S=\mathrm{id}\), and \(G\) is Lorentz invariant and linear in the coordinates \(x_{\alpha}\). **Theorem 1**: _Using the similarity transformation defined by \(S=e^{iG}\), where \(G\) is given by (7), we obtain the corresponding realizations of the Snyder model_ \[\hat{x}_{\mu} =S\left(x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}\right)S^{-1}\] \[=x_{\mu}\varphi_{1}(u)+\beta^{2}(x\cdot p)p_{\mu}\varphi_{2}(u)+\beta^{2}p_{\mu}\varphi_{3}(u)\] _where_ \[\varphi_{2}(u)=\frac{1+2\dot{\varphi}_{1}(u)\varphi_{1}(u)}{\varphi_{1}(u)-2u\dot{\varphi}_{1}(u)},\quad\dot{\varphi}_{1}=\frac{d\varphi_{1}(u)}{du}\quad\mbox{and}\quad u=\beta^{2}p^{2}. \tag{8}\] In order to prove the above theorem, we first prove the following propositions. Note that if \(F_{0}(u)=0\), then \(\varphi_{3}(u)=0\); hence, for simplicity, in what follows we assume that \(F_{0}(u)=0\) and \(G=(x\cdot p)F(u)\). **Proposition 1**: _Let \(x_{\mu}^{\prime}=Sx_{\mu}S^{-1}\), where \(S=e^{iG}\) and \(G=(x\cdot p)F(u)\). Then_ \[x_{\mu}^{\prime}=x_{\mu}g_{1}(u)+\beta^{2}(x\cdot p)p_{\mu}g_{2}(u) \tag{9}\] _where_ \[g_{1}(u)=\left(e^{F(1-2u\frac{d}{du})}\right)(1). \tag{10}\] _Proof._ By defining the iterated commutator \[(\mbox{ad}_{G})^{n}\left(x_{\mu}\right)=\underbrace{[G,\ldots,[G,[G,x_{\mu}]]\ldots]}_{n},\quad(\mbox{ad}_{G})^{0}\left(x_{\mu}\right)=x_{\mu},\] and using the Hadamard formula, we have \[x_{\mu}^{\prime}=Sx_{\mu}S^{-1}=e^{iG}x_{\mu}e^{-iG}=x_{\mu}+\sum_{n=1}^{\infty}\frac{(\mbox{ad}_{iG})^{n}\left(x_{\mu}\right)}{n!}. \tag{11}\] We prove relation (9) by induction on \(n\). Using the Leibniz rule for the adjoint representation and \[[F,x_{\mu}]=-i\frac{\partial F}{\partial p_{\mu}}=-i2\beta^{2}p_{\mu}\dot{F},\quad F\equiv F(u)\quad\mbox{and}\quad\dot{F}=\frac{dF}{du}, \tag{12}\] it is easy to see that for \(n=1\) we have \[(\mbox{ad}_{iG})\left(x_{\mu}\right)=i\left[(x\cdot p)F,x_{\mu}\right]=x_{\mu}g_{11}(u)+\beta^{2}(x\cdot p)p_{\mu}g_{21}(u)\] where \(g_{11}(u)=F\) and \(g_{21}(u)=2\dot{F}\). In the following we denote \(g_{ij}\equiv g_{ij}(u)\). Assume that the relation \[(\mbox{ad}_{iG})^{n}\left(x_{\mu}\right)=x_{\mu}g_{1n}+\beta^{2}(x\cdot p)p_{\mu}g_{2n} \tag{13}\] holds for some \(n>1\). Then by the induction assumption we have \[\left(\mathrm{ad}_{iG}\right)^{n+1}(x_{\mu}) =i\left[(x\cdot p)F,x_{\mu}g_{1n}+\beta^{2}(x\cdot p)p_{\mu}g_{2n}\right]\] \[=x_{\mu}g_{1(n+1)}+\beta^{2}(x\cdot p)p_{\mu}g_{2(n+1)}\] where, using the Leibniz rule and (12), we obtain \[g_{1(n+1)}=Fg_{1n}-2u\dot{g}_{1n}F \tag{14}\] and \[g_{2(n+1)}=2\dot{F}g_{1n}+2u\dot{F}g_{2n}-g_{2n}F-2u\dot{g}_{2n}F. \tag{15}\] Let us denote \[g_{1}(u)=\sum_{n=0}^{\infty}\frac{g_{1n}}{n!}\quad\text{where}\quad g_{10}=1,\] and \[g_{2}(u)=\sum_{n=1}^{\infty}\frac{g_{2n}}{n!}.\] Then, substituting (13) into (11), it follows that (9) holds. Now, we expand \[\left(e^{F(1-2u\frac{d}{du})}\right)(1)=\sum_{n=0}^{\infty}\frac{\left(F(1-2u\frac{d}{du})\right)^{n}(1)}{n!}\] and prove by induction on \(n\) that \[g_{1n}=\left(F(1-2u\frac{d}{du})\right)^{n}(1). 
\tag{16}\] Note that for \(n=0\) we have \(g_{10}=id(1)=1\) and for \(n=1\) it is easy to verify that \[\left(F(1-2u\frac{d}{du})\right)(1)=F=g_{11}.\] Suppose that relation (16) is true for some \(n>1\). By the induction assumption and from (14) we have \[\left(F(1-2u\frac{d}{du})\right)^{n+1}(1) =\left(\left(F(1-2u\frac{d}{du})\right)\circ\left(F(1-2u\frac{d} {du})\right)^{n}\right)(1)\] \[=\left(F(1-2u\frac{d}{du})\right)(g_{1n})=Fg_{1n}-2u\dot{g}_{1n}F =g_{1(n+1)},\] which proves our claim (16) and consequently (10) holds. \(\blacksquare\) **Proposition 2**: _Let \(p_{\mu}^{\prime}=Sp_{\mu}S^{-1}\), where \(S=e^{iG}\) and \(G=(x\cdot p)F(u)\). Then_ \[p_{\mu}^{\prime}=p_{\mu}g_{3}(u) \tag{17}\] _where_ \[g_{3}(u)=\left(e^{-F(1+2u\frac{d}{du})}\right)(1). \tag{18}\] _Proof._ Analogous to the proof of the previous proposition, first by using the Hadamard formula we find \[p_{\mu}^{\prime}=Sp_{\mu}S^{-1}=e^{iG}p_{\mu}e^{-iG}=p_{\mu}+\sum_{n=1}^{ \infty}\frac{\left(\mathrm{ad}_{iG}\right)^{n}\left(p_{\mu}\right)}{n!}. \tag{19}\] Then by induction on \(n\) we prove that \[\left(\mathrm{ad}_{iG}\right)^{n}\left(p_{\mu}\right)=p_{\mu}g_{3n}. \tag{20}\] After short computation, for \(n=1\) we have \[i[(x\cdot p)F,p_{\mu}]=i[(x\cdot p),p_{\mu}]F=p_{\mu}g_{31}\] where \(g_{31}=-F\). Assume that relation (20) holds for some \(n>1\). Then by the induction assumption we find \[\left(\mathrm{ad}_{iG}\right)^{n+1}(p_{\mu})=i[(x\cdot p)F,p_{\mu}g_{3n}]=p_{ \mu}g_{3(n+1)}\] where \[g_{3(n+1)}=-Fg_{3n}-2u\dot{g}_{3n}F, \tag{21}\] which proves claim (20). Finnaly, if we denote \[g_{3}(u)=\sum_{n=0}^{\infty}\frac{g_{3n}}{n!}\quad\mbox{where}\quad g_{30}=1,\] then (17) holds. Also, we prove by induction on \(n\) that \[g_{3n}=\left(-F(1+2u\frac{d}{du})\right)^{n}(1). \tag{22}\] Note that for \(n=0\) we have \(g_{30}=id(1)=1\) and for \(n=1\) we get \[\left(-F(1+2u\frac{d}{du})\right)(1)=-F=g_{31}.\] Suppose that relation (22) holds for some \(n>1\). Then by the induction assumption and from (21) we have \[\left(-F(1+2u\frac{d}{du})\right)^{n+1}(1)=\left(\left(-F(1+2u\frac{ d}{du})\right)\circ\left(-F(1+2u\frac{d}{du})\right)^{n}\right)(1)\] \[=\left(-F(1+2u\frac{d}{du})\right)(g_{3n})=-Fg_{3n}-2u\dot{g}_{3n }F=g_{3(n+1)}.\] Therefore, (22) holds for every \(n\), which implies that (18) holds. \(\blacksquare\) Now, using results proven in the previous propositions, we can finally prove our main result given by Theorem (1). _Proof._ [Theorem (1)] Let us denote \(x^{\prime}_{\mu}=Sx_{\mu}S^{-1}\) and \(p^{\prime}_{\mu}=Sp_{\mu}S^{-1}\), where \(S=e^{iG}\) and \(G=(x\cdot p)F(u)\). Then \[[x^{\prime}_{\mu},x^{\prime}_{\nu}]=[p^{\prime}_{\mu},p^{\prime}_{\nu}]=0, \quad[x^{\prime}_{\mu},p^{\prime}_{\nu}]=i\eta_{\mu\nu} \tag{23}\] and \[x^{\prime}_{\mu}+\beta^{2}(x^{\prime}\cdot p^{\prime})p^{\prime}_{\mu}=S \left(x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}\right)S^{-1}. \tag{24}\] Inserting (9) and (17) into (23) we get \[i\eta_{\mu\nu}g_{1}g_{3}+ip_{\nu}\frac{\partial g_{3}}{\partial p_{\mu}}g_{1} +i\beta^{2}\left(\frac{\partial p_{\nu}}{\partial p_{\alpha}}p_{\alpha}g_{3}+ \frac{\partial g_{3}}{\partial p_{\alpha}}p_{\alpha}p_{\nu}\right)p_{\mu}g_{2 }=i\eta_{\mu\nu},\quad g_{i}\equiv g_{i}(u),\;i=1,2,3,\] which implies \[g_{3}=\frac{1}{g_{1}} \tag{25}\] and \[2g_{1}\dot{g}_{3}+g_{2}\left(g_{3}+2u\dot{g}_{3}\right)=0. \tag{26}\] Substituting (25) into (26) we find \[g_{2}=\frac{2\dot{g}_{1}g_{1}}{g_{1}-2u\dot{g}_{1}}. 
\tag{27}\] Finally, using (9) and (17), it follows from (24) that \[S\left(x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}\right)S^{-1}=x_{\mu}g_{1}+\beta^{2} (x\cdot p)p_{\mu}\left(g_{2}+g_{3}+ug_{2}g_{3}^{2}\right). \tag{28}\] If we denote \(\varphi_{1}=g_{1}\) and \(\varphi_{2}=g_{2}+g_{3}+ug_{2}g_{3}^{2}\), then (8) follows from (25) and (27). \(\blacksquare\) **Example:** For \(F_{0}=0\) and \(F=-\frac{1}{2}u\) we get \[\varphi_{1}(u)=\sqrt{1-u}\quad\text{and}\quad\varphi_{2}(u)=0\] hence, \[\hat{x}_{\mu}=x_{\mu}\sqrt{1-u}.\] **Remark 1**: _If \(F=0\) and \(F_{0}\neq 0\) then \(x_{\mu}^{\prime}=x_{\mu}+2\beta^{2}p_{\mu}\hat{F}_{0},\;p_{\mu}^{\prime}=p_{\mu}\) and_ \[\hat{x}_{\mu}=e^{iF_{0}}\left(x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}\right)e^{-iF_ {0}}=x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}+2\beta^{2}p_{\mu}\hat{F}_{0}\left(1+u \right).\] **Remark 2**: _When \(\varphi_{1}(u)\) is fixed and \(\varphi_{2}(u)\) is given with (8), then \(\varphi_{3}(u)\) depends on \(F_{0}\) and can be arbitrary. There is a family of realizations with fixed \(\varphi_{1}(u)\) and arbitrary \(\varphi_{3}(u)\)._ **Remark 3**: _A Hermitian realization can be obtained starting with the hermitian form of (5), that is_ \[\hat{x}_{\mu}=x_{\mu}+\frac{1}{2}\beta^{2}\left((x\cdot p)p_{\mu}+p_{\mu}(p \cdot x)\right)\] _and instead of \(G\) writing \(\frac{1}{2}\left(G+G^{\dagger}\right)\). Then result of Theorem (1) is obtained in hermitian form \(\frac{1}{2}\left(\hat{x}_{\mu}+\hat{x}_{\mu}^{\dagger}\right)\)._ ## 3 Realizations of the extended Snyder model Different realizations of the Snyder algebra can be obtained introducing additional tensorial generators \(\hat{x}_{\mu\nu}=-\hat{x}_{\nu\mu}\). This alternative approach was suggested in [6] and it was studied perturbatively from a different point of view in [8], [10] and [18] based on the results in [22]. The additional generators \(\hat{x}_{\mu\nu}\) are assumed to satisfy the commutation relations \[\left[\hat{x}_{\mu\nu},\hat{x}_{\rho\sigma}\right] =i\left(\eta_{\mu\rho}\hat{x}_{\nu\sigma}-\eta_{\mu\sigma}\hat{x} _{\nu\rho}-\eta_{\nu\rho}\hat{x}_{\mu\sigma}+\eta_{\nu\sigma}\hat{x}_{\mu\rho}\right) \tag{29}\] \[\left[\hat{x}_{\mu\nu},x_{\lambda}\right] =0\] (30) \[\left[\hat{x}_{\mu\nu},p_{\lambda}\right] =0. \tag{31}\] In this case we consider realizations of the Lorentz generators of the form \[M_{\mu\nu}=\hat{x}_{\mu\nu}+x_{\mu}p_{\nu}-x_{\nu}p_{\mu} \tag{32}\] \[M_{\mu\nu}\rhd 1=\hat{x}_{\mu\nu}\rhd 1=x_{\mu\nu}\quad\mbox{and}\quad p_{\mu} \rhd 1=0 \tag{33}\] where \(x_{\mu\nu}\) are commuting variables. **Theorem 2**: _Extension of the Snyder realization (5)-(6) with additional generators \(\hat{x}_{\mu\nu}\) is given by_ \[\hat{x}_{\mu}=x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}-\beta^{2}\hat{x}_{\mu\alpha}p _{\alpha}\frac{1}{1+\sqrt{1+u}} \tag{34}\] \[M_{\mu\nu}=\hat{x}_{\mu\nu}+x_{\mu}p_{\nu}-x_{\nu}p_{\mu}. \tag{35}\] Proof.: In order to prove that we can construct the realization of the Snyder model by (34) and (35), we show that (34) and (35) satisfy Snyder algebra (1)-(3). A short computation using (12) yields \[\left[x_{\mu},\frac{1}{1+\sqrt{1+u}}\right]=\frac{-i\beta^{2}p_{\mu}}{\sqrt{1+u} \left(1+\sqrt{1+u}\right)^{2}}, \tag{36}\] \[\left[\frac{1}{1+\sqrt{1+u}},(x\cdot p)\right]=\frac{iu}{\sqrt{1+u}\left(1+ \sqrt{1+u}\right)^{2}}. \tag{37}\] and \[\left[M_{\mu\nu},\frac{1}{1+\sqrt{1+u}}\right]=0. 
\tag{38}\] Now, from (29)-(31) and (36)-(37), using bilinearity of the Lie bracket we obtain \[\left[\hat{x}_{\mu},\hat{x}_{\nu}\right] =\left[x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}-\beta^{2}\hat{x}_{\mu\alpha}p_{\alpha}\frac{1}{1+\sqrt{1+u}},x_{\nu}+\beta^{2}(x\cdot p)p_{\nu}-\beta^{2}\hat{x}_{\nu\rho}p_{\rho}\frac{1}{1+\sqrt{1+u}}\right]\] \[=i\beta^{2}(x_{\mu}p_{\nu}-x_{\nu}p_{\mu})+i2\beta^{2}\frac{\hat{x}_{\mu\nu}}{1+\sqrt{1+u}}+i\beta^{2}u\frac{\hat{x}_{\mu\nu}}{\left(1+\sqrt{1+u}\right)^{2}}=i\beta^{2}(x_{\mu}p_{\nu}-x_{\nu}p_{\mu}+\hat{x}_{\mu\nu})\] \[=i\beta^{2}M_{\mu\nu}.\] Similarly, by using (38), we check that (34) and (35) satisfy (2)-(3); therefore (34) and (35) give a realization of the extended Snyder model. \(\blacksquare\) In order to obtain a family of realizations of the extended Snyder model we use the similarity transformations from Section (2), with \(S=e^{iG}\) where \(G=(x\cdot p)F(u)\). First, note that \[S\left(\frac{1}{1+\sqrt{1+u}}\right)S^{-1} =S\left(\sum_{m=1}^{\infty}\binom{\frac{1}{2}}{m}u^{m-1}\right)S^{-1}\] \[=\sum_{m=1}^{\infty}\binom{\frac{1}{2}}{m}(\beta^{2}p^{\prime 2})^{m-1}=\frac{1}{1+\sqrt{1+\beta^{2}p^{\prime 2}}} \tag{39}\] and \[S\left(\hat{x}_{\mu\nu}\right)S^{-1}=\hat{x}_{\mu\nu}. \tag{40}\] Now (39) and (40) imply that \[\hat{x}_{\mu} =S\left(x_{\mu}+\beta^{2}(x\cdot p)p_{\mu}-\beta^{2}\hat{x}_{\mu\alpha}p_{\alpha}\frac{1}{1+\sqrt{1+u}}\right)S^{-1}\] \[=x_{\mu}^{\prime}+\beta^{2}(x^{\prime}\cdot p^{\prime})p_{\mu}^{\prime}-\beta^{2}\hat{x}_{\mu\alpha}p_{\alpha}^{\prime}\frac{1}{1+\sqrt{1+\beta^{2}p^{\prime 2}}}.\] Finally, by using the results given in Section (2), (25) and (28), we obtain a family of realizations of the extended Snyder model \[\hat{x}_{\mu}=x_{\mu}\varphi_{1}(u)+\beta^{2}(x\cdot p)p_{\mu}\varphi_{2}(u)-\beta^{2}\hat{x}_{\mu\alpha}p_{\alpha}\frac{1}{\varphi_{1}(u)+\sqrt{\varphi_{1}^{2}(u)+u}}, \tag{41}\] where \(\varphi_{1}(u)\) and \(\varphi_{2}(u)\) satisfy (8). Note that realizations (34), (35), (41) and (8) are the exact results written in closed form. ## 4 \(\kappa\)-deformed extended Snyder model In this section we consider a family of Lie algebras containing the \(\kappa\)-Poincare and Snyder algebras as special cases. They are generated by the NC coordinates \(\hat{x}_{\mu}\) and Lorentz generators \(M_{\mu\nu}\) satisfying \[[\hat{x}_{\mu},\hat{x}_{\nu}]=i\left(a_{\mu}\hat{x}_{\nu}-a_{\nu}\hat{x}_{\mu}+\beta^{2}M_{\mu\nu}\right) \tag{42}\] \[[M_{\mu\nu},\hat{x}_{\lambda}]=-i\left(\hat{x}_{\mu}\eta_{\nu\lambda}-\hat{x}_{\nu}\eta_{\mu\lambda}+a_{\mu}M_{\nu\lambda}-a_{\nu}M_{\mu\lambda}\right) \tag{43}\] \[[M_{\mu\nu},M_{\rho\sigma}]=i\left(\eta_{\mu\rho}M_{\nu\sigma}-\eta_{\mu\sigma}M_{\nu\rho}-\eta_{\nu\rho}M_{\mu\sigma}+\eta_{\nu\sigma}M_{\mu\rho}\right), \tag{44}\] where \(a_{\mu}=\frac{1}{\kappa}u_{\mu},\ u^{2}\in\{-1,0,1\}\) and \(\kappa\) is the mass parameter with \(\frac{1}{\kappa}\neq\beta\). Such models were considered in [23]-[24] and the \(\kappa\)-deformed extended Snyder model was considered in [25]-[27]. 
If \(\ M_{\mu\nu}\ =\ x_{\mu}p_{\nu}-x_{\nu}p_{\mu}\) and \(\ [M_{\mu\nu},p_{\lambda}]\ =\ i\left(p_{\nu}\eta_{\mu\lambda}-p_{\mu}\eta_{\nu \lambda}\right),\) then one particular realization of above algebra is given in [23]-[24] with \[\hat{x}_{\mu}=x_{\mu}\sqrt{1+(a^{2}-\beta^{2})p^{2}}+M_{\mu\alpha}a_{\alpha}.\] For \(a_{\mu}=0\) we get a realization of the Snyder model \[\hat{x}_{\mu}=x_{\mu}\sqrt{1-u}.\] For \(\beta^{2}=0\) we get the natural realization [28]-[29], i.e., a realization in the classical basis [30] of the \(\kappa\)-Poincare algebra \[\hat{x}_{\mu}=x_{\mu}\sqrt{1+a^{2}p^{2}}+M_{\mu\alpha}a_{\alpha}.\] In the following paper we present the exact new result for the \(\kappa\)-deformed extended Snyder model that is written in closed form and different from the perturbative results discussed in [25]-[26]. **Theorem 3**: _Let_ \[M_{\mu\nu}=\hat{x}_{\mu\nu}+x_{\mu}p_{\nu}-x_{\nu}p_{\mu}. \tag{45}\] _Then one particular realization of the algebra (42)-(44) is given by_ \[\hat{x}_{\mu}=x_{\mu}\sqrt{1+(a^{2}-\beta^{2})p^{2}}+M_{\mu\alpha}a_{\alpha}+( a^{2}-\beta^{2})\hat{x}_{\mu\alpha}p_{\alpha}\frac{1}{1+\sqrt{1+(a^{2}- \beta^{2})\,p^{2}}}. \tag{46}\] _Proof._ We have to show that realization (46) satisfies the algebra (42)-(44). By using (29)-(31), it is easy to see that \[\left[M_{\mu\nu},p_{\lambda}\right]=i\left(p_{\nu}\eta_{\mu\lambda}-p_{\mu}\eta_{ \nu\lambda}\right), \tag{47}\] \[\left[M_{\mu\nu},x_{\lambda}\right]=i\left(x_{\nu}\eta_{\mu\lambda}-x_{\mu} \eta_{\nu\lambda}\right) \tag{48}\] and \[\left[M_{\mu\nu},\hat{x}_{\rho\sigma}\right]=i\left(\eta_{\mu\rho}\hat{x}_{\nu \sigma}-\eta_{\mu\sigma}\hat{x}_{\nu\rho}-\eta_{\nu\rho}\hat{x}_{\mu\sigma}+ \eta_{\nu\sigma}\hat{x}_{\mu\rho}\right). \tag{49}\] Furthermore, from (12) we get \[\left[x_{\mu},\frac{1}{1+\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}}\right]= \frac{-i\left(a^{2}-\beta^{2}\right)p_{\mu}}{\sqrt{1+\left(a^{2}-\beta^{2} \right)p^{2}}\left(1+\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}\right)^{2}}, \tag{50}\] \[\left[x_{\mu},\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}\right]=\frac{i\left( a^{2}-\beta^{2}\right)p^{2}}{\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}}. \tag{51}\] and \[\left[M_{\mu\nu},\frac{1}{1+\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}}\right] =\left[M_{\mu\nu},\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}\right]=0. \tag{52}\] Now, from (47)-(52) we have \[\left[\hat{x}_{\mu},\hat{x}_{\nu}\right] =\left[x_{\mu}\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}+M_{\mu \alpha}a_{\alpha}+(a^{2}-\beta^{2})\hat{x}_{\mu\alpha}p_{\alpha}\frac{1}{1+ \sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}},\right.\] \[\left.x_{\nu}\sqrt{1+\left(a^{2}-\beta^{2}\right)p^{2}}+M_{\nu \rho}a_{\rho}+(a^{2}-\beta^{2})\hat{x}_{\nu\rho}p_{\rho}\frac{1}{1+\sqrt{1+ \left(a^{2}-\beta^{2}\right)p^{2}}}\right]\] \[=-i\left(a^{2}-\beta^{2}\right)\left(x_{\mu}p_{\nu}-x_{\nu}p_{\mu }\right)+i(a_{\mu}x_{\nu}-a_{\nu}x_{\mu})\sqrt{1+(a^{2}-\beta^{2})p^{2}}\] \[+i(a_{\mu}M_{\nu\rho}a_{\rho}-a_{\nu}M_{\mu\alpha}a_{\alpha}+a^{2 }M_{\mu\nu})-i\left(a^{2}-\beta^{2}\right)\hat{x}_{\mu\nu}\] \[=i\left(a_{\mu}\hat{x}_{\nu}-a_{\nu}\hat{x}_{\mu}+\beta^{2}M_{\mu \nu}\right).\] In similar way, by using (47)-(52), we show that (45) and (46) satisfy (43)-(44). 
\(\blacksquare\) For \(a_{\mu}=0\) we get the realization of the extended Snyder model found in Section (3) \[\hat{x}_{\mu}=x_{\mu}\sqrt{1-u}-\beta^{2}\hat{x}_{\mu\alpha}p_{\alpha}\frac{1 }{1+\sqrt{1-u}}.\] For \(\beta^{2}=0\) we find \[\hat{x}_{\mu}=x_{\mu}\sqrt{1+a^{2}p^{2}}+M_{\mu\alpha}a_{\alpha}+a^{2}\hat{x} _{\mu\alpha}p_{\alpha}\frac{1}{1+\sqrt{1+a^{2}p^{2}}}.\] This is a new result corresponding to the \(\kappa\)-Poincare algebra with additional tensorial generators \(\hat{x}_{\mu\nu}\). The most general realizations of \(\hat{x}_{\mu}\) in all cases in this section are obtained by using the most general corresponding similarity transformations. Construction of Hermitian realizations in Section (3) and (4) can be obtained simply by changing \(\hat{x}_{\mu}\) with \(\frac{1}{2}(\hat{x}_{\mu}+\hat{x}_{\mu}^{\dagger})\), as in Remark (3) in Section (2). ### Acknowledgement We thank S. Kresic Juric, Z. Skoda and S. Mignemi for useful comments.
2304.06612
Chiral magnetohydrodynamics with zero total chirality
We study the evolution of magnetic fields coupled with chiral fermion asymmetry in the framework of chiral magnetohydrodynamics with zero initial total chirality. The initial magnetic field has a turbulent spectrum peaking at a certain characteristic scale and is fully helical with positive helicity. The initial chiral chemical potential is spatially uniform and negative. We consider two opposite cases where the ratio of the length scale of the chiral plasma instability (CPI) to the characteristic scale of the turbulence is smaller and larger than unity. These initial conditions might be realized in cosmological models such as certain types of axion inflation. The magnetic field and chiral chemical potential evolve with inverse cascading in such a way that the magnetic helicity and chirality cancel each other at all times. The CPI time scale is found to determine mainly the time when the magnetic helicity spectrum attains negative values at high wave numbers. The turnover time of the energy-carrying eddies, on the other hand, determines the time when the peak of the spectrum starts to shift to smaller wave numbers via an inverse cascade. The onset of helicity decay is determined by the time when the chiral magnetic effect becomes efficient at the peak of the initial magnetic energy spectrum. When spin flipping is important, the chiral chemical potential vanishes and the magnetic helicity becomes constant, which leads to a faster increase of the correlation length, as expected from magnetic helicity conservation. This also happens when the initial total chirality is imbalanced. Our findings have important implications for baryogenesis after axion inflation.
Axel Brandenburg, Kohei Kamada, Kyohei Mukaida, Kai Schmitz, Jennifer Schober
2023-04-13T15:26:22Z
http://arxiv.org/abs/2304.06612v2
# Chiral magnetohydrodynamics with zero total chirality ###### Abstract We study the evolution of magnetic fields coupled with chiral fermion asymmetry in the framework of chiral magnetohydrodynamics with zero initial total chirality. The initial magnetic field has a turbulent spectrum peaking at a certain characteristic scale and is fully helical with positive helicity. The initial chiral chemical potential is spatially uniform and negative. We consider two opposite cases where the ratio of the length scale of the chiral plasma instability (CPI) to the characteristic scale of the turbulence is smaller and larger than unity. These initial conditions might be realized in cosmological models such as certain types of axion inflation. The magnetic field and chiral chemical potential evolve with inverse cascading in such a way that the magnetic helicity and chirality cancel each other at all times. The CPI time scale is found to determine mainly the time when the magnetic helicity spectrum attains negative values at high wave numbers. The turnover time of the energy-carrying eddies, on the other hand, determines the time when the peak of the spectrum starts to shift to smaller wave numbers via an inverse cascade. The onset of helicity decay is determined by the time when the chiral magnetic effect becomes efficient at the peak of the initial magnetic energy spectrum. When spin flipping is important, the chiral chemical potential vanishes and the magnetic helicity becomes constant, which leads to a faster increase of the correlation length, as expected from magnetic helicity conservation. This also happens when the initial total chirality is imbalanced. Our findings have important implications for baryogenesis after axion inflation. + Footnote †: preprint: NORDITA-2023-014, RESCEU-5/23, KEK-TH-2504, MS-TP-23-13 ## I Introduction Relativistic plasmas are described by the evolution equations of chiral magnetohydrodynamics (MHD) [1; 2; 3; 4; 5; 6; 7; 8; 9]. Chirality enters in two distinct ways: first, through a nonvanishing chiral chemical potential, \(\tilde{\mu}_{5}\), and second, through nonvanishing magnetic helicity density, \(\mathbf{A}\cdot\mathbf{B}\), where \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\) is the magnetic field expressed in terms of the vector potential \(\mathbf{A}\). It has been known for some time that fermion chirality can be transferred into magnetic helicity and vice versa through the chiral anomaly [10; 11]. The transfer of fermion chirality to magnetic helicity occurs through an instability [12] known as the chiral plasma instability (CPI) [13]. This instability is the fastest at a specific wave number, whose value depends on the chiral chemical potential. The transfer from magnetic helicity to chiral chemical potential does not involve any instability, but occurs just through a nonvanishing nonlinear source term in the evolution equation for the chiral chemical potential [14; 15; 3]. These differences in the evolutions of the chiral chemical potential and magnetic field can lead to nontrivial dynamics, which has triggered a lot of research [16; 17; 18]. Since fermion chirality is tightly related to the baryon and lepton asymmetries at high temperature in the early Universe, their co-evolution with magnetic helicity in the context of cosmology has also extensively studied [19; 20; 21; 22; 23; 24; 25; 26; 27]. Previous investigations mostly assumed an initial imbalance between fermion chirality and magnetic helicity. 
This leads to a conversion of fermion chirality to a maximally helical magnetic field [3]. Also, spatial fluctuations alone can lead to magnetic field production [28; 29]. In many investigations, however, the initial fermion chirality is nonvanishing while the initial magnetic helicity is zero, or vice versa. Such a chiral asymmetry, which can trigger the CPI, could be generated [30; 31; 32] in GUT baryogenesis in the early Universe [33; 34; 35; 36; 37] or through weak interactions in compact stars [38; 39; 40; 41; 42] (see also Ref. [43] and references therein). However, numerical studies of another interesting initial condition, where the fermion chirality is exactly opposite to the magnetic helicity, are still lacking. Such an initial condition is expected if the chiral symmetry in the fermion sector is only broken through the topological density, \(\partial_{\mu}J_{5}^{\mu}=-e^{2}F_{\mu\nu}\tilde{F}^{\mu\nu}/(8\pi^{2}\hbar^{2}c)\), or the chiral anomaly [10; 11], with \(J_{5}^{\mu}\) being the chiral current and \(e^{2}F_{\mu\nu}\tilde{F}^{\mu\nu}/(8\pi^{2}\hbar^{2}c)\) being the topological density. Since the topological density can be written as a total derivative of the magnetic helicity density, the sum of chiral asymmetry and magnetic helicity vanishes when they are generated [44]. Configurations with vanishing total chirality are interesting not only in the context of chiral MHD, but also in particle physics and cosmology. At a high enough temperature realized in the early Universe, the electron Yukawa interaction becomes inefficient for \(T\gtrsim 10^{5}\,\text{GeV}\)[45; 46]. There we find the conservation of the total chirality because of \(\partial_{\mu}J_{e_{R}}^{\mu}=-g_{Y}^{2}Y_{\mu\nu}\tilde{Y}^{\mu\nu}/(16\pi^{2}\hbar^{2}c)\) with \(J_{e_{R}}^{\mu}\) being the right-handed electron current and \(Y^{\mu\nu}\) being the field strength of the hypercharge gauge field with gauge coupling \(g_{Y}\). For instance, in a certain class of axion inflation, configurations with zero net chirality are generated during inflation [44], which can be the origin of the observed baryon asymmetry of the Universe [47; 48; 49] and could also explain the proposed intergalactic magnetic field; see, however, Ref. [26] for the baryon overproduction problem and Ref. [50] for the too-large baryon isocurvature problem. The main purpose of this paper is to perform a full numerical chiral MHD simulation under the initial condition of vanishing total chirality and provide a better understanding of the nonlinear dynamics in this case. Before we begin our investigations, it is useful to recall the main findings of earlier work where the total chirality was mostly different from zero. Following the work of Ref. [14], which studied a system consisting of the gauge field and the chiral chemical potential, but without fluid velocity fields, and with the initial condition \(\langle\mathbf{A}\cdot\mathbf{B}\rangle\neq 0,\tilde{\mu}_{5}=0\), three stages can be identified: (i) exponential decline of the magnetic helicity together with an increase of \(\tilde{\mu}_{5}\), followed by (ii) a continued decrease of the typical peak wave number \(k_{\rm p}\), while \(\tilde{\mu}_{5}\) stays at its maximum value with \(\langle\mathbf{A}\cdot\mathbf{B}\rangle\) being essentially zero, and (iii) a phase when all the fermion chirality \(\tilde{\mu}_{5}\) gets transferred back to magnetic helicity. 
As expected, owing to magnetic helicity conservation, and because the magnetic field from the CPI is maximally helical, the magnetic energy density \(\langle\mathbf{B}^{2}\rangle/2\) decays at late times such that \(\langle\mathbf{B}^{2}\rangle\xi_{\rm M}\approx\text{const}\), where \(\xi_{\rm M}\equiv k_{\rm p}^{-1}\) is the magnetic correlation length. In other words, both \(\langle\mathbf{B}^{2}\rangle\) and \(k_{\rm p}\) decay in the same fashion, but, unlike the expected \(t^{-2/3}\) scaling found previously for helical turbulence [51; 52; 53; 54], they find a \(t^{-1/2}\) scaling both for \(\langle\mathbf{B}^{2}\rangle\) and \(k_{\rm p}\). For sufficiently strong initial magnetic fields, the magnetic Reynolds number can be much larger than unity and the eddy turnover scale much longer than the estimated inverse peak momentum scale, if equipartition between the magnetic fields and fluid velocity fields is established. This suggests that the effect of the fluid velocity cannot be negligible in general. The earlier analytic study of Ref. [14] was revisited using direct numerical simulations of chiral MHD [15]. At large magnetic Reynolds numbers, the authors found clear evidence for a \(t^{-2/3}\) scaling of both \(\langle\mathbf{B}^{2}\rangle\) and \(k_{\rm p}\) at late times. They also found that the initial evolution is not exponential, as suggested in Ref. [14], but linear in time. However, they only considered the case where the initial fermion chirality was zero. When it is finite and balancing exactly the magnetic helicity, the magnetic field decays in a way similar to the case of a strong, nonhelical field [55], where the decay is governed by the conservation of the Hosking integral [56; 57; 58]. This integral describes the strength of magnetic helicity fluctuations on different length scales and has the dimensions of \(\,\text{cm}^{9}\,\text{s}^{-4}\), which implies the scalings \(\xi_{\rm M}\propto t^{4/9}\) and \(\langle\mathbf{B}^{2}\rangle\propto t^{-10/9}\)[56]. The general validity of the Hosking integral was further demonstrated by applying a corresponding analysis to the decay of a nonhelical magnetic field in neutron star crusts [59], where the magnetic field evolution is covered by the Hall effect [60]. Our goal here is to bridge the gap between the two extremes, where the initial chirality is either only in the fermions or only in the magnetic field, and to consider the intermediate case where fermion chirality and magnetic helicity balance to zero, extending the study of the present authors [55]. This is another case where the decay of \(\langle\mathbf{B}^{2}\rangle\) and \(k_{\rm p}\) are described by a correspondingly adapted Hosking integral of the total chirality. In the following, we therefore refer to the Hosking integral with the chiral chemical potential included as the "adapted" Hosking integral; see Ref. [55] for detail. As mentioned, our findings on the evolution of the system with vanishing total chirality has a significant impact on the present baryon asymmetry of the Universe. Another goal of the present paper is then to clarify how the non-trivial co-evolution of the magnetic field and fermion chirality affect the model space of axion inflation consistent with the present Universe, which has not been explored before. We begin, by presenting the basic equations and the mathematical setup of our simulations in Sect. II. 
We then discuss the parameter dependence of characteristic time scales, consider also the effect of spin flipping, and finally cases where the perfectly vanishing chirality balance is relaxed in Sect. III. Applications to the early Universe are discussed in Sect. IV. Conclusions are presented in Sect. V. ## II Chiral Magnetohydrodynamics ### Chiral magnetic effect Using Lorentz-Heaviside units, the Ampere-Maxwell equation for the QED-like model in the MHD limit (omitting the displacement current) reads \[\mathbf{\nabla}\times\mathbf{B}=\frac{1}{c}\mathbf{J}. \tag{1}\] The electric current \(\mathbf{J}\) is the sum of the Ohmic current and the chiral magnetic effect (CME) [61; 62; 63], \[\mathbf{J}=\frac{\sigma}{c}\left(c\mathbf{E}+\mathbf{u}\times\mathbf{B}\right)+\frac{e^{2}}{2 \pi^{2}\hbar^{2}c}\tilde{\mu}_{5}\mathbf{B}, \tag{2}\] where we consider the case with \(\tilde{\mu}_{5}\ll(e^{2}/\hbar c)k_{B}T\). By rewriting \(c\mathbf{E}=-\partial\mathbf{A}/\partial t\) in the Weyl gauge, \(e^{2}/4\pi\hbar c\equiv\alpha\), Eq. (2) is rewritten as \[\frac{\partial\mathbf{A}}{\partial t}=\frac{c^{2}}{\sigma}\left(\mu_{5}\mathbf{B}-\mathbf{ \nabla}\times\mathbf{B}\right)+\mathbf{u}\times\mathbf{B}, \tag{3}\] where we defined [6] \[\mu_{5}\equiv\frac{2\alpha}{\pi\hbar c}\tilde{\mu}_{5}. \tag{4}\] This expression agrees with Eq. (32) of Ref. [6], except for a factor of 2 resulting from our definition \(\tilde{\mu}_{5}=(\tilde{\mu}_{\rm R}-\tilde{\mu}_{\rm L})/2\) in terms of the chemical potentials for right- and left-handed fermions [43]. The additional \(4\pi\) factor in the numerator of the expression in Ref. [6] is a consequence of their use of cgs units. ### Model description and basic equations We perform simulations in a cubic domain of size \(L^{3}\) with side lengths \(L\) and triply-periodic boundary conditions. The mass in the domain is therefore constant, so the mean density \(\overline{\rho}\) is always equal to its initial value \(\rho_{0}\) and put to unity in all cases. The lowest wave number in the domain is \(k_{1}=2\pi/L\). Using \(N^{3}\) mesh points, the largest wave number in the simulations is the Nyquist wave number \(k_{\rm Ny}=k_{1}N/2\). In the following, we set \(c=1\), so \(\mathbf{J}=\mathbf{\nabla}\times\mathbf{B}\). To include the effects of the cosmic expansion with scale factor \(a(t)\propto t^{1/2}\) in the radiation-dominated era, which we assume to be a spatially flat Friedmann Universe, we use correspondingly scaled quantities and conformal time, \(\eta(t)=\int\mathrm{d}t/a(t)\), in which the evolution equations of MHD are the same as in the absence of expansion [64]. In order to obtain the physical quantities, we can simply normalize the corresponding comoving quantities with the appropriate powers of the scale factor \(a\). Furthermore, using \(\lambda=3\hbar(2\alpha/\pi k_{\rm B}T)^{2}\) and including spin flipping and spatial diffusion, our chiral anomaly equation is \[\frac{\partial\mu_{5}}{\partial\eta}+\mathbf{\nabla}\cdot(\mu_{5}\mathbf{u})=\frac{ \lambda}{\sigma}\left(\mathbf{J}-\mu_{5}\mathbf{B}\right)\cdot\mathbf{B}+D_{5}\nabla^{2} \mu_{5}-\Gamma\mu_{5}, \tag{5}\] where \(D_{5}\) is an empirical diffusion coefficients for the chiral chemical potential. 
Here we used the relationship between the chiral chemical potential and the number density, \[n_{5}\equiv n_{R}-n_{L}=2\times\frac{\tilde{\mu}_{5}}{6\hbar^{3}}(k_{B}T)^{2} =\frac{\pi\mu_{5}}{6\alpha\hbar^{2}}(k_{B}T)^{2}, \tag{6}\] and used \(J_{5}^{\mu}=(n_{5},n_{5}\mathbf{u}-D_{5}\mathbf{\nabla}n_{5})\) for the chiral 4-current. Owing to the chiral anomaly [10; 11], the total chirality is conserved in the absence of spin flipping interaction [3; 6]. It is then convenient to introduce the mean magnetic chirality equivalent as \[\langle\mu_{\rm M}\rangle\equiv\tfrac{1}{2}\lambda\langle\mathbf{A}\cdot\mathbf{B}\rangle, \tag{7}\] so that the conservation law derived from Eqs. (3) and (5) can be stated in the form \[\mu_{\rm tot}=\langle\mu_{5}\rangle+\langle\mu_{\rm M}\rangle={\rm const}. \tag{8}\] We complement Eqs. (3) and (5) by the momentum and continuity equations [6; 7; 65] \[\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}\eta} =\frac{2}{\rho}\mathbf{\nabla}\cdot(\rho\nu\mathbf{\mathsf{S}})-\frac{1} {4}\mathbf{\nabla}\ln\rho+\frac{\mathbf{u}}{3}\left(\mathbf{\nabla}\cdot\mathbf{u}+\mathbf{u} \cdot\mathbf{\nabla}\ln\rho\right)\] \[-\frac{\mathbf{u}}{\rho}\left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta \mathbf{J}^{2}\right]+\frac{3}{4\rho}\mathbf{J}\times\mathbf{B}, \tag{9}\] \[\frac{\partial\ln\rho}{\partial\eta} =-\frac{4}{3}\left[\mathbf{\nabla}+(\mathbf{\nabla}\ln\rho)\right]\cdot \mathbf{u}+\frac{1}{\rho}\left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta\mathbf{J}^{2} \right],\] where \(\mathrm{D}/\mathrm{D}\eta\equiv\partial/\partial\eta+\mathbf{u}\cdot\mathbf{\nabla}\) is the advective derivative, \(\mathsf{S}_{ij}=(\partial_{i}u_{j}+\partial_{j}u_{i})/2-\delta_{ij}\mathbf{\nabla} \cdot\mathbf{u}/3\) are the components of the rate-of-strain tensor, \(\nu\) is the viscosity, and \(p\) is the pressure, which is assumed to be proportional to the density, i.e., \(p=\rho c_{\rm s}^{2}\), with \(c_{\rm s}=1/\sqrt{3}\) being the sound speed for the ultrarelativistic fluid. For all our simulations, we use the Pencil Code [66], where the relevant equations are readily implemented. We use \(N^{3}=1024^{3}\) mesh points for most of the runs, and \(N^{3}=2048^{3}\) mesh points for one particular run. In a small number of cases, we have included the slope-limited diffusion (SLD) scheme of Ref. [67; 68]. In those cases, SLD acts in addition to the ordinary viscous and diffusive processes stated in the equations above, but prevents the code from crashing during an early more violent phase when the mesh resolution is insufficient to dissipate the energy at high wave numbers. At later times, however, this additional numerical device has little effect. Below, we demonstrate in one case that the solutions with and without SLD yield the same result. ### Diagnostic quantities We introduce two characteristic times in our simulations, which are the time scale of the CPI and the magnetic diffusion time, \[\eta_{\rm CPI}=\sigma\mu_{50}^{-2}\quad\text{and}\quad\eta_{\rm diff}=\sigma k_{ 0}^{-2}, \tag{10}\] respectively. The ratio \((\eta_{\rm diff}/\eta_{\rm CPI})^{1/2}=|\mu_{50}|/k_{0}\) characterizes the degree of scale separation between the scales of magnetic helicity and fermion chirality. We also define the turnover time of the energy-carrying eddies, which would determine the onset of the turbulence, \[\eta_{\rm turb}=\left(u_{\rm rms}^{\rm max}k_{0}\right)^{-1}, \tag{11}\] where \(u^{\rm max}_{\rm rms}\) is the maximum value (in time) of the rms velocity. 
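To build intuition for how Eqs. (3) and (5) exchange magnetic helicity and fermion chirality while respecting the conservation law (8), one can integrate a drastically reduced model: a single fully helical mode at wave number \(k\) (so that \(\langle\mathbf{A}\cdot\mathbf{B}\rangle=\langle\mathbf{B}^{2}\rangle/k\)), no velocity, no spatial diffusion, and no spin flipping. The following is only an illustrative sketch with arbitrary parameter values in code units (with \(c=1\)), not the Pencil Code setup used for the simulations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (made-up) parameters in code units:
sigma, lam, k = 100.0, 2.0e4, 10.0
mu5_0 = -200.0              # negative initial chiral chemical potential
H_0 = -2.0 * mu5_0 / lam    # positive helicity so that mu5 + lam*H/2 = 0 initially

def rhs(eta, y):
    H, mu5 = y
    B2 = k * H                               # fully helical single mode: <B^2> = k <A.B>
    dH = (2.0 / sigma) * (mu5 - k) * B2      # from Eq. (3) without the velocity term
    dmu5 = (lam / sigma) * (k - mu5) * B2    # from Eq. (5) without flux, diffusion, flipping
    return [dH, dmu5]

sol = solve_ivp(rhs, (0.0, 50.0), [H_0, mu5_0], rtol=1e-8, atol=1e-12)
H, mu5 = sol.y
print("total chirality mu5 + lam*H/2:",
      mu5[0] + lam * H[0] / 2, "->", mu5[-1] + lam * H[-1] / 2)
# Both the helicity and |mu5| decay together once the chiral magnetic effect
# becomes efficient at this scale (on a time of order sigma / (|mu5| k)),
# while their sum stays zero throughout.
```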
Next, we introduce several parameters with a dimension of velocity. The nature of the CPI is characterized by the following parameters [7] \[v_{\lambda}=|\mu_{50}|/(\overline{\rho}\lambda)^{1/2}\quad\text{and}\quad v_{\mu }=|\mu_{50}|/\sigma. \tag{12}\] The former represents the ratio of the length scale of the magnetic field at the saturation of the CPI to the CPI time scale, while the latter represents the ratio of the length scale of the initial instability to the CPI time scale. The ratio \(v_{\lambda}/v_{\mu}=\sigma/(\overline{\rho}\lambda)^{1/2}\) characterizes the length of the \(k^{-2}\) spectrum that develops if the CPI operates without a strong pre-existing field [7]. In the unbalanced case, \(u^{\rm max}_{\rm rms}\) is approximated by \(v_{\lambda}\). In the present case, however, it does not seem to play any role. Instead, to compute \(u^{\rm max}_{\rm rms}\), we approximate the velocity field by the initial magnetic field such that \(B^{2}_{\rm rms}\simeq\bar{\rho}u^{2}_{\rm rms}\). Using Eqs. (7) and (8), we estimate \[B^{(0)}_{\rm rms}\approx(k_{0}|\langle\mathbf{A}\cdot\mathbf{B}\rangle|)^{1/2}\approx \left(\frac{2k_{0}|\mu_{50}|}{\lambda}\right)^{1/2}, \tag{13}\] which thus defines a new quantity \(\tilde{v}_{\lambda}\) as \[\tilde{v}_{\lambda}\equiv\left(\frac{2k_{0}|\mu_{50}|}{\bar{\rho}\lambda} \right)^{1/2}\quad\left(\approx\frac{B^{(0)}_{\rm rms}}{\bar{\rho}^{1/2}} \right). \tag{14}\] A predictive estimate for the turnover time of the energy-carrying eddies is thus \[\eta_{\lambda}=(\tilde{v}_{\lambda}k_{0})^{-1}=\left(\frac{\bar{\rho}\lambda} {2k_{0}^{3}|\mu_{50}|}\right)^{1/2}, \tag{15}\] which is later used to predict the time when the inverse cascade sets in. In this work, an important diagnostics is the magnetic energy spectrum, \(E_{\rm M}(k)\). It is normalized such that \(\int E_{\rm M}(k)\,{\rm d}k=\langle\mathbf{B}^{2}\rangle/2\equiv\mathcal{E}_{\rm M}\) where \(\mathcal{E}_{\rm M}\) is the magnetic energy density1. The kinetic energy spectrum \(E_{\rm K}(k)\) is defined similarly, i.e., \(\int E_{\rm K}(k)\,{\rm d}k=\langle\rho\mathbf{u}^{2}\rangle/2\equiv\mathcal{E}_ {\rm K}\). We also define the magnetic helicity spectrum \(H_{\rm M}(k)\), which is normalized such that \(\int H_{\rm M}(k)\,{\rm d}k=\langle\mathbf{A}\cdot\mathbf{B}\rangle\). In our simulations, \(k|H_{\rm M}(k)|/2\) approaches \(E_{\rm M}(k)\) near the maximum. In fact, the spectra \(H_{\rm M}(k)\) and \(E_{\rm M}(k)\) satisfy the realizability condition [69], Footnote 1: In terms of the mode function in the polarization basis, \(\mathbf{A}(\mathbf{x},t)\equiv\int d^{3}k/(2\pi)^{3/2}\sum_{\lambda=\pm}A_{\lambda}( \mathbf{k},t)\mathbf{e}^{\lambda}(\mathbf{k})e^{i\mathbf{k}\mathbf{x}}\), \(E_{\rm M}\) is given as \(E_{\rm M}(k)=\sum_{s=\pm}(k^{4}/4\pi^{2})|A_{s}(k)|^{2}\). We also have \(H^{s}_{\rm M}(k)=(k^{3}/2\pi^{2})|A_{s}(k)|^{2}\) and \(H_{\rm M}(k)=(k^{3}/2\pi^{2})\sum_{s=\pm}s|A_{s}(k)|^{2}\). \[k|H_{\rm M}(k)|/2\leq E_{\rm M}(k). \tag{16}\] When this inequality is saturated for specific wave numbers, we say that the magnetic field is locally fully helical. After some time, the magnetic helicity spectrum is characterized by two subranges, one with positive and one with negative values of \(H_{\rm M}(k)\), which are separated by the wave number \(k_{\pm}(\eta)\), where the sign changes. 
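For reference, the shell-integrated spectra \(E_{\rm M}(k)\) and \(H_{\rm M}(k)\) and the realizability condition (16) can be evaluated from a snapshot of \(\mathbf{B}\) on a triply-periodic grid roughly as follows. This is a NumPy sketch assuming a cubic box of size \(L\) and a field array of shape (3, N, N, N); it is not the Pencil Code diagnostic itself:

```python
import numpy as np

def magnetic_spectra(B, L=2 * np.pi):
    """Shell-integrated E_M(k) and H_M(k) for a periodic field B[3, N, N, N]."""
    N = B.shape[1]
    k1 = 2 * np.pi / L
    Bk = np.fft.fftn(B, axes=(1, 2, 3)) / N**3          # Fourier amplitudes
    k = np.fft.fftfreq(N, d=1.0 / N) * k1                # physical wave numbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
    K2 = KX**2 + KY**2 + KZ**2
    K2[0, 0, 0] = 1.0                                     # avoid division by zero
    # Coulomb-gauge vector potential: A_k = i k x B_k / k^2
    Ak = 1j * np.stack([KY * Bk[2] - KZ * Bk[1],
                        KZ * Bk[0] - KX * Bk[2],
                        KX * Bk[1] - KY * Bk[0]]) / K2
    e_mode = 0.5 * np.sum(np.abs(Bk)**2, axis=0)          # energy per mode
    h_mode = np.sum(np.real(Ak * np.conj(Bk)), axis=0)    # helicity per mode
    shell = np.rint(np.sqrt(KX**2 + KY**2 + KZ**2) / k1).astype(int).ravel()
    EM = np.bincount(shell, weights=e_mode.ravel()) / k1  # so that sum(EM)*k1 = <B^2>/2
    HM = np.bincount(shell, weights=h_mode.ravel()) / k1  # so that sum(HM)*k1 = <A.B>
    kcen = np.arange(EM.size) * k1
    return kcen, EM, HM

# Realizability condition (16), up to shell-binning effects:
# kc, EM, HM = magnetic_spectra(B); assert np.all(kc * np.abs(HM) / 2 <= EM + 1e-12)
```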
In addition to the evolution of \(k_{\pm}(\eta)\), we characterize the spectrum and its evolution by the numbers \(k_{\rm I}(\eta)\) and \(k_{\rm II}(\eta)\), which are the wave numbers of the first positive and second negative peak of \(H_{\rm M}(k)\). The intermediate wave number \(k_{\pm}(\eta)\) is often better determined than \(k_{\rm II}(\eta)\), especially at early times. The wave number of the first peak of the spectrum is close to the initial inverse correlation length, \[\xi_{\rm M}=\mathcal{E}_{\rm M}^{-1}\int k^{-1}E_{\rm M}(k)\,{\rm d}k. \tag{17}\] In fully helical turbulence, the value of \(\xi_{\rm M}(\eta)\) tends to increase with time in a power law fashion, \(\xi_{\rm M}\propto\eta^{q}\), where \(q=4/9\) in our cases of balanced chirality [55]; see also Sec. II.5. Note that in our setup the positive helicity modes always dominate the energy density of the magnetic field, and hence approximately we have \(\xi_{\rm M}\simeq k_{\rm I}^{-1}\). It is convenient to introduce the mean magnetic chirality for the positive helicity modes for \(k<k_{\pm}\) and the negative ones for \(k>k_{\pm}\) as \[\langle\mu^{+}_{\rm M}\rangle =\frac{\lambda}{2}\int_{0}^{k_{\pm}}H_{\rm M}(k)\,{\rm d}k, \tag{18}\] \[\langle\mu^{-}_{\rm M}\rangle =-\frac{\lambda}{2}\int_{k_{\pm}}^{\infty}H_{\rm M}(k)\,{\rm d}k. \tag{19}\] The conservation law takes then the form \[\langle\mu_{5}\rangle+\langle\mu^{+}_{\rm M}\rangle-\langle\mu^{-}_{\rm M} \rangle=\mu_{\rm tot}, \tag{20}\] where \(\mu_{\rm tot}=\mu_{50}+\mu_{\rm M0}=\mu_{50}+\mu^{+}_{\rm M0}-\mu^{-}_{\rm M0}\) is given by the initial values. When we study the effect of spin flipping, we invoke a nonvanishing flipping rate with \[\Gamma=\begin{cases}\Gamma_{\rm f0}&\text{for}\quad\eta_{\rm flip}\leq\eta\leq \eta_{\rm off}\\ 0&\text{otherwise},\end{cases} \tag{21}\] where \(\eta_{\rm flip}\) denotes the time when spin flipping is turned on, and in a few cases we allow for a finite value of \(\eta_{\rm off}\), which denotes the time when spin flipping is later turned off again. ### Initial conditions In our numerical experiments, the initial magnetic field is fully helical with positive magnetic helicity and random phases. The initial magnetic energy spectrum is a broken power law \[E_{\rm M}(k,\eta_{0})\propto\begin{cases}k^{4}&\text{for}\;k<k_{0},\\ k^{-5/3}&\text{for}\;k>k_{0},\end{cases} \tag{22}\] where the initial peak is identified as \(k_{0}=k_{\rm I}(\eta_{0})\). The IR spectrum is motivated by causality constraints [70], while the UV spectrum is taken as a Kolmogorov-type spectrum. The strength of the magnetic field is adjusted such that the initial magnetic chirality obeys \(\mu_{\rm M0}=-\mu_{50}\) such that \(\mu_{\rm tot}=0\). The chiral chemical potential is initially assumed to be uniformly distributed in space. Its initial value is always negative, i.e., \(\mu_{50}<0\). However, even for an initially uniform chiral chemical potential, there is a specific length scale associated with the value of \(\mu_{5}\) through the wave number of the most unstable mode of the CPI, \(k=|\mu_{50}|/2\). The initial velocity is assumed vanishing in all cases. ### Theoretical predictions As was recently shown in Ref. [55], the present case of zero total chirality, where the magnetic helicity is canceled by fermion chirality, is remarkably similar to the case of ordinary MHD without chemical potential and zero magnetic helicity. 
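Continuing the sketch given after Eq. (16), the correlation length (17) and the signed helicity contributions (18)-(19) follow directly from the binned spectra. Again, this is illustrative only; \(\lambda\) and the sign-change wave number \(k_{\pm}\) are taken as given inputs:

```python
import numpy as np

def correlation_length(kc, EM):
    """xi_M from Eq. (17); the k=0 bin is excluded (the mean field vanishes)."""
    mask = kc > 0
    return np.sum(EM[mask] / kc[mask]) / np.sum(EM[mask])

def signed_chirality(kc, HM, lam, k_pm):
    """<mu_M^+> and <mu_M^-> from Eqs. (18)-(19), using the bin width as Delta k."""
    dk = kc[1] - kc[0]
    mu_plus = 0.5 * lam * np.sum(HM[kc < k_pm]) * dk
    mu_minus = -0.5 * lam * np.sum(HM[kc >= k_pm]) * dk
    return mu_plus, mu_minus

# Conservation check of Eq. (20): mean(mu5) + mu_plus - mu_minus should equal mu_tot.
```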
In both cases, as already alluded to in the introduction, one can define a correlation integral of the total chirality, which is a quantity with dimensions \(\,{\rm cm}^{9}\,{\rm s}^{-4}\) and is dubbed the (adapted) Hosking integral. The evolution of the system can be explained by the conservation of this quantity. Assuming self-similar evolution of the magnetic spectrum, this yields the scalings \(\xi_{\rm M}\propto\eta^{4/9}\) and \(\langle\mathbf{B}^{2}\rangle\propto\eta^{-10/9}\) for typical length scales and the magnetic energy density, respectively [56]. Note that the conservation of the adapted Hosking integral suggests \[\xi_{M}^{5}\langle\mathbf{B}^{2}\rangle^{2}={\rm const},\quad{\rm or}\quad k_{\rm I}^{-3}E_{M}(k_{\rm I})^{2}={\rm const}, \tag{23}\] if the magnetic energy density is dominated by the positive helicity mode, which is peaked at \(k=k_{\rm I}\). For a magnetic field with an IR spectrum \(\propto k^{4}\), as motivated by causality constraints, the evolution of the magnetic field exhibits inverse cascading. The big difference between ordinary MHD without helicity on the one hand and chiral MHD with helicity balanced by fermion chirality on the other hand is that in the latter, both the magnetic helicity and the fermion chirality are decaying, which we shall call the anomalous chirality cancellation (ACC). In the former, by contrast, the Hosking integral based just on the ordinary magnetic helicity density is conserved. In the latter, contrary to the naively expected exponential decay of fermion chirality due to the CPI in chiral MHD, we actually have a much slower power-law decay proportional to \(\eta^{-2/3}\), since the magnetic helicity is roughly estimated by \(\xi_{M}\langle\mathbf{B}^{2}\rangle\), and likewise for \(|\langle\mu_{5}\rangle|\) [55]. Here we have used the fact that the real space realizability condition of magnetic helicity [53], \(|\mathcal{H}_{\rm M}|\leq 2\mathcal{E}_{\rm M}\xi_{\rm M}\), is nearly saturated. Once this power law decay of the chirality starts, the CPI rate, \(\langle\mu_{5}\rangle^{2}/\sigma\), decays faster than \(\eta^{-1}\), which suggests that the CPI does not grow anymore. Hence the magnetic energy is always dominated by helicity modes with the same sign as the initial ones, which, in our case, are positive helicity modes. The adapted Hosking integral makes sense only when the communication between the helicity and chirality through the CME becomes effective at the characteristic scale. Therefore we expect that the scaling evolution discussed above starts at the time scale of the CME at the initial peak scale. With the evolution equation for the magnetic field, equivalent to Eq. (3), \[\frac{\partial\mathbf{B}}{\partial\eta}=\frac{1}{\sigma}\left[\nabla^{2}\mathbf{B}+\mathbf{\nabla}\times(\mu_{5}\mathbf{B})\right]+\mathbf{\nabla}\times(\mathbf{u}\times\mathbf{B}) \tag{24}\] (where the second term on the right-hand side represents the CME), we estimate \(\eta_{\rm ACC}\) as the time when the following condition is satisfied: \[\eta_{\rm ACC}\simeq\frac{\sigma}{\mu_{5}(\eta_{\rm ACC})k_{\rm I}(\eta_{\rm ACC})}. \tag{25}\]

Figure 1: Visualizations of \(\mathbf{A}\cdot\mathbf{B}\,\lambda/2\) (upper row) and \(\mu_{5}\) (lower row) on the periphery of the computational domain for Run O at \(\eta\approx 4600\) (left), 15,000 (middle), and 46,000 (right).
Note that from Eq. (24) we can also confirm that the magnetic field has an instability (the CPI) for one of the two circular polarization modes, with \(k=|\mu_{50}|/2\) being the most unstable mode. The instability rate is roughly given as \(\mu_{50}^{2}/\sigma\), which determines \(\eta_{\rm CPI}\). The evolution of the system is classified into two cases, determined by the comparison between \(\eta_{\lambda}\) and \(\eta_{\rm ACC}\) estimated by the initial conditions of \(k_{\rm I}\) and \(\mu_{5}\). Since \(\bar{\rho}\simeq(\pi^{2}g_{*}/30)T^{4}\), where \(g_{*}\) is the number of relativistic degrees of freedom, and \(\sigma\simeq 10^{2}T\) [71; 72], we have \(\eta_{\rm ACC}<\eta_{\lambda}\) for \(k_{0}\ll|\mu_{50}|\) [more precisely, \(k_{0}\ll(\bar{\rho}\lambda/4\sigma^{2})|\mu_{50}|\), which is independent of temperature], and vice versa. For \(k_{0}\ll|\mu_{50}|\), we have the following estimate for the evolution of the system:

1. The system is frozen when \(\eta<\eta_{\rm CPI}\).
2. The CPI starts to grow at \(\eta\simeq\eta_{\rm CPI}\).
3. If the CPI does not sufficiently amplify the negative helicity modes, the chiral chemical potential starts to decay at \(\eta=\eta_{\rm ACC}(>\eta_{\rm CPI})\) with \[\eta_{\rm ACC}\simeq\frac{\sigma}{|\mu_{50}|k_{0}}\] (26) in a mild way.
4. When \(\eta\simeq\eta_{\lambda}(>\eta_{\rm ACC})\), the system starts to evolve according to the scaling law found in Ref. [55], \[k_{\rm I}\propto\eta^{-4/9},\quad\mathcal{E}_{\rm M}\propto\eta^{-10/9},\quad\text{and}\] (27) \[\langle\mu_{5}\rangle=-\langle\mu_{\rm M}^{+}\rangle+\langle\mu_{\rm M}^{-}\rangle\propto\eta^{-2/3}.\] (28)

For \(k_{0}\gg|\mu_{50}|\), on the other hand, we expect the following evolution of the system:

1. The system is frozen at \(\eta<\eta_{\lambda}\).
2. The magnetic field evolves according to the inverse cascade at \(\eta\simeq\eta_{\lambda}\), in a similar way to the usual inverse cascade for a nonchiral helical magnetic field, \[k_{\rm I}\propto\eta^{-2/3},\quad\mathcal{E}_{\rm M}\propto\eta^{-2/3},\quad\text{and}\] (29) \[\langle\mu_{5}\rangle=-\langle\mu_{\rm M}^{+}\rangle+\langle\mu_{\rm M}^{-}\rangle=\text{const},\] (30) since the CME is not effective at \(k\simeq k_{\rm I}\), so that the magnetic helicity and chirality are individually conserved.
3. The CME becomes effective at \(\eta\simeq\eta_{\rm ACC}\) (\(>\eta_{\lambda}\)), which is now evaluated as \[\eta_{\rm ACC}\equiv\frac{\sigma^{3}}{|\mu_{50}|^{3}k_{0}^{3}}\eta_{\lambda}^{-2}\simeq\frac{2\sigma^{3}}{\bar{\rho}\lambda\mu_{50}^{2}}.\] (31) Here we have used Eq. (25) and \(k_{\rm I}(\eta)=k_{0}(\eta/\eta_{\lambda})^{-2/3}\), as well as Eq. (14). When \(\eta>\eta_{\rm ACC}\) we have the inverse cascade with the conservation of the adapted Hosking integral, \[k_{\rm I}\propto\eta^{-4/9},\quad\mathcal{E}_{\rm M}\propto\eta^{-10/9},\quad\text{and}\] (32) \[\langle\mu_{5}\rangle=-\langle\mu_{\rm M}^{+}\rangle+\langle\mu_{\rm M}^{-}\rangle\propto\eta^{-2/3}.\] (33) Note that in this case the CPI would not grow much due to the earlier onset of the chirality decay.

In Table 1, we summarize the characteristic time scales relevant for the evolution of the system. The features described above will be confirmed by direct numerical simulations in the next section. They can have important consequences for baryon production, as will be discussed at the end of the paper.

## III Results

In this section, we show the results of the direct numerical simulations. We first study the case with \(k_{0}\ll|\mu_{50}|\) until Sec. III.7.
In Sec. III.8 we study the case with \(k_{0}\gg|\mu_{50}|\). Some of our observations will turn out to be consistent with the theoretical prediction discussed in Sec. II.5. We will also see some other features, which have not been addressed there.

### Visualization of magnetic and fermion chiralities

We begin by discussing the simulation of Ref. [55] with \(k_{0}\ll|\mu_{50}|\), which we refer to as Run O. In Fig. 1, we present visual impressions of magnetic and fermion chiralities in Run O at different times. We see that the turbulent structures gradually grow in size and the extreme values away from zero decrease as time goes on. Furthermore, \(\mu_{5}\) and \(\mathbf{A}\cdot\mathbf{B}\,\lambda/2\) have predominantly opposite signs, as expected. Locally, however, there is no correspondence between the two fields. This is because the vanishing total chirality is only a statistical property.

### Evolution of characteristic scales

As discussed in Ref. [55], it is important to allow for sufficient scale separation between the smallest available wave number \(k_{1}\equiv 2\pi/L\) and the initial wave number of the peak, \(k_{0}\). It is also important that there is enough separation between \(k_{0}\) and the initial wave number of the CPI, \(|\mu_{50}|/2\), to confirm distinct features of the evolution of the system. Both \(k_{0}\) and \(|\mu_{50}|/2\), in turn, must be much smaller than the largest available wave number \(k_{\rm Ny}=k_{1}N/2\). Sufficient scale separation between \(k_{1}\) and \(k_{0}\) is particularly important for obtaining the theoretically expected increase of \(\xi_{\rm M}\propto\eta^{4/9}\) along with the decay of \(\mathcal{E}_{\rm M}\propto\eta^{-10/9}\), based on the conservation of the Hosking integral adapted to the total chirality. Indeed, in Run O, an optimized balance between the two scale separation requirements has been achieved. With the start of the simulation, the helical random magnetic field, which is present initially, drives turbulent motions through the Lorentz force. Those motions are in approximate equipartition with the magnetic field at high wave numbers; see Fig. 2, where we compare kinetic and magnetic energy spectra at different times. In this figure, we also mark the two scale separation ratios. This observation supports the estimate of \(u_{\rm rms}^{\rm max}\) in terms of \(\tilde{v}_{\lambda}\); see Eq. (14). As already discussed in Ref. [55], even though there is vanishing net chirality, \(\langle\mu_{\rm M}\rangle+\langle\mu_{5}\rangle=0\), there is still some degree of inverse cascading, just like in nonhelical magnetically dominated turbulence [73; 54]. We see this clearly in Fig. 3, where the position of the magnetic peak, \(k_{\rm I}(\eta)\), gradually moves to smaller values. At the same time, the height of the peak decreases, following an approximate power law \(\propto k^{\beta}\), with \(\beta=3/2\); see Fig. 3. This can be explained by the conservation of the Hosking integral [56; 58]; see also Eq. (23). The exponent \(\beta=3/2\) is characteristic of the fact that the net chirality vanishes, even though near the peak itself the field is locally fully helical, as we see from the proximity of \(k|H_{\rm M}(k)|/2\) and \(E_{\rm M}(k)\); see Eq. (16). The newly injected magnetic helicity from the CPI leads to a growth of the magnetic field at large wave numbers. It manifests itself mostly through the build-up of negative magnetic helicity at high wave numbers.
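The characteristic wave numbers \(k_{\rm I}\), \(k_{\rm II}\), and \(k_{\pm}\), as well as the correlation length \(\xi_{\rm M}\) of Eq. (17), can be read off from discrete spectra in a few lines. The sketch below is illustrative only; it assumes spectra on uniform shells with \(k>0\) (e.g., as returned by a routine like the one above) and a single positive and a single negative helicity subrange.

```python
import numpy as np

def characteristic_wavenumbers(k, EM, HM):
    """Peak of the positive helicity subrange (k_I), peak of the negative one (k_II),
    sign-change wave number (k_pm), and the correlation length xi_M of Eq. (17).
    Assumes k > 0 on uniform shells and a single +/- sign change of H_M(k)."""
    k_I  = k[np.argmax(HM)]                    # first (positive) peak of H_M
    k_II = k[np.argmin(HM)]                    # second (negative) peak of H_M
    crossings = np.where(np.diff(np.sign(HM)) < 0)[0]
    k_pm = 0.5*(k[crossings[0]] + k[crossings[0] + 1]) if crossings.size else np.nan
    xi_M = np.trapz(EM/k, k)/np.trapz(EM, k)   # Eq. (17)
    return k_I, k_II, k_pm, xi_M
```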
At some point, we also see a gradual propagation of the secondary peak \(k_{\rm II}\) toward smaller \(k\), which has not been addressed in Sec. II.5. It lies underneath an envelope with an approximate \(k^{8/3}\) slope; see Fig. 3. At present, the exponent \(8/3\) is just empirical and there is no theory for it. It should be noted, however, that in other cases with a shorter inertial range, we have found larger exponents. Thus, the exponent could also be smaller when the inertial range is larger, i.e., when there is more scale separation and \(\langle\mu_{5}\rangle\xi_{\rm M}\) is larger. Another characteristic wave number is \(k_{\pm}\), where the sign of the spectral magnetic helicity changes. It is used in the definitions of \(\langle\mu_{\rm M}^{+}\rangle\) and \(\langle\mu_{\rm M}^{-}\rangle\) in Eqs. (18) and (19). In Fig. 4, we plot the evolution of the characteristic wave numbers \(k_{\rm I}\), \(k_{\pm}\), and \(k_{\rm II}\). We clearly see the \(k_{\rm I}\propto\eta^{-4/9}\) decay predicted by the conservation of the Hosking integral adapted to the total chirality [55]. It emerges after a time \(\eta_{\rm I}\), which is expected to be close to \(\eta_{\lambda}\) (and also \(\eta_{\rm turb}\)); see Eq. (15). In Run O we find \(\eta_{\rm I}\approx 100\). The evolution of \(k_{\pm}\) and \(k_{\rm II}\) can be seen more clearly when the Nyquist wave number is larger. We therefore discuss in Fig. 5 another run, also with \(N=1024^{3}\) mesh points, but now with \(k_{1}=0.05\) (instead of 0.02), so \(k_{\rm Ny}=25.6\), which is now a little over five times larger than \(|\mu_{50}|/2=5\). In Table 2, this run is referred to as Run I, which differs from the previously discussed Run O mainly in the value of \(k_{1}\). It also has a shallower scaling of the correlation length, \(\xi_{\rm M}\propto k_{\rm I}^{-1}\propto\eta^{1/3}\), which seems to be an artifact caused by insufficient scale separation, i.e., the value of \(k_{1}\) is not sufficiently small. Empirically, we find that if \(k_{0}/k_{1}\gg 20\), there is an inverse cascade with \(\xi_{\rm M}\propto k_{\rm I}^{-1}\propto\eta^{4/9}\). The parameters \(\eta_{\rm I}\), \(\eta_{\pm}^{\rm(i)}\), and \(\eta_{\pm}^{\rm(ii)}\), listed in Table 2, are discussed below.

\begin{table} \begin{tabular}{l l l l} Time scale & Expression & Equation & Explanation \\ \hline \(\eta_{\rm CPI}\) & \(\sigma\mu_{50}^{-2}\) & Eq. (10) & time scale of the CPI \\ \(\eta_{\rm diff}\) & \(\sigma k_{0}^{-2}\) & Eq. (10) & magnetic diffusion time \\ \(\eta_{\rm turb}\) & \((u_{\rm rms}^{\rm max}\,k_{0})^{-1}\) & Eq. (11) & turnover time of the energy-carrying eddies \\ \(\eta_{\lambda}\) & \((\tilde{v}_{\lambda}k_{0})^{-1}=[\bar{\rho}\lambda/(2k_{0}^{3}|\mu_{50}|)]^{1/2}\) & Eq. (15) & _predicted_ turnover time of the energy-carrying eddies \\ \(\eta_{\rm ACC}\) & \(\sigma/[\mu_{5}(\eta_{\rm ACC})k_{\rm I}(\eta_{\rm ACC})]\) & Eq. (25) & onset time of the ACC \\ \end{tabular} \end{table} Table 1: Relevant time scales defined in this paper.

Figure 2: Magnetic energy (red lines) and kinetic energy (blue lines) spectra for Run O at times \(\eta=460\), \(4600\), and \(46{,}000\). The dotted lines denote the earliest outputted time \(\eta=0.3\). In the upper part, the two-sided arrows indicate the requirements for scale separation at small and large \(k\) to obtain the \(k_{\rm p}\propto\eta^{-4/9}\) decay and to resolve \(|\mu_{50}|\), respectively.
We also give here the values of \(v_{\mu}\) and \(v_{\lambda}/v_{\mu}\), as well as the number of mesh points and the length of the run, \(\eta_{\rm max}\), which can be useful in assessing the reliability of the numerical results. Run O' is similar to Run O, except that here, SLD has been added. The two runs are virtually indistinguishable. The evolution of the peaks of the spectrum can be summarized as follows. (i) After the start of the run, the CPI induces a growth of the negative helicity modes at the secondary peak \(k_{\rm II}\), which stays constant until \(\eta=\eta_{\pm}^{\rm(i)}\), and then starts to decrease with time in a power law fashion, \(k_{\rm II}\propto\eta^{-q_{\rm IIa}}\) with \(q_{\rm IIa}\approx 1\) in all cases. (ii) The original large-scale spectrum is unchanged until some time \(\eta=\eta_{\rm I}\) and then starts to decrease via an inverse cascade with \(k_{\rm I}(\eta)\propto\eta^{-q_{\rm I}}\), where \(q_{\rm I}\) is expected to be equal to the exponent \(q=4/9\) found in Ref. [55]. (iii) At time \(\eta=\eta_{\pm}^{\rm(ii)}\), the decay of the secondary peak becomes slower with a smaller index, \(k_{\rm II}\propto\eta^{-q_{\rm IIb}}\), with \(q_{\rm IIb}<q_{\rm IIa}\approx 1\). Those parameters are summarized in Table 2. The plot of characteristic wave numbers \(k_{\rm I}\), \(k_{\pm}\), and \(k_{\rm II}\) in Fig. 5 shows three distinct times \(\eta_{\pm}^{\rm(i)}\lesssim\eta_{\rm I}\lesssim\eta_{\pm}^{\rm(ii)}\), where \(k_{\pm}\) begins to decrease first rapidly, at \(\eta=\eta_{\pm}^{\rm(i)}\), and later, at \(\eta=\eta_{\pm}^{\rm(ii)}\), more slowly, approximately like \(\eta^{-4/9}\), i.e., \(q_{\rm IIb}\approx q=4/9\). The decay of \(k_{\rm II}\) closely follows that of \(k_{\pm}\). The decay of \(k_{\rm I}\), on the other hand, does not show the rapid decay phase that we see in \(k_{\pm}\) and \(k_{\rm II}\), but turns directly into the approximate \(\eta^{-4/9}\) decay at \(\eta=\eta_{\pm}^{\rm(i)}\).

### Onset of inverse cascading

It is of interest to vary the separation between \(|\mu_{50}|/2\) and \(k_{0}\) to see the dependence of the relevant characteristic times on these wave numbers.
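As a rough guide for such parameter scans, the time scales collected in Table 1 can be evaluated directly from the run parameters. The following sketch does this in code units, assuming \(\bar{\rho}=1\); the Run O values of \(k_{0}\), \(\mu_{50}\), and \(\sigma\) are taken from Table 2, while the value of \(\lambda\) in the example call is purely a placeholder, since it is not quoted here.

```python
import numpy as np

def time_scales(k0, mu50, sigma, lam, rho=1.0):
    """Characteristic conformal time scales of Table 1 (code units, rho = 1 assumed)."""
    eta_CPI  = sigma/mu50**2                          # CPI time scale
    eta_diff = sigma/k0**2                            # magnetic diffusion time
    eta_lam  = np.sqrt(rho*lam/(2*k0**3*abs(mu50)))   # predicted eddy turnover time, Eq. (15)
    if k0 < abs(mu50):                                # onset of the ACC
        eta_ACC = sigma/(abs(mu50)*k0)                # Eq. (26)
    else:
        eta_ACC = 2*sigma**3/(rho*lam*mu50**2)        # Eq. (31)
    return dict(eta_CPI=eta_CPI, eta_diff=eta_diff, eta_lambda=eta_lam, eta_ACC=eta_ACC)

# Example: Run O has k0 = 1, mu50 = -10, sigma^{-1} = 2e-4; lam = 4e4 is only a placeholder.
print(time_scales(k0=1.0, mu50=-10.0, sigma=5.0e3, lam=4.0e4))
```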
We have performed simulations for different values and consider runs where \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c} Run & \(k_{0}\) & \(-\mu_{50}\) & \(\sigma^{-1}\) & SLD & \(\eta_{\rm CPI}\) & \(\eta_{\rm diff}\) & \(\eta_{\mu_{\pm}^{\rm(i)}}\) & \(q\) & \(\eta_{\rm flip}\) & \(\eta_{\pm}^{\rm(i)}\) & \(\eta_{\mu_{\pm}^{\rm(i)}}\) & \(\eta_{\mu_{\pm}^{\rm(i)}}\) & \(\eta_{\mu_{\pm}^{\rm(i)}}\) & \(|\mu_{\rm M}^{-}\rangle|^{\rm max}\) & \(k_{\pm}/k_{1}\) & \(k_{\rm II}/k_{\rm I}\) & \(v_{\lambda}\) & \(\eta_{\rm rms}^{\rm max}\) & \(k_{1}\) \\ \hline VI & 1 & 160 & \(5\times 10^{-4}\) & no & 0.08 & \(2\times 10^{3}\) & 3 & 1/3 & — & 0.3 & 45 & 14 & 7 & 43 & 1.4 & 2.0 & 1.13 & 0.095 & 0.2 \\ V & 1 & 80 & \(5\times 10^{-4}\) & no & 0.3 & \(2\times 10^{3}\) & 4 & 1/3 & — & 0.5 & 55 & 40 & 4 & 13 & 1.5 & 1.8 & 0.57 & 0.050 & 0.2 \\ IV & 1 & 50 & \(5\times 10^{-4}\) & no & 0.8 & \(2\times 10^{3}\) & 6 & 1/3 & — & 1.6 & 50 & 55 & 50 & 5.1 & 1.5 & 2.0 & 0.35 & 0.050 & 0.1 \\ III & 1 & 30 & \(5\times 10^{-4}\) & no & 2.2 & \(2\times 10^{3}\) & 14 & 1/3 & — & 1.7 & 80 & 108 & 150 & 1.5 & 1.8 & 2.3 & 0.21 & 0.018 & 0.2 \\ II+ & 1 & 20 & \(5\times 10^{-4}\) & no & 5 & \(2\times 10^{3}\) & 18 & 1/3 & — & 6 & 75 & 160 & 200 & 0.46 & 1.8 & 3.1 & 0.141 & 0.031 & 0.1 \\ II & 1 & 20 & \(2\times 10^{-4}\) & no & 12.5 & \(5\times 10^{3}\) & 60 & 1/3 & — & 9 & 75 & 120 & 200 & 0.009 & 4.4 & 6.7 & 0.141 & 0.032 & 0.1 \\ II\(-\) & 1 & 20 & \(1\times 10^{-4}\) & no & 25 & \(1\times 10^{4}\) & 160 & 1/3 & — & 14 & 75 & 140 & 200 & 0.003 & 6.7 & 10 & 0.141 & 0.031 & 0.1 \\ I & 1 & 10 & \(2\times 10^{-4}\) & no & 50 & \(5\times 10^{3}\) & 125 & 1/3 & — & 30 & 80 & 160 & 250 & 0.009 & 6.7 & 9.6 & 0.071 & 0.030 & 0.05 \\ O & 1 & 10 & \(2\times 10^{-4}\) & no & 50 & \(5\times 10^{3}\) & 125 & 4/9 & — & 20 & 70 & 120 & 300 & 0.008 & 7.3 & 9.5 & 0.071 & 0.0123 & 0.02 \\ O’ & 1 & 10 & \(2\times 10^{-4}\) & yes & 50 & \(5\times 10^{3}\) & 125 & 4/9 & — & 20 & 110 & 120 & 400 & 0.015 & 6.4 & 8.4 & 0.071 & 0.0103 & 0.02 \\ L & 1 & 10 & \(2\times 10^{-4}\) & yes & 50 & \(5\times 10^{3}\) & 125 & 4/9 & — & 180 & 400 & 500 & 1000 & 0.027 & 6.7 & 8.7 & 0.071 & 0.0079 & 0.01 \\ M & 1 & 7 & \(2\times 10^{-4}\) & yes & 102 & \(5\times 10^{3}\) & 165 & 1/3 & — & 260 & 220 & 800 & 800 & 0.006 & 5.8 & 7.2 & 0.049 & 0.0065 & 0.01 \\ N & 1 & 5 & \(2\times 10^{-4}\) & yes & 200 & \(5\times 10^{3}\) & 235 & 1/3 & — & 350 & 200 & 800 & 1000 & 0.0015 & 6.3 & 7.8 & 0.035 & 0.0055 & 0.01 \\ N’ & 1 & 5 & \(2\times 10^{-4}\) & yes & 200 & \(5\times 10^{3}\) & 200 & 4/9 & — & — & 800 & 3000 & 15000 & 0.0004 & 5.5 & 2.4 & 0.035 & 0.0035 & 0.005 & 0.005 \\ F & 1 & 5 & \(2\times 10^{-4}\) & yes & 200 we change \(k_{\rm I}\) and keep \(k_{50}\) fixed, and others where we change \(k_{50}\) and keep \(k_{0}\) fixed. It both cases, of course, since we want to satisfy \(\langle\mu_{5}\rangle+\langle\mu_{\rm M}\rangle={\rm const}\), we need to adjust the amplitude of the initial magnetic field correspondingly. The results are summarized in Table 2 and plotted in Figs. 6 and 7. One may presume that \(\eta_{\pm}^{(\rm i)}\) is roughly estimated by \(\eta_{\rm CPI}\) since the grow of negative helicity modes becomes effective at that time. We see, however, that, while \(\eta_{\pm}^{(\rm i)}\) decreases quadratically with increasing \(|\mu_{50}|\), the dependence on \(\eta_{\rm CPI}=\sigma\mu_{50}^{-2}\) is shallower than linear and follows approximately an \(\eta_{\rm CPI}^{2/3}\) scaling; see Fig. 6. 
Thus, \(k_{\rm II}\) starts to decline more rapidly when \(|\mu_{50}|\) is large, although it is unclear why this exponent is here \(\approx 2/3\). On the other hand, we see that the five data points with \(k_{1}=0.01\) (Runs L, M, N, J, and J" with smaller \(|\mu_{50}|\)) lie on another \(\eta_{\rm CPI}^{2/3}\) line that is shifted upward by a factor of about 6 relative to the runs with larger \(k_{1}\). The reason for this is that for large values of \(\eta_{\rm CPI}\), it became necessary to decrease the value of \(k_{1}\). This decreased the Nyquist wave number since \(N\) remained unchanged, which can cause artifacts in the values of \(k_{\pm}\). Small values of \(k_{1}\) also facilitate the \(\eta^{4/9}\) scaling of \(\xi_{\rm M}\) and related length scales; see the comparison between Runs N and N' in Table 2. This shows that \(\eta_{\pm}^{\rm(i)}\) is currently very sensitive to these restrictions, which will be alleviated in the future with larger computational power. Nevertheless, there is clearly a trend for an uprise in the dependence of \(\eta_{\pm}^{\rm(i)}\) on \(\eta_{\rm CPI}\) for large values. Next, we examine the dependence of \(\eta_{\rm I}\) and \(\eta_{\pm}^{\rm(ii)}\) on \(k_{0}\) and \(\mu_{50}\). Figure 6 shows that the time \(\eta_{\rm I}\) of the onset of the decline of \(k_{\rm I}\) does not strongly depend on the value of \(\mu_{50}\). Likewise, the time \(\eta_{\pm}^{\rm(ii)}\) when the decay of \(k_{\rm II}\) slows down does not strongly depend on \(\mu_{50}\). Again, however, there is an upward shift of data points for the four runs for which \(k_{1}=0.01\). As discussed in Sec. II.5, we expect that \(\eta_{\rm I}\) is close to \(\eta_{\rm turb}\) and \(\eta_{\lambda}\). The upper two panels of Fig. 7 show the dependence of \(\eta_{\pm}^{\rm(i)}\), \(\eta_{\rm I}\), and \(\eta_{\pm}^{\rm(ii)}\) on \(\eta_{\rm turb}\) and \(\eta_{\lambda}\), respectively. From these plots, we estimate that \[\eta_{\rm I}\approx 1.4\,\eta_{\rm turb}\approx 2.2\,\eta_{\lambda}. \tag{34}\] In the lowest panel of Fig. 7, we also show the relation between \(\eta_{\rm turb}\) and \(\eta_{\lambda}\), i.e., \[\eta_{\rm turb}\approx 1.6\,\eta_{\lambda}, \tag{35}\] which shows the validity of the estimate of \(u_{\rm rms}^{\rm max}\) in terms of \(\tilde{v}_{\lambda}\). Equation (34) is useful for estimating the properties of magnetic field strength and coherence length at later times. Therefore, we conclude that the numerical results support, at least for a moderate scale separation, \(1<|\mu_{50}|/k_{0}\lesssim\mathcal{O}(10)\), the theoretical estimate for the evolution of the characteristic scales given in Sec. II.5, with a more accurate determination of the time of the onset of the scaling evolution, Eq. (34).

### Evolution of \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}^{\pm}\rangle\)

We now discuss how the chirality of the system evolves. Using Eqs. (18) and (19), we divide the magnetic helicity into \(\langle\mu_{\rm M}^{+}\rangle\) and \(\langle\mu_{\rm M}^{-}\rangle\).
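A minimal sketch of this split for a discrete helicity spectrum is given below; it assumes the sign-change wave number \(k_{\pm}\) has already been located (e.g., with the helper sketched earlier) and simply applies Eqs. (18)–(20).

```python
import numpy as np

def mean_magnetic_chiralities(k, HM, k_pm, lam):
    """<mu_M^+> and <mu_M^-> from Eqs. (18) and (19) for a discrete spectrum H_M(k)."""
    low, high = k <= k_pm, k > k_pm
    mu_M_plus  =  0.5*lam*np.trapz(HM[low],  k[low])
    mu_M_minus = -0.5*lam*np.trapz(HM[high], k[high])
    return mu_M_plus, mu_M_minus

def total_chirality(mu5, mu_M_plus, mu_M_minus):
    """Left-hand side of the conservation law, Eq. (20); it should remain equal to mu_tot."""
    return mu5 + mu_M_plus - mu_M_minus
```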
The typical evolution of \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}^{\pm}\rangle\) is as follows. (i) \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}^{+}\rangle\) stay constant until the time \(\eta=\eta_{\mu_{\rm M}^{+}}\), when the ACC commences, after which they exhibit a power-law decay. (ii) \(\langle\mu_{\rm M}^{-}\rangle\) grows until the time \(\eta=\eta_{\mu_{\rm M}^{-}}\) and then decays. As discussed in Sec. II.5, the decay of \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}^{+}\rangle\) due to the ACC is expected to be like \(\eta^{-2/3}\). In Fig. 8, we have overplotted the asymptotic \(\eta^{-2/3}\) decay laws of magnetic helicity with results of some of the representative numerical runs (Runs II, J, and G), which clearly shows that the numerical results support the theoretical prediction. The decay of \(\langle\mu_{\rm M}^{-}\rangle\) is faster than that of \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}^{+}\rangle\) and follows an approximate \(\eta^{-4/3}\) law, resulting in a decay of the ratio \(\langle\mu_{\rm M}^{-}\rangle/\langle\mu_{\rm M}^{+}\rangle\propto\eta^{-2/3}\). Therefore, unless \(\langle\mu_{\rm M}^{-}\rangle\) becomes comparable to \(\langle\mu_{\rm M}^{+}\rangle\) when the growth stops, a complete cancellation between \(\langle\mu_{\rm M}^{-}\rangle\) and \(\langle\mu_{\rm M}^{+}\rangle\) never occurs. The production of \(\langle\mu_{\rm M}^{-}\rangle\) is expected to be a result of the CPI. We now address the question of how much \(\langle\mu_{\rm M}^{-}\rangle\) is being produced and what its maximum value depends on. Figure 8 shows that \(\langle\mu_{\rm M}^{-}\rangle\) is generally rather small, and at least for \(|\mu_{50}|/k_{0}\lesssim 20\) there is always a strong imbalance between \(|\langle\mu_{\rm M}^{+}\rangle|\) and \(|\langle\mu_{\rm M}^{-}\rangle|\), which never enters a phase with a near-complete cancellation. To see whether this is related to the value of the conductivity, we compare simulations with different values of \(\sigma\). It turns out that runs with smaller magnetic diffusivity (\(\sigma^{-1}=10^{-4}\)) result in an even larger imbalance, while those with a larger diffusivity (\(\sigma^{-1}=5{\times}10^{-4}\)) have a smaller imbalance; compare Runs II+, II, and II\(-\) in Table 2. Before closing this section, let us comment on another trend in the numerical runs we conducted regarding the absence of a near-complete cancellation between \(\langle\mu_{\rm M}^{-}\rangle\) and \(\langle\mu_{\rm M}^{+}\rangle\). For Runs III-VI, the ratio \(|\langle\mu_{\rm M}^{-}\rangle|^{\rm max}/|\mu_{50}|\) becomes rather large. This could be due to the very large scale separation of \(k_{0}\) and \(|\mu_{50}|\). This suggests a possibility that the CPI completes the cancellation between the magnetic helicity and chirality immediately. However, the positive and negative helicity modes are distributed at separate length scales, with the negative ones sitting at smaller length scales, where they receive a stronger magnetic diffusion.

Figure 8: Comparison of \(\langle\mu_{\rm M}^{\pm}\rangle\) for Run II (red lines), Run J (blue lines), and Run G (orange lines). The times \(\eta_{\rm ACC}\) are marked by the correspondingly colored filled symbol at the crossing points of the extrapolated \(\eta^{-2/3}\) decay law with the initially constant values, indicated by the dotted line. The \(\eta^{-4/3}\) decay law for \(\langle\mu_{\rm M}^{-}\rangle\) is shown as the dashed-dotted line.
Therefore we expect the cancellation not to be complete and that the two helicity modes decay in a power-law fashion rather than exponentially, though the scaling index can be different from \(-2/3\). In order to investigate the evolution of the system in such extreme cases, \(|\mu_{50}|/k_{0}\gg{\cal O}(10)\), we need to have a sufficiently large box size to realize the corresponding scale separation. A detailed study is left for future work.

### Onset of ACC

In Fig. 9, we show the dependence of \(\eta_{\mu_{\rm M}^{+}}\) and \(\eta_{\mu_{\rm M}^{-}}\) on \(\eta_{\rm ACC}=\sigma/|\mu_{50}k_{0}|\) (for the case \(k_{0}\ll|\mu_{50}|\); see Eq. (26)). It turned out that \(\eta_{\mu_{\rm M}^{+}}\) increases with \(\eta_{\rm ACC}=\sigma/|\mu_{50}k_{0}|\) such that \[\eta_{\mu_{\rm M}^{+}}\approx 0.2\,\eta_{\rm ACC}=0.2\,\sigma/|\mu_{50}k_{0}| \tag{36}\] provides a good description of the data, which supports the discussion in Sec. II.5. Furthermore, \(\eta_{\mu_{\rm M}^{-}}\) shows an approximately linear dependence on \(\eta_{\rm ACC}\). This is reasonable because the CPI becomes ineffective when the ACC onsets, such that \(\langle\mu_{\rm M}^{-}\rangle\) is no longer amplified by the CPI after that.

### The scale ratios \(k_{\pm}/k_{\rm I}\) and \(k_{\rm II}/k_{\rm I}\)

We also mention another observation in the case with \(k_{0}\ll|\mu_{50}|\). At late times, the scale ratios \(k_{\pm}/k_{\rm I}\) and \(k_{\rm II}/k_{\rm I}\) reach values that are approximately constant in time. They are about 10 in the case of Run O, i.e., equal to the initial scale separation, \(|\mu_{50}|/k_{0}=10\). One might have expected the scale ratios to increase with \(|\mu_{50}|/k_{0}\). However, in all other cases, this ratio is smaller. Some of this might also be caused by one of the two scale separation constraints not being well enough obeyed, although the counter-intuitive trend remains surprising. In Fig. 10, we show the ratios \(k_{\pm}/k_{\rm I}\) and \(k_{\rm II}/k_{\rm I}\) versus \(|\mu_{50}|/k_{0}=(\eta_{\rm diff}/\eta_{\rm CPI})^{1/2}\). The two insets give separately the dependencies on \(1/\eta_{\rm CPI}\), showing an \(\eta_{\rm CPI}^{-0.4}\) behavior, and on \(\eta_{\rm diff}\), with a \(\propto\eta_{\rm diff}^{1/2}\) behavior. We see that \(k_{\pm}/k_{\rm I}\) and \(k_{\rm II}/k_{\rm I}\) decrease both with \(1/\eta_{\rm CPI}\) and with \(\eta_{\rm diff}\), giving a combined dependence on just the ratio \(|\mu_{50}|/k_{0}\). Thus, we see that, somewhat unexpectedly, large \(|\mu_{50}|\) and small \(k_{0}\) tend to be detrimental to producing large scale ratios.

### Effect of chirality-flipping

The simulations discussed so far had \(\Gamma=0\) and they resulted in a final state where \(\langle\mu_{5}\rangle\) and \(\langle\mu_{\rm M}\rangle\) vanish at late times. As discussed in the introduction, spin flipping could prematurely lead to a vanishing \(\langle\mu_{5}\rangle\), which would imply that the decay of \(\langle\mu_{\rm M}\rangle\) would slow down and level off at a value away from zero. To study this quantitatively, we show in Fig. 11 the evolution of \(\langle\mu_{5}\rangle\), \(\langle\mu_{\rm M}\rangle\), and \(\langle\mu_{5}\rangle+\langle\mu_{\rm M}\rangle\) for Run F with \(\eta_{\rm flip}=100\) and \(\Gamma=10^{-2}\), either for the rest of the run or only until \(\eta_{\rm off}=10^{3}\) (Run F in Table 2). First, we study a case where spin flipping acts permanently (after \(\eta=\eta_{\rm flip}\)), which is shown in Fig. 11 as solid lines.
We see that \(|\langle\mu_{5}\rangle|\) begins to decrease rapidly to zero after \(\eta_{\rm flip}=100\). This slows down the decay of \(\langle\mu_{\rm M}\rangle\), which then declines at a much smaller rate. Qualitatively similar behaviors are also seen for smaller values of \(\Gamma\). In all cases, we see that \(\langle\mu_{5}\rangle+\langle\mu_{\rm M}\rangle\) evolves away from zero. This is because the total chirality is then no longer conserved. The decay of \(\langle\mu_{\rm M}\rangle\) is understood in terms of magnetic diffusion. Thus we expect the decrease to slow down for a larger scale separation between the magnetic diffusion scale and \(k_{\rm I}\). Next, it is also of interest to study a case where spin flipping acts only for a certain time interval and is then turned off again at \(\eta=\eta_{\rm off}\). This case is shown in Fig. 11 as dashed lines. We see that, when \(\Gamma=0\) after \(\eta_{\rm off}=10^{3}\), the sum \(\langle\mu_{5}\rangle+\langle\mu_{\rm M}\rangle\) is strictly constant and away from zero. This is in contrast to the case with permanently nonvanishing \(\Gamma\), where the sum continues to decrease slowly. The constancy of the total chirality leads to the behavior that \(\langle\mu_{\rm M}\rangle\) stops declining rapidly and levels off at a larger value. Furthermore, during that time, some of the magnetic helicity decays due to the magnetic diffusion and is temporarily converted back into fermion chirality through the total chirality conservation; see the small increase of \(\langle\mu_{5}\rangle\) with a positive maximum at \(\eta\approx 4000\) in Fig. 11. Later, however, this excess fermion chirality gets converted back into magnetic fields, which explains the slight uprise of \(\langle\mu_{\rm M}\rangle\) near the end of the simulation. Indeed, this process is similar to the one seen in Refs. [14; 15]. This is natural because after the decay of \(\langle\mu_{5}\rangle\) the setup becomes very similar to the ones in these studies. In Fig. 12, we show \(\eta_{\pm}^{\rm(i)}\) and \(\eta_{\rm I}\) in the presence of spin flipping. The results suggest that the \(\eta^{-4/9}\) decay changes into the faster \(\eta^{-2/3}\) decay. Spin flipping brings \(\langle\mu_{5}\rangle\) close to zero. This process stops or slows down the decline of magnetic helicity, which therefore remains positive. At late times, \(\langle\mu_{5}\rangle\), which was originally negative, now becomes positive and settles at a value of around \(\langle\mu_{5}\rangle\approx k_{1}\). This is because at later times the positive chirality induced, via the chiral anomaly, by the helicity decay due to magnetic diffusion is balanced by the erasure of the chirality through the CME [14; 15], similar to the generation of the baryon asymmetry through magnetic helicity decay well before the electroweak phase transition [19; 20; 24]. The sign of the final value of \(\langle\mu_{5}\rangle\) is determined by the magnetic helicity after the decay of \(\langle\mu_{5}\rangle\) due to the onset of spin flipping. In the cases presented above, the sign of the magnetic helicity at the time of the onset of spin flipping was positive and thus the chiral chemical potential at later times was also positive. If the initial magnetic field is weaker and the total chirality is negative (see App. A), the sign of the final value of \(\langle\mu_{5}\rangle\) can stay negative.
Our runs show that spin flipping can lead to a significant increase of the fraction of the magnetic helicity that can be preserved in spite of the fact that the system has vanishing total chirality. This also reduces the total energy density dissipation of the system. In the absence of spin flipping, both magnetic helicity and chiral chemical potential would approach zero, so there would be no magnetic helicity available for successful baryogenesis. In the real Universe, however, spin flipping due to the electron Yukawa interaction, which really violates the (total) chirality conservation, inevitably acts at \(T\lesssim 10^{2}\,\)TeV [45; 46], and hence magnetic helicity survives more or less until the electroweak phase transition. In Fig. 12, we see an interval between the onset of spin flipping, \(\eta=\eta_{\rm flip}=10^{2}\), and the onset of the \(\eta^{-2/3}\) scaling evolution of \(k_{\rm I}\), \(\eta\sim 6\times 10^{3}\), which marks the real onset of the evolution with (pure) magnetic helicity conservation. For a rough estimate of the magnetic field evolution, however, we shall practically use \(\eta_{\rm flip}\) as the switching time between the adapted Hosking integral conservation and the (pure) magnetic helicity conservation.

### Cases with initially small \(|\mu_{50}|/k_{0}\)

In all the cases considered so far, we assumed \(|\mu_{50}|/k_{0}>1\). We now consider the opposite case and discuss runs with \(\mu_{50}=-0.1\), still keeping \(k_{0}=1\), so \(|\mu_{50}|/k_{0}=0.1\) (Runs P, Q and R), and also a run with \(\mu_{50}=-0.5\) and \(k_{0}=1\) (Run S). To prevent the magnetic field from being too weak, while still preserving vanishing total chirality, we now decrease the value of \(\lambda\) and choose \(\lambda=500\), \(50\), and \(5\) for Runs P (and S), Q, and R, respectively. All the runs end at \(\eta\sim 10^{4}\). The parameters of these runs are summarized in Table 3. Smaller values of \(\lambda\) correspond to larger magnetic fields. We see that this also leads to a gradual decrease of the scaling index of the envelope of the magnetic energy spectrum, \(\beta\), toward zero. For a given value of \(\beta\), we expect that the scaling indices \(q\), \(p\), and \(r\), which are those of the evolution of the magnetic coherence length, energy density, and helicity, respectively (\(\xi_{\rm M}\propto\eta^{q}\), \(\mathcal{E}_{\rm M}\propto\eta^{-p}\), and \(\mathcal{H}_{\rm M}\propto\eta^{-r}\)), are given as \(q=2/(\beta+3)\), \(p=2(1-q)\), and \(r=p-q\). In Fig. 13(a), we see that for Run P the exponents agree reasonably well with those expected for \(\beta=1/3\). In Fig. 13(b), we also show the results for Run R, where \(\lambda\) is a hundred times smaller and the magnetic field ten times stronger. Now the value of \(\beta\) is very small (about \(0.05\)), corresponding to \(q=0.66\), \(p=0.69\), and \(r=0.03\). In Table 4, we list several combinations of the expected scaling indices \(q\), \(p\), and \(r\) for \(0\leq\beta\leq 3/2\).
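These relations are elementary to evaluate; the short sketch below simply tabulates \(q\), \(p\), and \(r\) for a few representative values of \(\beta\), including those quoted above, and is meant purely as an illustration of the entries in Table 4.

```python
def scaling_indices(beta):
    """q, p, r for xi_M ~ eta^q, E_M ~ eta^-p, H_M ~ eta^-r, given the envelope slope beta."""
    q = 2.0/(beta + 3.0)
    p = 2.0*(1.0 - q)
    return q, p, p - q

for beta in (0.0, 0.05, 1/3, 3/2):
    q, p, r = scaling_indices(beta)
    print(f"beta = {beta:5.3f}:  q = {q:.3f},  p = {p:.3f},  r = {r:.3f}")
```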
Interestingly, in the range \(0\leq\beta\leq 1/3\), the values of \(q\) and \(p\) do not vary much, especially compared to the case for the evolution with the (adapted) Hosking integral conservation, \(\beta=3/2\), so if they do not agree with those from the simulations, the discrepancy cannot easily be resolved by changing the value of \(\beta\) within reasonable limits. Note that \(\beta=0\) is expected if the evolution is governed by (pure) magnetic helicity conservation. In the corresponding \(pq\) diagram, Fig. 14, we see that Run P approaches the scale-invariance line \(p=2(1-q)\) along the line \(r=1/5\). At the intersection, we have \(q=3/5\) and \(p=4/5\). However, for Runs Q and R with stronger magnetic field strengths, here measured in terms of the initial Alfvén speed \(v_{\rm A0}=\tilde{v}_{\lambda}=B_{0}/\sqrt{\rho_{0}}\), the solution approaches the \(\beta=0\) line, which suggests better conservation of magnetic helicity. Note that near the end of those runs, the data points may not be reliable because of the finite size of the domain. In addition, because of the stronger magnetic field, the Alfvén time is shorter and therefore \(k_{\rm I}\) reaches \(k_{1}\) more quickly. In any case, it is likely that for small \(|\mu_{50}|/k_{0}\) we see an intermediate stage of the evolution of the system where the magnetic helicity and chirality are temporarily conserved individually, as discussed in Sec. II.5. This is supported by the fact that the theoretically predicted time of the onset of ACC comes much later than the end of the run; see Eq. (31). For all the simulations where initially \(|\mu_{50}|<k_{0}\), we find that \(\mu_{5}\) decays more slowly than \(k_{\rm I}\); see Fig. 15 for Run S, as an example, where we see the crossing of \(\mu_{5}\) and \(k_{\rm I}\). The same is also seen for \(|\mu_{50}|=0.1\), but then the crossing of \(|\mu_{5}|\) and \(k_{\rm I}\) is less prominent. Again, these observations suggest that the magnetic helicity-conserving phase is an intermediate one before the solution resumes the decay governed by the adapted Hosking integral, as discussed in Sec. II.5. The time evolution of the magnetic energy and helicity spectra for Run P is given in Fig. 16(a). We can see that a negative magnetic helicity part of the spectrum still emerges, again only at large wave numbers, although now much later.

\begin{table} \begin{tabular}{c c c c c c c c c c} Run & \(-\mu_{50}\) & \(\lambda\) & \(v_{\mu}\) & \(\tilde{v}_{\lambda}\) & \(v_{\rm A0}\) & \(u_{\rm rms}^{\rm max}\) & SLD & \(\beta\) & \(\eta\) \\ \hline P & 0.1 & 500 & \(2\times 10^{-5}\) & 0.014 & 0.026 & 0.008 & no & 0.33 & 160 \\ Q & 0.1 & 50 & \(2\times 10^{-5}\) & 0.045 & 0.081 & 0.028 & no & 0.15 & 50 \\ R & 0.1 & 5 & \(2\times 10^{-5}\) & 0.141 & 0.257 & 0.076 & yes & 0.05 & 27 \\ S & 0.5 & 500 & \(10^{-4}\) & 0.032 & 0.057 & 0.019 & no & 0.33 & 70 \\ \end{tabular} \end{table} Table 3: Empirical values of \(\beta\) for cases with \(|\mu_{50}|<k_{0}\). For a given value of \(\lambda\), the values of \(v_{\rm A0}\) followed from the requirement that the total chirality vanishes. The resulting maximum rms velocity \(u_{\rm rms}^{\rm max}\) is listed for completeness.
Figure 16: (a) Magnetic energy and (b) total helicity variance spectra at \(t=31\) (dashed), \(100\) (solid), \(316\) (dotted), \(10^{3}\) (blue), \(3.16\times 10^{3}\) (green), \(10^{4}\) (orange), and \(3.16\times 10^{4}\) (red) for Run P. In (a), note that the \(E_{\rm M}(k,t)\) evolve underneath the envelope \(k^{1/3}\), and the upward arrow indicates the sense of time. In (b), the slopes \(k^{4}\) and \(k^{-4}\) have been indicated and the inset compares Sp(\(2\mu_{5}/\lambda\)) (solid) with Sp(\(h_{\rm tot}\)) (dotted) at the last time.

Figure 14: \(pq\) diagram for Runs P (red symbols), Q (orange symbols), and R (blue symbols) at times \(t=700\), \(1000\), \(1500\), \(2200\), \(3200\), \(4600\), \(6800\), \(10^{4}\), \(1.5\times 10^{4}\), \(2.2\times 10^{4}\), and \(3.2\times 10^{4}\), corresponding to symbols of increasing size. The solid line denotes the scale-invariance line \(p=2(1-q)\), the dashed line the empirical \(\beta=1/3\) line, and the dashed-dotted line is the resulting \(r=1/5\) line for the magnetic helicity decay. We also show cases with stronger magnetic field strength for Runs Q and R, where the solutions evolve along \(\beta\approx 0.15\) and \(\beta\approx 0.05\), respectively. Toward the end of the runs, the finite size effects of the domain begin to affect the solution. The dotted line denotes the \(\beta=0\) line for magnetic helicity conservation and is shown for comparison.

This means that \(|\langle\mu_{\rm M}^{-}\rangle|\) is induced by the CPI, but it stays extremely small. Furthermore, \(|\langle\mu_{\rm M}^{+}\rangle|\) does not decay much during the time of the run; see also Fig. 13(b). This can easily be understood by the fact that \(\eta_{\rm CPI}\) becomes very large in this run because of the small value of \(|\mu_{50}|\), which enters the CPI time scale as the inverse square; see Eq. (10). Many other features of the magnetic field evolution remain superficially similar to the limit of large \(|\mu_{50}|/k_{0}\). One still sees inverse cascading of positive magnetic helicity. In Fig. 16(b), we show the total helicity variance spectrum \({\rm Sp}(h_{\rm tot})\). We clearly see that the spectral slope changes from \(k^{2}\) to \(k^{4}\) as time goes on. Again, this suggests that the adapted Hosking integral, as defined in Ref. [55], is not conserved. Looking at the scaling of the magnetic helicity correlation function in Fig. 17, we see that there is a plateau only for very small values of \(R\), which are generally smaller than \(\xi_{\rm M}\) (marked by colored symbols). This shows that the conservation of the Hosking integral is now superseded by the conservation of the magnetic helicity. This can be understood by the fact that the CME is inefficient at \(k_{\rm I}\) during the time of the run [see Eq. (31)], and hence the magnetic helicity and fermion chirality would be individually relatively well conserved quantities. Regarding the conserved quantity for runs in the limit of small \(|\mu_{50}|/k_{0}\), we can say that, in spite of vanishing total chirality, the Hosking integral is here not conserved, because the magnetic energy now peaks at scales where the CME is not effective during the time of the run and the magnetic helicity is conserved. As a result, the net chirality is no longer random, but systematically of positive sign. The subinertial range of the magnetic helicity variance begins to be dominated by a \(k^{4}\) spectrum, which suggests that the Hosking integral in the expansion \({\rm Sp}(h_{\rm tot})=I_{\rm H}k^{2}/2\pi^{2}+O(k^{4})\) is now subdominant. To summarize, these runs are consistent with the theoretical prediction in Sec. II.5, although a moderate violation of helicity conservation has been seen for Run P.
Note that Run P has a larger value of \(\lambda\), which makes the theoretically predicted \(\eta_{\rm ACC}\) smaller [see Eq. (31)], so that an earlier transition to the evolution with adapted Hosking integral conservation is expected. For an analytic estimate of the evolution of the system in the next section, we shall use the theoretical prediction discussed in Sec. II.5. Namely, the system is frozen until \(\eta=\eta_{\lambda}\) and then evolves with the usual inverse cascade for the helical magnetic field as an intermediate stage until \(\eta=\eta_{\rm ACC}\). Then, it starts to evolve with a decay law determined by the conservation of the adapted Hosking integral.

Figure 17: (a) \(\mathcal{I}_{H}(R,t)\) versus \(R\) for different times \(t_{*}\) (indicated by the same colors/line styles), and (b) \(\mathcal{I}_{H}(R,t)\) versus \(t\) (normalized) for \(R=\xi_{\rm M}(t_{*})\) marked by the four colors for Run P. In (a), the four colored symbols indicate the positions of \(k_{0}\xi_{\rm M}(t_{*})\), and in (b), the time dependencies are plotted for those \(R=\xi_{\rm M}(t_{*})\).

## IV Application to the early universe

### From QED to the Standard Model

Now we investigate the impact of our findings in the previous sections on the cosmology of the early Universe, especially baryogenesis. Up to here, we focused on a QED-like theory. Thus, we first would like to clarify its relation to the dynamics in the early Universe. The Standard Model (SM) of particle physics involves the right-handed leptons \(e_{Rf}\), the left-handed lepton doublets \(\ell_{Lf}\), the right-handed up- and down-type quarks, \(u_{Rf}\) and \(d_{Rf}\), and the left-handed quark doublets \(q_{Lf}\) with the flavor index running through \(f=1,2,3\), alongside the scalar Higgs doublet \(\Phi\), which are in total 16 species. On top of this, we have gauge interactions of \({\rm U}(1)_{Y}\times{\rm SU}(2)_{L}\times{\rm SU}(3)_{C}\). It is not obvious why we can reduce this complicated system to chiral MHD based on a QED-like theory like the one introduced in Sec. II. What we are interested in here is the slow dynamics at wavelengths that are long compared to the scales of the interactions among particles. The key idea for the reduction is to assume the equilibration of fast interactions and to keep only the slow variables. The hypermagnetic field of \({\rm U}(1)_{Y}\) with a correlation length much larger than the mean free path of the particles stands out as a slow variable because the magnetic flux cannot be cut thanks to the absence of monopoles. This feature does not hold for non-Abelian gauge fields because they are charged under their own gauge group. We also need the chiral chemical potential, since it is related to the magnetic field via the anomaly equation. Apart from these two fundamental building blocks, we can coarse-grain the microscopic properties of all particles in the form of transport coefficients such as the diffusion constant and the electric conductivity, besides macroscopic quantities such as the pressure, energy density, and velocity field. In this way, one may see that the system can be reduced to chiral MHD as far as the slow and long-wavelength dynamics is concerned. Still, one might wonder why we can just focus on one particular chiral chemical potential, as in Eq. (3), since we have 15 chiral fermion species in the SM.
To illustrate this, let us focus on the temperature right above \(10^{5}\,\mathrm{GeV}\), where the electron Yukawa interaction is not efficient compared to the cosmic expansion, but other interactions are fast enough. In this case, the chiral chemical potential for the right-handed electron, \(\tilde{\mu}_{e}\), should be counted as a slow variable, as it is directly related to the hypermagnetic field via the anomaly equation. On the other hand, other chiral chemical potentials are subject to fast SM interactions, which provides 11 nontrivial constraints among them. Recalling that the SM has four conserved charges, hypercharge \(Y\) and the flavored baryon-minus-lepton numbers \(B/3-L_{f}\) with \(f=1,2,3\), one may immediately see that the remaining 15 chemical potentials can be expressed as a function of \(\tilde{\mu}_{e}\) by solving \(11+4\) constraints. The chiral chemical potential \(\tilde{\mu}_{5}\) originates from the generalized Ohm's law, \[\mathbf{J}_{Y}=\sigma_{Y}\mathbf{E}_{Y}+\frac{2\alpha_{Y}}{\pi}\tilde{\mu}_{5}\,\mathbf{B} _{Y}, \tag{37}\] where \(\alpha_{Y}\) is now the U(1)\({}_{Y}\) fine-structure constant and \(\sigma_{Y}\) the hyperelectric conductivity of the plasma. In the following, we will work with the \(\alpha_{Y}\) value around the electroweak scale, \(\alpha_{Y}\simeq 0.01\), and neglect its renormalization group running when considering the dynamics of the hypermagnetic field at high energies. Also, note that, in this section, we set \(\hbar=c=k_{B}=1\), and all quantities are physical rather than comoving, unless explicitly stated otherwise. For the SM U(1)\({}_{Y}\), at \(T\sim 10^{5\cdots 6}\mathrm{GeV}\), one may express this \(\tilde{\mu}_{5}\) as a summation of chiral chemical potentials for the SM fermions as [12; 49] \[\tilde{\mu}_{5}=\sum_{i=1}^{15}\epsilon_{i}\,g_{i}\,y_{i}^{2}\,\frac{\tilde{ \mu}_{i}}{2}=\frac{711}{481}\frac{\tilde{\mu}_{e}}{2}\,, \tag{38}\] where \(i\) runs over all SM fermions, \(\epsilon_{i}=\pm\) for right- and left-handed fermions, respectively, \(g_{i}\) counts internal degrees of freedom, and \(y_{i}\) is the hypercharge of fermion species \(i\). In the second equality, we inserted the solution of the 15 constraint equations mentioned above. We now see that, up to the \(\mathcal{O}(1)\) coefficient of \(711/481\), one chiral chemical potential suffices to describe the system. In higher temperature regimes, we will have additionally more slow variables that enter the expression of \(\tilde{\mu}_{5}\), but it is still written as a linear combination of their chemical potentials with \(\mathcal{O}(1)\) coefficients. It still holds that the evolution of the system is described by chiral MHD as discussed in Sec. II with \(\tilde{\mu}_{5}\) being evaluated accordingly. ### Baryogenesis After these general remarks, let us now turn to the implications of our analysis for the generation of the baryon asymmetry of the Universe. We are primarily interested in the scenario of baryogenesis from decaying hypermagnetic helicity [20; 24; 25; 26; 19], which assumes the presence of a strongly helical hypermagnetic field during the radiation-dominated era in the early Universe. This scenario is based on the observation that the helicity stored in the hypermagnetic field decays at the time of the electroweak phase transition, not because of some exotic helicity-violating interactions, but simply because hypermagnetic helicity is converted to magnetic helicity. 
This decay of hypermagnetic helicity then sources a baryon asymmetry via the chiral anomaly of the baryon-number current. One possibility to generate the helical hypermagnetic field required for baryogenesis is axion inflation featuring a Chern-Simons coupling to \({\rm U}(1)_{Y}\). Such a model leads to the nonperturbative production of hypermagnetic gauge fields in combination with charge asymmetries for the 15 chiral SM fermion species [48; 49], \[n_{i}-\bar{n}_{i}=\frac{1}{6}\,g_{i}\,\tilde{\mu}_{i}\,T^{2}=-\epsilon_{i}\,g_{i}\,y_{i}^{2}\,\frac{\alpha_{Y}}{2\pi}h_{Y}+\cdots\,, \tag{39}\] where the ellipsis represents all other SM contributions, which, however, can safely be neglected during inflation (see Footnote 2). Furthermore, \(h_{Y}\) in Eq. (39) is the physical helicity density, which we define in terms of the comoving vector potential \(\mathbf{A}_{Y,\mathrm{com}}\), comoving hypermagnetic field \(\mathbf{B}_{Y,\mathrm{com}}\), and scale factor \(a\), \[h_{Y}=\frac{1}{a^{3}}\,\langle\mathbf{A}_{Y,\mathrm{com}}\cdot\mathbf{B}_{Y,\mathrm{com}}\rangle\,, \tag{40}\] where the angle brackets now stand for a double average including the spatial average and the quantum mechanical expectation value during inflation. From Eq. (39), we can read off the fermion chemical potentials at the end of inflation in terms of the helicity density at the end of inflation. Inserting this result into Eq. (38), we obtain the chiral chemical potential at the end of inflation, \[\frac{\tilde{\mu}_{5}}{T}=-\frac{c_{5}}{2}\,6\,\chi\,,\qquad c_{5}=\frac{95}{18}\,, \tag{41}\] where the dimensionless yield parameter \(\chi\) quantifies the amount of \(CP\) violation during axion inflation [49], \[\chi=\frac{\alpha_{Y}}{2\pi}\frac{h_{Y}}{T^{3}}\,. \tag{42}\] Here, we assume instantaneous reheating. The same coefficient \(c_{5}\) was found in Ref. [49]; in total, the expression for \(\tilde{\mu}_{5}\) in Eq. (41) is, however, smaller than the one in Ref. [49] by a factor of \(\nicefrac{{1}}{{2}}\) because, in the present paper, we include a factor of \(\nicefrac{{1}}{{2}}\) in Eq. (38).

Footnote 2: The top-quark Yukawa interaction would be a possible exception; see the discussion in footnote 5 of Ref. [49] for more details.

The fermion asymmetries generated during axion inflation are consistent with the chiral anomalies of the respective fermion currents. In fact, it is straightforward to generalize the conservation law in Eq. (8) to the early Universe. To see this, let us rewrite Eq. (41) as follows, \[\mu_{5}+\frac{3\,c_{5}}{2}\left(\frac{2\alpha_{Y}}{\pi}\right)^{2}\frac{1}{2a^{3}T^{2}}\left\langle\mathbf{A}_{Y,\text{com}}\cdot\mathbf{B}_{Y,\text{com}}\right\rangle=0\,, \tag{43}\] where we used \(\mu_{5}=\left(2\alpha_{Y}/\pi\right)\tilde{\mu}_{5}\). Then, introducing \[\lambda_{Y}=3\left(\frac{2\alpha_{Y}}{\pi aT}\right)^{2}\,, \tag{44}\] we obtain the relation \[\mu_{5}+\frac{c_{5}}{2}\,\mu_{\text{M}}^{Y}=0\,, \tag{45}\] where \[\mu_{\text{M}}^{Y}=\frac{1}{2a}\lambda_{Y}\langle\mathbf{A}_{Y,\text{com}}\cdot\mathbf{B}_{Y,\text{com}}\rangle\,. \tag{46}\] As the temperature in the early Universe decreases, more and more SM interactions reach chemical equilibrium. This includes the SM Yukawa interactions, which violate parity and hence render the coefficient \(c_{5}\) in Eq. (41) a time-dependent quantity [49]. During axion inflation, \(c_{5}\) assumes its maximal value, \(c_{5}=95/18\simeq 5.3\), before it then decreases down to \(c_{5}=711/481\simeq 1.5\) at temperatures of a few 100 TeV [see Eq. (38)].
This change in \(c_{5}\) is reflected in a changing value of the chiral chemical potential \(\mu_{5}\), which is always given by \(\mu_{5}=-c_{5}/2\,\mu_{\text{M}}^{Y}\) according to Eq. (45), with \(\mu_{\text{M}}^{Y}\) remaining constant until the onset of ACC or electroweak phase transition. At \(T\lesssim 10^{5}\,\text{GeV}\), \(c_{5}\) and hence \(\mu_{5}\) vanish because all SM interactions have reached chemical equilibrium. The \(CP\) asymmetry parameter \(\chi\) in Eq. (42) controls the outcome of baryogenesis from helicity decay. That is, if no CPI or ACC takes place before the onset of spin flipping, the decay of hypermagnetic helicity around the electroweak phase transition results in a present-day baryon asymmetry (quantified in terms of the baryon-to-photon ratio) that is fully controlled by \(\chi\)[49], \[\eta_{B}^{0}\equiv\frac{n_{B}^{0}}{n_{\gamma}^{0}}\simeq 0.15\,c_{B}^{\text{ dec}}\chi\,, \tag{47}\] where \(n_{\gamma}=2\,\zeta(3)T^{3}/\pi^{2}\) and the superscript \(0\) indicates that a quantity is evaluated at the present time. Here, the coefficient \(c_{B}^{\text{dec}}\) has a theoretical uncertainty of possibly two orders of magnitude [26]. In the following, we will work with the representative value \(c_{B}^{\text{dec}}=0.05\)[49; 50], which implies that \(\chi\) values of the order of \(\chi\sim 10^{-7}\) are necessary to reproduce the observed baryon asymmetry, \(\eta_{B}^{\text{obs}}\simeq 6.1\times 10^{-10}\)[74; 75]. Meanwhile, the parameter \(\chi\) also allows us to evaluate the ratio of \(k_{0}\) and \(\mu_{5}\) at the end of axion inflation. Specifically, if we estimate the comoving peak wave number \(k_{0}\) in terms of the comoving wave number that enters the Hubble horizon at the end of reheating, \(k_{\text{rh}}=a_{\text{rh}}H_{\text{rh}}\)[49], we find \[\frac{|\mu_{5}|}{k_{\text{rh}}/a_{\text{rh}}} =\frac{6\,\alpha_{Y}c_{5}\,\chi}{\pi}\frac{T_{\text{rh}}}{H_{\text {rh}}}=\frac{6\,\alpha_{Y}c_{5}\,\chi}{\pi}\frac{M_{*}}{T_{\text{rh}}}\] \[\sim 10^{-4}\left(\frac{\chi}{10^{-7}}\right)\left(\frac{10^{14} \text{GeV}}{T_{\text{rh}}}\right)\,, \tag{48}\] where \(M_{*}=\left(90/\pi^{2}/g_{*}\right)^{1/2}M_{\text{Pl}}\simeq 7.1\times 10^{17}\, \text{GeV}\) is the reduced Planck mass, \(M_{\text{Pl}}\simeq 2.4\times 10^{18}\,\text{GeV}\), rescaled by the effective number of relativistic degrees of freedom in the Standard Model plasma, \(g_{*}=427/4\). Axion inflation typically results in small values of the \(\chi\) parameter (e.g., \(\chi\sim 10^{-7}\); see above) and large values of the reheating temperature (e.g., \(T_{\text{rh}}\sim 10^{14}\,\text{GeV}\); see Ref. [49]), which puts us in the parametric regime where \(|\mu_{5}|\ll k_{\text{rh}}/a_{\text{rh}}\) at the end of axion inflation. Moreover, smaller values of \(T_{\text{rh}}\) typically result in smaller values of \(\chi\), following the scaling relation \(\chi\propto(T_{\text{rh}}/M_{*})^{3}\)[49], which means that the opposite hierarchy, \(|\mu_{5}|\gg k_{\text{rh}}/a_{\text{rh}}\), cannot simply be obtained by considering a smaller reheating temperature. For \(|\mu_{5}|\ll k_{\text{rh}}/a_{\text{rh}}\), we can estimate the time when the ACC sets in after axion inflation based on Eq. (31), \[\eta_{\text{ACC}}\sim\left[\frac{1}{a^{3}}\frac{2\,\sigma_{Y}^{3}}{\bar{\rho} \,\lambda_{Y}\mu_{5}^{2}}\right]_{\text{rh}}\,, \tag{49}\] where the factor of \(a^{-3}\) follows from the mass dimension of the factor \(\sigma_{Y}^{3}/(\bar{\rho}\mu_{5}^{2})\). 
Next, we write the hyperelectric conductivity \(\sigma_{Y}\) as \(\sigma_{Y}=c_{\sigma_{Y}}T\), with \(c_{\sigma_{Y}}\sim 100\)[71; 72], the average radiation energy density \(\bar{\rho}\) as \(\bar{\rho}=c_{\bar{\rho}}\,T^{4}\), with \(c_{\bar{\rho}}=\pi^{2}g_{*}/30\), and the parameter \(\lambda_{Y}\) as \(\lambda_{Y}=c_{\lambda_{Y}}/(aT)^{2}\) with \(c_{\lambda_{Y}}=12\,\alpha_{Y}^{2}/\pi^{2}\). With these definitions, we find \[\eta_{\text{ACC}}\sim\frac{c_{\text{ACC}}}{\chi^{2}}\frac{T_{\text{rh}}}{M_{*} }\frac{1}{k_{\text{rh}}} \tag{50}\] with the coefficient \(c_{\text{ACC}}\) being given as follows, \[c_{\text{ACC}}=\frac{2\,c_{\sigma_{Y}}^{3}}{c_{\,\lambda_{Y}}}\left(\frac{\pi}{6\, \alpha_{Y}c_{5}}\right)^{2}\sim 10^{11}\,. \tag{51}\] The time \(\eta_{\text{ACC}}\) marks the onset of the anomalous chirality cancellation and needs to be compared to the time \(\eta_{\text{sf}}=1/k_{\text{sf}}\) when spin flipping for left- and right-handed electrons becomes efficient, where \(k_{\text{sf}}=a_{\text{sf}}H_{\text{sf}}\) is the comoving horizon scale at \(\eta=\eta_{\text{sf}}\). Using Eq. (50), together with \(c_{\text{ACC}}=10^{11}\) and \(T_{\text{sf}}=10^{5}\,\text{GeV}\) for the electron Yukawa interaction in the SM [45; 46], we obtain \[\frac{\eta_{\text{ACC}}}{\eta_{\text{sf}}}\sim\frac{c_{\text{ACC}}}{\chi^{2}} \frac{T_{\text{sf}}}{M_{*}}\sim\frac{0.01}{\chi^{2}}\,. \tag{52}\] Therefore, in order to have ACC before the onset of spin flipping, we require very large values of the \(CP\) asymmetry parameter, \(\chi\gtrsim 0.1\). However, for such large \(\chi\) values, the naive estimate of the baryon asymmetry according to Eq. (47) is hopelessly too large. Furthermore, the reduction of the magnetic helicity because of ACC in the time window between \(\eta_{\rm ACC}\) and \(\eta_{\rm sf}\) will not be enough to counteract this overproduction of baryon number. To see this explicitly, one may use the ratio in Eq. (52) to introduce a dilution factor, \(\Delta=(\eta_{\rm ACC}/\eta_{\rm sf})^{2/3}\), that multiplies the naive baryon asymmetry in Eq. (47) whenever ACC should occur before the onset of spin flipping, \[\eta_{B}^{0}\simeq\min\left\{1,\Delta\right\}\times 0.15\,c_{B}^{\rm dec} \chi\,. \tag{53}\] This formula suggests that at extremely large \(\chi\) values, \(\chi\gg 1\), it appears to be possible to reach \(\eta_{B}^{0}\simeq\eta_{B}^{\rm obs}\). In realistic models of axion inflation, it should, however, be extremely difficult, if not impossible, to realize such large \(\chi\) values. Moreover, no such solution would be self-consistent, as we would always violate our initial assumption that \(|\mu_{5}|\ll k_{\rm rh}/a_{\rm rh}\); see the relation in Eq. (48). Finally, for completeness, we mention that any scenario that does allow for the hierarchy \(|\mu_{5}|\gg k_{\rm rh}/a_{\rm rh}\) in one way or another would be interesting as it might display similar dynamics as our Runs III-VI, which we commented on at the end of Sec. III.4. The decay law for the magnetic helicity may then be different from the \(\eta^{-2/3}\) behavior that we typically find for ACC, which may be relevant for the outcome of baryogenesis. As already stated in Section III.4, we leave a more detailed study of this more exceptional case for future work. 
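Before turning to the conclusions, the chain of estimates in Eqs. (50)-(53) can be condensed into a few lines of code. This is a sketch for orientation only: \(\alpha_{Y}\approx 0.017\) is an assumed reference value, while \(c_{\sigma_{Y}}=100\), \(T_{\rm sf}=10^{5}\,\text{GeV}\), and \(c_{B}^{\rm dec}=0.05\) are the representative values quoted above.

```python
# Sketch: onset of ACC vs. spin flipping and the resulting baryon asymmetry,
# following Eqs. (50)-(53) with the representative coefficients from the text.
import math

alpha_Y   = 0.017
c5        = 95.0 / 18.0
c_sigmaY  = 100.0
c_lambdaY = 12.0 * alpha_Y**2 / math.pi**2
c_ACC     = 2.0 * c_sigmaY**3 / c_lambdaY * (math.pi / (6.0 * alpha_Y * c5))**2
print(f"c_ACC ~ {c_ACC:.1e}")                        # ~ 1e11, cf. Eq. (51)

T_sf, M_star, c_B_dec = 1.0e5, 7.1e17, 0.05           # GeV, GeV, coefficient in Eq. (47)

def eta_B(chi):
    """Present-day baryon-to-photon ratio, Eq. (53)."""
    ratio_acc_sf = c_ACC / chi**2 * T_sf / M_star     # eta_ACC / eta_sf, Eq. (52)
    dilution = min(1.0, ratio_acc_sf ** (2.0 / 3.0))  # Delta, applies only if ACC comes first
    return dilution * 0.15 * c_B_dec * chi

for chi in (1e-7, 1e-3, 1.0):
    print(f"chi = {chi:.0e}:  eta_B ~ {eta_B(chi):.1e}")
# chi ~ 1e-7 lands near the observed 6e-10; larger chi overproduces baryon number.
```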
## V Conclusions We have performed numerical simulations of chiral MHD with zero initial total chirality for a range of parameters to determine the dependence of the characteristic time and scale ratios on the model parameters; these ratios are well explained by the analytical estimate in Sec. II.5. Namely, they are consistent with the scaling evolution, \(\xi_{\rm M}\propto\eta^{4/9}\), \(\mathcal{E}_{\rm M}\propto\eta^{-10/9}\), and \(\langle\mu_{5}\rangle\propto\eta^{-2/3}\), derived from the conservation of the adapted Hosking integral [55], and also with the time scale of the onset of this scaling evolution, \(\eta_{\rm ACC}\); see Eqs. (26) and (31). Our numerical simulations also assess the possibility of artifacts resulting from insufficient scale separation. A particularly important constraint is a sufficiently large size of the computational domain (small \(k_{1}\)), which is needed to obtain the expected \(\eta^{4/9}\) scaling of the correlation length. When this constraint is not obeyed, the scaling is closer to \(\eta^{1/3}\). The second constraint, a sufficiently large Nyquist wave number, is important to obtain the correct values of the scale ratio of the positive and negative magnetic helicity peaks, i.e., \(k_{\rm II}/k_{\rm I}\). Somewhat surprisingly, this ratio scales inversely with the initial scale separation between the scale of the magnetic field and the CPI scale. Increased values of \(\langle\mu_{\rm M}^{-}\rangle\), which characterize the strength of the CPI, are obtained when \(\sigma\) is small or \(|\mu_{50}|\) is large and therefore the coupling to the CPI is more efficient. In the absence of spin flipping, even the slightest initial imbalance will amplify as the magnetic energy decays; see Appendix A. On long time scales, this eventually leads to a fully helical state, although present simulations are unable to demonstrate this conclusively owing to the finite size of the computational domain. Spin flipping is another mechanism that can produce an imbalance between magnetic helicity and fermion chirality. In any case, however, the finally available magnetic energy and helicity densities are always limited by the finiteness of the initial total chirality imbalance. For \(\eta<\eta_{\rm ACC}\), when the chiral magnetic effect is not effective at the peak scale, magnetic helicity conservation governs the decay of magnetic energy and the Hosking integral does not play a role. We also discussed the implications of our findings for the generation of the baryon asymmetry of the Universe, in particular, the scenario of baryogenesis from helicity decay. The final baryon asymmetry in this scenario is controlled by a dimensionless yield parameter \(\chi\) that quantifies the helicity density produced in the very early Universe, for instance, during a stage of axion inflation. In previous work, it was shown how the observed baryon asymmetry can be generated from helicity decay at the time of the electroweak phase transition for a specific \(\chi\) value, \(\chi_{0}\sim 10^{-7}\); see Eq. (47) and Ref. [49]. The situation at larger \(\chi\) values, however, remained unclear. At \(\chi\gg 1\), one may have anticipated either (A) the overproduction of baryon number or (B) catastrophic helicity erasure by the chiral plasma instability and consequently no baryon asymmetry at all. Thanks to the analysis in this paper, we now understand that, instead of helicity erasure owing to the CPI, we should rather be concerned about the possible effect of ACC on the primordial hypermagnetic helicity. 
However, for realistic \(\chi\) values, \(10^{-7}\lesssim\chi\ll 0.1\), even ACC will not become efficient before the onset of spin flipping triggered by the electron Yukawa interaction. We therefore conclude that, at large \(\chi\) values, neither the CPI nor ACC occurs. Instead, we encounter the problem of baryon-number overproduction that also naively follows from Eq. (47) in the large-\(\chi\) limit. This leaves us with one viable solution for baryogenesis from the decay of hypermagnetic helicity, \(\chi_{0}\sim 10^{-7}\), which had already been identified in Ref. [49]. Our analysis in the present paper confirms the validity of this solution and demonstrates that it is indeed unique. **Data availability**--The source code used for the simulations of this study, the Pencil Code, is freely available from Ref. [66]. The simulation setups and the corresponding data are freely available from Ref. [76]. ###### Acknowledgements. We thank V. Domcke for fruitful discussions and J. Warnecke for his work on the implementation of SLD, which is used in some of the simulations. Support through the grant 2019-04234 from the Swedish Research Council (Vetenskapsradet) (AB), Grant-in-Aid for Scientific Research No. (C) JP19K03842 from the JSPS KAKENHI (KK), MEXT Leading Initiative for Excellent Young Researchers No. JPMXS0320200430 (KM), Grant-in-Aid for Young Scientists No. JP22K14044 from the JSPS KAKENHI (KM), and the grant 185863 from the Swiss National Science Foundation (JS) are gratefully acknowledged. We acknowledge the allocation of computing resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Center for High Performance Computing Stockholm and Linkoping. ## Appendix A Behavior in imbalanced chirality decay In Sect. V, we emphasized that even the slightest initial imbalance between magnetic helicity and fermion chirality will amplify as the magnetic energy decays. It is therefore important to remember that the dynamics discussed in this paper is specific to the case of balanced chirality, which is arguably also the most generic case. We know that the decay of magnetic energy and the increase of the correlation length follow a different behavior in the completely imbalanced case compared to the unbalanced one. We now discuss the behavior for the mildly imbalanced case. Here, we show that there is a tendency for the system to approach the behavior of a completely imbalanced one. We discuss two runs, Run A where the initial \(\langle\mu_{\rm M}\rangle\) is enhanced by 20% compared with \(|\langle\mu_{5}\rangle|\), and Run B where it is decreased by 20%. Apart from that, the runs are the same as Run O, i.e., the run discussed in Ref. [55]. In Run A, where the magnetic helicity is weaker than in Run O, the CPI becomes dominant and overcompensates the magnetic helicity. The net chirality is then negative. Eventually, the sign of the magnetic helicity changes and all the remaining fermion chirality is converted to magnetic fields with negative helicity; see Fig. 18, where we show the magnetic energy \(\mathcal{E}_{\rm M}(k,t)\) and the normalized magnetic helicity spectra \(kH_{\rm M}(k)/2\) for Run A at times \(\eta=32\), 320, 1000, 3200, 10,000, and 32,000. We see Figure 19: Same as Fig. 3, but for Run B at times \(\eta=32\), 100, 320, 1000, 3200, 10,000, and 32,000. 
Figure 20: Time dependence of \(\mathcal{E}_{\rm M}\) (black), \(\xi_{\rm M}^{-1}\) (orange), \(\mathcal{H}_{\rm M}\) (red), and \(-2\langle\mu_{5}\rangle/\lambda\) (blue), for (a) Run A with smaller and (b) Run B with larger magnetic helicity than in the balanced case. Dashed lines indicate negative values; at late times \(-2\langle\mu_{5}\rangle/\lambda\) changes sign in (a), and \(\mathcal{H}_{\rm M}\) changes sign in (b). In (a) the dotted line denotes the \(\eta^{-1/2}\log(\eta/\eta_{\rm os})\) scaling of Ref. [15] with \(\eta_{\rm os}=3\). Figure 18: Magnetic energy (solid lines) and normalized helicity spectra \(kH_{\rm M}(k)/2\) (dotted lines with red and blue symbols for positive and negative helicity spectra, respectively) for Run A at times \(\eta=32\), 100, 320, 1000, 3200, 15,000, 22,000, and 32,000. that \(k|H_{\rm M}(k)|/2\) approaches \(\mathcal{E}_{\rm M}(k)\) near the maximum. In view of the spectral realizability condition, Eq. (16), this means that the magnetic field is fully helical. Away from the maxima, the inequality is no longer saturated, but this is a typical effect in all turbulent flows where the current helicity spectrum shows a Kolmogorov-type spectrum, making the magnetic helicity spectrum therefore steeper than what could still be allowed by the spectral realizability condition [77]. On the other hand, when the fermion chirality is weak (Run B), the usual inverse magnetic cascade quickly gets established; see Fig. 19. In either case, the fermion chirality gets ultimately converted into magnetic helicity. It just takes a little longer than when the magnetic helicity is initially weak. At the end, however, the usual inverse cascade for a fully helical magnetic field commences. The sign of magnetic helicity can be positive or negative, depending on the sign of the initial total chirality. To illustrate how the decay laws change when magnetic helicity and fermion chirality no longer balance, we plot in Fig. 20 the time dependencies of \(\mathcal{E}_{\rm M}\), \(\xi_{\rm M}\), \(\mathcal{H}_{\rm M}\), and \(-2\langle\mu_{5}\rangle/\lambda\), for (a) Run A with 20% smaller and (b) Run B with 20% larger magnetic helicity than in the balanced case. In both cases, we see a tendency of the decays of \(\mathcal{E}_{\rm M}\) and \(\xi_{\rm M}^{-1}\) to slow down while those of \(\mathcal{H}_{\rm M}\) and \(-2\langle\mu_{5}\rangle/\lambda\) follow separate evolutions. Especially in the case of Run B, where the magnetic helicity dominates of the fermion chirality, we see a tendency toward a \(\mathcal{E}_{\rm M}\propto\xi^{-1}\propto t^{-2/3}\) as well as \(\mathcal{H}_{\rm M}=\rm const\) evolution, as expected from magnetic helicity conservation.
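The decay exponents quoted in this appendix and in the main text can be cross-checked against the corresponding conserved quantities. The sketch below only verifies the scalings, assuming \(\mathcal{H}_{\rm M}\sim\mathcal{E}_{\rm M}\xi_{\rm M}\) for a (nearly) fully helical field and the dimensional estimate \(I_{\rm H}\sim\mathcal{E}_{\rm M}^{2}\xi_{\rm M}^{5}\) for the Hosking integral.

```python
# Sketch: consistency of the decay exponents with the conserved quantities.
from fractions import Fraction as F

# Fully helical late-time evolution (Runs A/B): E_M ~ eta^(-2/3), xi_M ~ eta^(2/3)
p_EM, p_xi = F(-2, 3), F(2, 3)
print("H_M ~ E_M * xi_M     ~ eta^", p_EM + p_xi)          # 0, i.e. conserved

# Balanced (zero total chirality) scaling: E_M ~ eta^(-10/9), xi_M ~ eta^(4/9)
p_EM, p_xi = F(-10, 9), F(4, 9)
print("I_H ~ E_M^2 * xi_M^5 ~ eta^", 2 * p_EM + 5 * p_xi)  # 0, i.e. conserved
print("E_M * xi_M           ~ eta^", p_EM + p_xi)          # -2/3, same exponent as <mu_5>
```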
2307.10520
Photo-excited charge carrier lifetime enhanced by slow cation molecular dynamics in lead iodide perovskite FAPbI$_3$
Using muon spin relaxation ($\mu$SR) measurements on formamidinium lead iodide [FAPbI$_3$, where FA denotes HC(NH$_2)_2$] we show that, among the five structurally distinct phases of FAPbI$_3$ exhibited along two different temperature hysteresis paths, the reorientation motion of FA molecules is quasi-static below $\approx50$ K over the time scale of 10$^{-6}$ s in the low-temperature (LT) hexagonal (Hex-LT, $<160$ K) phase, which has a relatively long photo-excited charge carrier lifetime ($\tau_{\rm c}\sim$10$^{-6}$ s). In contrast, a sharp increase in the FA molecular motion was found above $\approx50$ K in the Hex-LT phase, the LT-tetragonal phase (Tet-LT, $<140$ K), the high-temperature (HT) hexagonal phase (Hex-HT, 160-380 K), and the HT-tetragonal phase (Tet-HT, 140-280 K), where $\tau_{\rm c}$ decreases with increasing temperature. More interestingly, the reorientation motion is further promoted in the cubic phase at higher temperatures ($>380/280$ K), while $\tau_{\rm c}$ recovers to a value comparable to or larger than that in the LT phases. These results indicate that there are two factors that determine $\tau_{\rm c}$: one related to the local reorientation of cationic molecules that is not encumbered by phonons, and the other to the high symmetry of the bulk crystal structure.
M. Hiraishi, A. Koda, H. Okabe, R. Kadono, K. A. Dagnall, J. J. Choi, S. -H. Lee
2023-07-20T01:40:37Z
http://arxiv.org/abs/2307.10520v1
Photo-excited charge carrier lifetime enhanced by slow cation molecular dynamics in lead iodide perovskite FAPbI\({}_{3}\) ###### Abstract Using muon spin relaxation (\(\mu\)SR) measurements on formamidinium lead iodide [FAPbI\({}_{3}\), where FA denotes HC(NH\({}_{2}\))\({}_{2}\)] we show that, among the five structurally distinct phases of FAPbI\({}_{3}\) exhibited through two different temperature hysteresis, the reorientation motion of FA molecules is quasi-static below \(\approx 50\) K over the time scale of \(10^{-6}\) s in the low-temperature (LT) hexagonal (Hex-LT, \(<160\) K) phase which has relatively longer photo-excited charge carrier lifetime (\(\tau_{\rm c}\sim\)\(10^{-6}\) s). In contrast, a sharp increase in the FA molecular motion was found above \(\approx 50\) K in the Hex-LT phase, LT-tetragonal phase (Tet-LT, \(<140\) K), the high-temperature (HT) hexagonal phase (Hex-HT, \(160\)-\(380\) K), and the HT-tetragonal phase (Tet-HT, \(140\)-\(280\) K) where \(\tau_{\rm c}\) decreases with increasing temperature. More interestingly, the reorientation motion is further promoted in the cubic phase at higher temperatures (\(>380/280\) K), while \(\tau_{\rm c}\) is recovered to comparable or larger than that of the LT phases. These results indicate that there are two factors that determine \(\tau_{\rm c}\), one related to the local reorientation of cationic molecules that is not unencumbered by phonons, and the other to the high symmetry of the bulk crystal structure. + Footnote †: preprint: AIP/123-QED ## I Introduction Hybrid organic-inorganic perovskites (HOIPs) are promising for application in solar cells [1] and various other optical devices due to their low production cost [2; 3] and excellent optoelectronic properties [4; 5]. The high photoelectric conversion efficiency of more than 25% shown as a solar cell material [6] is attributed to the long photo-excited charge carrier lifetime (\(\tau_{\rm c}\geq 10^{-6}\) s), which translates into a large carrier diffusion length despite the modest carrier mobility [7]. Several microscopic mechanisms have been proposed as intrinsic factors for the unusually long \(\tau_{\rm c}\), including ferroelectric domain deformation [8; 9; 10], the Rashba effect [11; 12], photon recycling [13], and large polarons [14; 15]. In particular, since the conversion efficiency of all-inorganic perovskite solar cells was well below that of HOIPs [16; 17; 18], the presence of organic cations was initially thought to be crucial to achieving higher efficiency in solar cells. Recently, however, the efficiency of all-inorganic perovskites has also improved significantly to over 20% [19], and there is an increasing need to clarify the microscopic relationship between organic cations and solar cell efficiency in order to obtain guidelines for future material design and selection. Among the aforementioned microscopic mechanisms, three theoretical models have been proposed that involve the electric polarization of cation molecules. First, it is predicted that nanoscale ferroelectric domains formed by an array of organic cations spatially separate photo-excited electrons and holes, thereby reducing their recombination [8; 20; 21]. Second, in the Rashba effect model [11; 22; 12], the electronic bands of inorganic atoms, which are accompanied by spin-orbit interactions, are split by the electric field from the cation molecule to create an effective indirect band gap, thereby increasing carrier longevity. 
The third model is that organic cations reorient locally in response to the presence of photo-excited carriers, resulting in the formation of large polarons [14; 15; 23]. In the last case, carriers screened by the local polarization of organic cations would be protected from scattering by defects and phonons, leading to extended \(\tau_{\rm c}\). In order to examine these models, it is important to experimentally elucidate the microscopic correlations between organic cation motions and \(\tau_{\rm c}\). We recently performed muon spin relaxation (\(\mu\)SR) measurements on methylammonium lead iodide (MAPbI\({}_{3}\), where MA denotes CH\({}_{3}\)NH\({}_{3}\)) and found that the \(\mu\)SR linewidth (\(\Delta_{\rm MA}\)), determined by the magnetic dipolar fields exerted from the nuclear magnetic moments of the MA molecules, varies in proportion to the relaxation time (\(\tau_{\rm r}\)) of molecular reorientation induced by thermal agitation, indicating that \(\Delta_{\rm MA}\) is a good probe of local molecular motion over a time range of \(10^{-11}<\tau_{\rm r}<10^{-8}\) s [24]. Moreover, the temperature dependence of \(\Delta_{\rm MA}\) (which turned out to be non-monotonic) is also proportional to the photo-luminescence (PL) lifetime (\(\tau_{\rm PL}\), \(\propto\tau_{\rm c}\)), suggesting that \(\Delta_{\rm MA}\propto\tau_{\rm r}\propto\tau_{\rm PL}\)[24]. Considering that the local electric dipole (\(\mathbf{P}\)) of the MA molecule seen from carriers is effectively reduced to \(\overline{\mathbf{P}}\) by motional averaging, as in the case of \(\Delta_{\rm MA}\) (i.e., \(\overline{\mathbf{P}}\propto\Delta_{\rm MA}\)), this is interpreted as indicating that the longer \(\tau_{\rm r}\) is, the larger \(\overline{\mathbf{P}}\) becomes, promoting the formation of large polarons by allowing a local dielectric response of the molecules. In other words, it is suggested that, for the local reorientation of cationic molecules, the disadvantage associated with the disturbance of the reorientation motion by phonons (shortening of \(\tau_{\rm r}\)) at high temperatures is greater than the advantage of the increased freedom of reorientation motion due to the structural phase transition. This is in contrast to the intuitive expectation in which only the latter effect would be considered, and it awaits further experimental verification as it needs to be carefully examined with similar materials. Here, we report our \(\mu\)SR study on formamidinium lead iodide [FAPbI\({}_{3}\), where FA denotes HC(NH\({}_{2}\))\({}_{2}\)] to elucidate the local FA molecular motion vs. temperature in detail. We found a correlation between the \(\mu\)SR linewidth and \(\tau_{\rm PL}\) similar to that observed in the case of MAPbI\({}_{3}\). Specifically, among the five structurally distinct phases exhibited by FAPbI\({}_{3}\) along two different temperature hysteresis paths, the reorientation motion of FA molecules is quasi-static below \(\approx 50\) K in the low-temperature (LT) hexagonal (Hex-LT, \(<160\) K) phase, which exhibits a relatively long \(\tau_{\rm PL}\) (\(\sim 10^{-6}\) s). In contrast, a sharp increase in the FA molecular motion was found above \(\approx 50\) K in the Hex-LT phase, the LT-tetragonal phase (Tet-LT, \(<140\) K), the high-temperature (HT) hexagonal phase (Hex-HT, 160-380 K), and the HT-tetragonal phase (Tet-HT, 140-280 K), where \(\tau_{\rm PL}\) is reduced with increasing temperature. 
More interestingly, in the cubic phase (\(>380/280\) K) where the FA molecular motion is further accelerated by thermal agitations, \(\tau_{\rm PL}\) is recovered to as large as or larger than that of the LT phases. These results suggest that \(\tau_{\rm c}\) is determined by both the relatively slow reorientation motion of cationic molecules (which is not subject to strong thermal disruption) and unknown factors associated with the cubic crystal structure. ## II \(\mu\)Sr Experiment and DFT calculation The \(\mu\)SR experiments were performed using the ARTEMIS spectrometer installed in the S1 area at the Materials and Life Science Experimental Facility in J-PARC, where a nearly 100% spin-polarised pulsed \(\mu^{+}\) beam (25 Hz, with the full width at half-maximum of 80 ns and a momentum of 27 MeV/c) was transported to a powder sample (\(\sim\)2 g, packed in a disk shape using aluminum foil) mounted on a silver sample holder attached to a He gas-flow cryostat for controlling temperature. \(\mu\)SR spectra [time-dependent positron asymmetry, \(A(t)\)] were measured in the two-step temperature change sequence shown in Fig. 1(a). FAPbI\({}_{3}\) synthesized at ambient temperature has the most stable hexagonal (Hex-HT) crystal structure, but when the temperature is raised above \(\sim\)380 K, it transitions to a metastable cubic phase. Then, upon rapid cooling down from this cubic crystal structure, FAPbI\({}_{3}\) does not return to the hexagonal crystal structure, but undergoes a structural phase transition to the tetragonal crystal structure. Furthermore, below 160 K and between 160 and 280 K, the FA molecule assumes two different tetragonal structures (Tet-LT and Tet-HT) associated with the rotational degrees of freedom. Considering this complex phase diagram, \(\mu\)SR measurements in the first sequence were performed on the as-prepared (ap-) sample after cooling down from \(\sim\)300 K to the lowest temperature (\(\sim\)50 K) to follow the path of hexagonal to cubic transition. The sample was then quenched by rapid cooling from 417 K to \(\sim\)40 K, and \(\mu\)SR measurements across the tetragonal-to-cubic transition were performed. The DFT calculations were performed to investigate the local structure of H defects (mimicking Mu) using the projector augmented wave approach [25] implemented in the Vienna _ab initio_ simulation package (VASP) [26] with the Perdew-Burke-Ernzerhof (PBE) exchange correlation potential [27], where the lattice parameters reported in the literature were adopted [28]. The cutoff energy for the plane-wave basis set was 400 eV. The distribution of theelocal magnetic field at the muon sites was calculated using Dipelec program [29]. Crystal structures were visualized using the VESTA program [30]. ## III Result In general, \(\mu^{+}\) implanted into a material behaves as a light isotope of hydrogen (denoted hereafter by the element symbol Mu). Especially in insulators, it is known that \(\mu^{+}\) can interact with carriers generated by the electronic excitation associated with the implantation and can take on several metastable relaxed-excited states, including neutral hydrogen atom-like state (Mu\({}^{0}\)). The specific electronic state including its valence is determined by the impurity level associated with Mu and Figure 1: (a) The temperature variation sequence for \(\mu\)SR measurements in FAPbI\({}_{3}\), (b) photo-luminescence (PL) energy inferring the band gap (quoted from Ref. 
[31]), and (c) the initial asymmetry of the \(\mu\)SR spectra under a transverse field of 2.0 mT. Hex-LT/HT denotes the low/high-temperature hexagonal phase, and Tet-LT/HT denotes the low/high-temperature tetragonal phase. The as-prepared sample (Hex-HT) was cooled down to the base temperature, and \(\mu\)SR measurements were performed with increasing temperature to reach the cubic phase; the sample was then quenched by rapid cooling for the \(\mu\)SR measurements in the tetragonal phase. the band structure of the host material [32]. Therefore, in order to use Mu as a probe, it is often crucial to know the electronic state of Mu in the target material. To this end, \(\mu\)SR measurements under a weak transverse field (\(B_{\rm TF}\)) are useful to evaluate the yield of the diamagnetic Mu state (Mu\({}^{+}\) or Mu\({}^{-}\)) from the initial asymmetry distribution in the \(\mu\)SR time spectra: \[A(t)\simeq A_{\rm d}e^{-\lambda_{\perp}t}\cos\omega_{\mu}t+A_{\rm p}\cos\omega_{\rm p}t, \tag{1}\] where \(A_{\rm d}\) is the partial asymmetry of the diamagnetic component, \(\lambda_{\perp}\) is the transverse relaxation rate, \(\omega_{\mu}=\gamma_{\mu}B_{\rm TF}\) with \(\gamma_{\mu}=2\pi\times 135.53\) MHz/T being the muon gyromagnetic ratio, \(A_{\rm p}\) (= \(A_{0}-A_{\rm d}\), where \(A_{0}\) is the total asymmetry corresponding to 100% \(\mu^{+}\) polarization) is that of the paramagnetic (Mu\({}^{0}\)) component, and \(\omega_{\rm p}=\frac{1}{2}(\gamma_{e}-\gamma_{\mu})B_{\rm TF}\) with \(\gamma_{e}=2\pi\times 28.024\) GHz/T being the electron gyromagnetic ratio (which is valid when \(\omega_{\rm p}\) is much smaller than the \(\mu^{+}\)-\(e^{-}\) hyperfine frequency). In the present measurement (\(B_{\rm TF}=2.0\) mT), \(\omega_{\rm p}\) (\(\simeq 2\pi\times 28\) MHz) far exceeds the time resolution determined by the muon pulse width (\(\delta\approx\)80 ns at J-PARC, the corresponding Nyquist frequency being \(1/2\delta\approx 6.3\) MHz), and the second term is averaged out to yield \[A(t)\simeq A_{\rm d}e^{-\lambda_{\perp}t}\cos\omega_{\mu}t. \tag{2}\] The temperature dependence of \(A_{\rm d}\) derived from curve fits using Eq. (2) is shown in Fig. 1(c). Interestingly, \(A_{\rm d}\) in the hexagonal phase shows a pronounced temperature dependence, decreasing significantly from \(\sim\)0.15 to \(\sim\)0.05 with the phase transition from the Hex-LT phase to the Hex-HT phase. The reduction of \(A_{\rm d}\) (\(<A_{0}\simeq 0.23\)) in the hexagonal phase suggests that a part of the implanted \(\mu^{+}\)s takes a paramagnetic relaxed-excited state, indicating that the band structure with respect to the impurity level associated with Mu is considerably different between the tetragonal and hexagonal phases. This is consistent with the significant difference in the band gap energy between the hexagonal and tetragonal/cubic phases inferred from the photo-luminescence energy shown in Fig. 1(b). In the following, we will focus mainly on the diamagnetic Mu and consider the effect of the reorientation motion of the FA molecule on the \(\mu\)SR spectra. Figure 2 shows typical ZF/LF-\(\mu\)SR spectra in the respective phases. 
These were analyzed by curve fits using the following equation, \[A(t)\simeq A^{\prime}_{\rm d}e^{-\lambda_{\rm d}t}G^{\rm KT}_{z}(t;\Delta, \nu,B_{\rm LF})+A_{\rm p}e^{-\lambda_{\rm p}t}+A_{\rm bg}, \tag{3}\] where \(A^{\prime}_{\rm d}\simeq A_{\rm d}-A_{\rm bg}\), \(G^{\rm KT}_{z}(t;\Delta,\nu,B_{\rm LF})\) is the dynamical Kubo-Toyabe (KT) function [33], \(\Delta\) is the linewidth determined by random local fields from nuclear magnetic moments, \(B_{\rm LF}\) is the magnitude of LF, \(\nu\) is the fluctuation rate of \(\Delta\), \(\lambda_{\rm d}\) and \(\lambda_{\rm p}\) are the rate of relaxation induced by fluctuation of hyperfine fields exerted from \(\mu^{+}\)-induced charge carriers, and \(A_{\rm bg}\) is the background contributed from \(\mu^{+}\)s stopped in the sample holder and other parts near the sample in the cryostat. The KT function is expressed analytically in the case of static (\(\nu=0\)) and ZF (\(B_{\rm LF}=0\)) conditions, \[G^{\rm KT}_{z}(t;\Delta,0,0)=\frac{1}{3}+\frac{2}{3}\left(1-\Delta^{2}t^{2} \right)e^{-\frac{1}{2}\Delta^{2}t^{2}}. \tag{4}\] Provided that all atoms in the crystal are fixed to their lattice points, the magnitude of \(\Delta\) for a given Mu site (which we define as \(\Delta_{0}\)) is evaluated by calculating a sum of second moments contributed from the \(m\)-th kind of nuclear magnetic moments (\(m=1,2,3\) and 4 for \({}^{1}\)H, \({}^{14}\)N, \({}^{127}\)I, and \({}^{207}\)Pb with natural abundance \(f_{m}\simeq 1\) for \(m=1\)-3 and \(f_{4}=0.226\)), \[\Delta_{0}^{2} \simeq\gamma_{\rm H}^{2}\sum_{j}\langle B_{j}^{2}\rangle=\gamma_{ \rm H}^{2}\sum_{m}f_{m}\sum_{j}\sum_{\alpha=x,y}\sum_{\beta=x,y,z}\gamma_{m}^ {2}(\hat{D}_{j}\mathbf{I}_{m})^{2} \tag{5}\] \[\hat{D}_{j} =D_{j}^{\alpha\beta}=(3\alpha_{j}\beta_{j}-\delta_{\alpha\beta}p_ {j}^{2})/r_{j}^{5},\quad(\alpha,\beta=x,y,z)\] where \(r_{j}=(x_{j},y_{j},z_{j})\) the position vector of the \(j\)-th nucleus (with Mu at the origin), \(\mathbf{\mu}_{m}=\gamma_{\rm H}\mathbf{I}_{m}\) the nuclear magnetic moment with \(\gamma_{m}\) being their gyromagnetic ratio. Because \({}^{14}\)N and \({}^{127}\)I nuclei have spin \(I_{m}\geq 1\), the corresponding \(\mathbf{\mu}_{m}\) is subject to electric quadrupolar interaction with the electric field gradient generated by the point charge of the diamagnetic Mu. This leads to the reduction of effective \(\mathbf{\mu}_{m}\) to the value parallel with \(\mathbf{r}_{j}\) (by a factor \(\sqrt{2/3}\) in the classical limit) [33]. From Eq. (5), it is immediately understood that the linewidth can be divided into the contribution from the FA molecule, \(\Delta_{\rm FA}\), and that of the PbI\({}_{3}\) lattice, \(\Delta_{\rm PbI}\), as follows: \[\Delta_{0}^{2}=\Delta_{\rm FA}^{2}+\Delta_{\rm PbI}^{2}. \tag{6}\] As we will show below, the actual value of \(\Delta\) (and \(\nu\)) varies significantly with the reorientation motion of the FA molecule Figure 2: Typical examples of ZF/LF-\(\mu\)SR time spectra (\(\mu\)-\(e\) decay asymmetry) observed in FAPbJ\({}_{3}\) at various temperatures. (a)–(c) and (f) are those before quench, and (d), (e) are after quench; see Fig. 1 for the corresponding phases. nearby Mu, thus yielding direct information about FA molecular motion. We also conducted \(\mu\)SR measurements under a longitudinal field (\(B_{\rm LF}\)) up to 2.0 mT at each temperature point. The parameters in Eq. (3) were then deduced reliably by global curve fits of the spectra at various \(B_{\rm LF}\). 
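For reference, the static zero-field limit in Eq. (4), which sets the line shape that these fits rely on at low temperature, is easy to evaluate. The sketch below uses an assumed \(\Delta=0.2\) \(\mu\)s\(^{-1}\), of the order of the \(\Delta_{0}\) values discussed below, and shows the characteristic dip and the recovery toward the 1/3 tail.

```python
# Sketch: static zero-field Kubo-Toyabe function, Eq. (4), for an assumed
# linewidth Delta = 0.2 per microsecond (comparable to the Delta_0 values below).
import numpy as np

Delta = 0.2                       # 1/us, illustrative value
t = np.linspace(0.0, 20.0, 201)   # time in microseconds

G_KT = 1.0 / 3.0 + 2.0 / 3.0 * (1.0 - Delta**2 * t**2) * np.exp(-0.5 * Delta**2 * t**2)

print(f"G(0)     = {G_KT[0]:.3f}")    # 1.000
print(f"min G    = {G_KT.min():.3f}")  # ~0.036, dip near t = sqrt(3)/Delta ~ 8.7 us
print(f"G(20 us) = {G_KT[-1]:.3f}")   # recovering toward the 1/3 tail
```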
\(A_{\rm bg}\) was fixed to the value determined by the data at Hex-LT/Tet-LT phases where the depolarization for Mu in the sample was presumed to be complete at later times (\(\Delta\nu\gg 1\)): \(A_{\rm bg}=0.0185\) before quench and \(=0.0159\) after quench. As shown by the solid lines in Fig. 2, all spectra were coherently reproduced by the curve fits in the respective phases. The temperature dependence of the parameters in Eq. (3) obtained from this analysis is shown in Fig. 3 together with the PL lifetime (\(\tau_{\rm I}\)) quoted from Ref. [31]. First, we focus on the temperature dependence of the linewidth \(\Delta\) in the sample before quench. As shown in Fig. 3(c), \(\Delta\) shows a maximum at the lowest temperature of the Hex-LT phase, decreases rapidly with increasing temperature up to \(\sim\)100 K, then begins to increase to reach a maximum around 160 K at the boundary with the Hex-HT phase, and decreases again with increasing temperature up to \(\sim\)300 K. This is qualitatively the same as the \(\mu\)SR results in MAPbI\({}_{3}\), suggesting that \(\Delta_{\rm FA}\) (which is mainly carried by the protons) is reduced due to motional averaging by their reorientation motion. The origin of enhanced \(\Delta\) around the phase boundary is also assumed to be the same as in the case of MA. Namely, the LT-HT phase transition is induced by the thermally agitated reorientation motion of the FA molecules, and the interaction between FA and the PbI\({}_{3}\) lattice resonantly increases near the phase boundary, causing a slowdown. (In the case of MA, this scenario is also supported by the fact that the structural phase transition temperature shifts significantly toward the higher temperature side due to the deutration of the MA molecules.) The above interpretation is quantitatively supported by the values of \(\Delta_{0}\) for the Mu sites deduced from DFT calculations for each phase evaluated using Eq. (5): as seen in Fig. 4(c), \(\Delta\) at the lowest temperature is close to \(\Delta_{0}\) in the presence of stationary FA molecules (\(\sim\)0.2 MHz, indicated by a green band, where the width in the Hex-LT/HT phases represents the ambiguity due to the muon site), whereas in the higher temperature Hex-Cubic phases it approaches \(\sim\)0.1 MHz without FA molecules (\(\simeq\Delta_{\rm PbI}\), yellow band). The parameter \(\nu\), which gives the fluctuation frequency of \(\Delta\) in the KT relaxation function over a time range of \(10^{-8}<\nu^{-1}<10^{-4}\) s (where \(\Delta\) serves as a constant), does not show a pronounced temperature dependence, as seen in Fig. 3(e). This result is in apparent contrast to the increase in \(\nu\) observed near the phase boundary at low temperatures seen in the MA case. However, as is clear from Figs. 1 and 2, the partial asymmetry of the diamagnetic Mu (\(A_{\rm d}\)) decreases rapidly toward the phase boundary in the Hex-LT/HT phase that coincides with the reduction of \(\Delta\), which makes it extremely difficult to observe the change in the KT relaxation function due to the \(\nu\) change in this temperature region. Therefore, the lack of sufficient sensitivity to \(\nu\) in the Hex-LT/HT phase is considered to be responsible for the apparent difference with MA. 
A similar situation would be true for \(\lambda_{\rm d}\), which represents the effect of paramagnetic spin fluctuations on \(\Delta\): the monotonous increase of \(\lambda_{\rm p}\), which reflects the paramagnetic spin fluctuations themselves, is also consistent this interpretation. Next, let us look at the results for the sample after quench into the tetragonal structures. In contrast to the results before quench, \(\Delta\) decreases monotonically with increasing temperature in the Tet-LT phase and approaches the value in the Tet-HT phase; \(\Delta\) exhibits almost no temperature change in the Tet-HT and cubic phases, and its value is almost consistent with the value expected in the absence of FA molecules (\(=\Delta_{\rm PbI}\), yellow band). In the Tet phase, the evaluation of \(\Delta_{0}\) is highly arbitrary because of the large number of inequivalent FA molecular configurations reflecting the relative ease of reorientation for FA: \(\Delta_{0}\) varies from 0.18 to 0.29 MHz depending on whether different FA configurations are taken into account by the virtual crystal approximation or not [as indicated by a green band in Fig. 3(d)]. In any case, \(\Delta\) at the lowest temperature is consistent with \(\Delta_{0}\) where the FA molecules Figure 3: (a), (b) PL lifetime (semi-log plot, quoted from Ref. [31]), (c), (d) the Kubo-Toyabe linewidth \(\Delta\), (e), (f) fluctuation rate \(\nu\), and (g)–(i) exponential relaxation rate versus temperature, where left/right columns represent data on samples before and after quench. The horizontal bands hatched in green in (c) and (d) show the expected values when the FA molecule is stationary (\(\Delta=\Delta_{0}\)), and those in yellow when the FA contribution disappears due to motional averaging (\(\Delta\simeq\Delta_{\rm PbI}\)). appear to be nearly stationary. The temperature dependence of \(\nu\) is similar to that of MA, showing a peak at \(\sim\)100 K where \(\Delta\) exhibits a rapid decrease. This is probably because the yield of diamagnetic Mu in the quenched sample is close to 100% regardless of temperature, as seen in Figs. 1 and 2, which is common to the MA case. Conversely, the temperature dependence of \(\lambda_{\rm d}\) indicates that its sensitivity to paramagnetic spin fluctuations decreases in proportion to \(\Delta\) (which is similar to the case in the sample before quench). ## IV Discussion In a previous study using \({}^{1}\)H-NMR on a sample that seems to correspond to the post-quench phase, the relaxation time of FA molecular reorientation derived from the longitudinal relaxation rate (\(1/T_{1}\)) was reported to decrease rapidly from \(\tau_{\rm r}\sim\)\(10^{-8}\) s to \(4\times 10^{-11}\)s (at 140 K) in the temperature region corresponding to the Tet-LT phase, followed by a moderate decrease to \(1\times 10^{-11}\)s in the Tet-HT and cubic phases [34; 35]. Assuming that this causes fluctuations of \(\Delta_{\rm FA}\) so that \(\nu\simeq 1/\tau_{\rm r}\), we would expect \(\nu\gg\Delta_{\rm FA}\) and \(\Delta_{\rm FA}\) is effectively reduced to zero due to motional averaging. Therefore, the sharp decrease of \(\Delta\) in the Tet-LT phase corresponds to the reduction of \(\tau_{\rm r}\), and is consistent with our interpretation that the value of \(\Delta\) above 140 K corresponds to \(\Delta_{\rm PbI}\). 
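The fast-fluctuation argument above can be made quantitative with the numbers quoted in the text (\(\Delta_{\rm FA}\) of order 0.2 \(\mu\)s\(^{-1}\) and \(\tau_{\rm r}\approx 4\times 10^{-11}\) s near 140 K). The sketch below uses the standard motional-narrowing estimate \(\lambda\approx 2\Delta^{2}/\nu\) for the fast-fluctuation limit of the dynamical KT function; that estimate is a textbook result and is not taken from this paper.

```python
# Sketch: why the FA contribution to Delta is wiped out above ~140 K.
Delta_FA = 0.2e6        # s^-1, assumed static FA contribution to the linewidth
tau_r    = 4.0e-11      # s, FA reorientation time from 1H-NMR near 140 K
nu       = 1.0 / tau_r  # fluctuation rate of Delta_FA

print(f"nu / Delta_FA        = {nu / Delta_FA:.1e}")    # ~1e5 >> 1: fast-fluctuation limit
lambda_eff = 2.0 * Delta_FA**2 / nu                     # motional-narrowing estimate
print(f"narrowed relax. rate = {lambda_eff:.1e} s^-1")  # ~3 s^-1, invisible on the us scale
```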
We also speculate that the slow fluctuations of \(\Delta\) observed in \(\mu\)SR (\(1/\nu\simeq 10^{-5}\) s), except for the peak observed at temperatures near 100 K for the sample after quench, are of a different origin than the reorientation of the FA molecules. The previous DFT simulations of FA molecular rotation showed that the cubic phase has the lowest potential barrier to rotation around all three symmetry axes shown in Fig. 4(d), while the rotation is restricted due to the higher potential barrier around the \(\phi_{1,2}\) axes (0.2-0.6 eV) in the Tet-LT/HT phases [35]. As is illustrated in Fig. 4(d), the fast reorientation around the \(\phi_{3}\) axis reduces the contribution of \(\mu_{m}\) for \({}^{1}\)H and \({}^{14}\)N nuclei, reducing \(\mu_{m}\) to \(\overline{\mu}_{m}\) (corresponding the projection of \(\mu_{m}\) to the \(\phi_{3}\) axis). Specifically, the nuclear dipolar fields in Eq. (5) are expressed as \[B_{j}=\frac{\mu_{m}}{r_{j}^{3}}[(3\cos^{2}\theta_{\rm r}-1)\cos\theta_{\rm m} +3\sin\theta_{\rm r}\cos\theta_{\rm r}\sin\theta_{m}\cos\psi_{m}], \tag{7}\] where \(\theta_{\rm r}\) is the polar angle of \(\mathbf{r}_{j}\), \(\theta_{m}\) and \(\psi_{m}\) are the polar and azimuth angle of \(\mathbf{\mu}_{m}\). Assuming that the \(\hat{z}\) axis for the polar coordinates is parallel with \(\phi_{3}\), the term proportional to \(\sin\theta_{m}\) in Eq. (7) is averaged out by the jumping rotation with remaining contribution \(\overline{\mu}_{m}=\mu_{m}\cos\theta_{m}\) when \(\tau_{3}^{-1}\gg\Delta_{\rm FA}\). While the extent of decrease in \(\Delta\) due to this depends on the relationship between the \(\phi_{3}\) axis and Mu site, it is estimated to decrease by a factor of \(1/\sqrt{2}\) in the random average [33]. The FA contribution is eventually eliminated by further averaging around \(\phi_{1,2}\) axis (\(\perp\phi_{3}\)) when \(\tau_{1,2}^{-1}\gg\Delta_{\rm FA}\). Thus, provided that the difference in activation temperatures of reorientation motion around the \(\phi_{3}\) and \(\phi_{1,2}\) axes is large enough, \(\Delta\) is expected to exhibit decrease in two steps with increasing temperature. However, given that the contribution from the FA molecules to \(\Delta\) almost disappears at around 140 K [see Fig. 3(d)], it can be inferred that the potential barrier for the rotation is relatively small for all the three axes in Tet-LT/HT phase. Since the DFT calculations do not take into account the relaxation of PbI\({}_{3}\) lattice [35], this suggests that the lattice relaxation occurs to the extent that it does not interfere with the rotation of the FA molecules. In contrast, the observed temperature dependence of \(\Delta\) in the Hex-LT/HT phase [Fig. 3(c)] is qualitatively similar to Figure 4: Crystal structure of FAPbI\({}_{3}\) for (a) cubic, (b) Hex-HT, and (c) Tet-HT phases, where the yellow-hatched areas show the Mu sites estimated by DFT calculations. (d) Schematic illustrations of FA molecule with three symmetry axes (\(\phi_{i}\), \(i=1,2,3\)) of jumping rotation. The contribution of nuclear magnetic moments (\(\mu_{m}\)) to \(\Delta\) is effectively reduced by the reorientation around the \(\phi_{3}\) axis to \(\overline{\mu}_{m}\) by motional averaging (a projection to the \(\phi_{3}\) axis) and then to zero by further averaging around \(\phi_{1,2}\) axes. 
(e) A mechanism of reducing electric dipoles (\(\mathbf{P}\rightarrow\overline{\mathbf{P}}\approx 0\)) similar to that for \(\mu_{m}\) is illustrated, where the hatched areas indicate local charge asymmetry obtained by DFT calculation [31]. that in the case of MAPbI\({}_{3}\) in that it occurs in almost two steps with a maximum at the boundary of the orthorhombic and tetragonal phases: \(\Delta\) decreases with increasing temperature due to reorientation associated with rotation around the \(C_{3}\) symmetry axis (corresponding to the \(\phi_{3}\) axis for FA molecules) in the tetragonal phase, and then reorientation around the \(C_{4}\) symmetry axis (corresponding to the \(\phi_{1,2}\) axes) develops rapidly near the structural phase transition temperature to the tetragonal phase (\(\sim\)162 K), and \(\Delta\) reaches a maximum once and then decreases gradually. Therefore, it can be inferred that almost the same scenario holds in the Hex-LT/HT phase for FA. This suggests that, in contrast to the Tet-LT/HT phase, the reorientation motion of FA molecules is relatively strongly restricted by the PbI\({}_{3}\) lattice. Let us look at the relationship between \(\tau_{\text{PL}}\) (corresponding to \(\tau_{\text{c}}\)), and \(\Delta_{\text{FA}}\) [\(=(\Delta-\Delta_{\text{PbI}})^{1/2}\)], which is now interpreted as proportional to \(\tau_{\text{c}}\) for the FA molecules. First, it is noteworthy that the overall temperature dependence of \(\Delta\), including the hump around 160 K in the sample before quench, is very similar to that observed for \(\tau_{\text{PL}}\)[31]. As shown in Fig. 3(a) (semi-log plot), the longer \(\tau_{\text{PL}}\) (\(=\tau_{1}\) in Ref. [31]) decreases rapidly above \(\sim\)80 K, stops at around 160-220 K, and then decreases rapidly above 220 K. [\(\Delta\) similar trend is observed for the shorter lifetime (\(\tau_{2}\), not shown)]. This is qualitatively in line with the case of MAPbI\({}_{3}\)[24]. On the other hand, in the sample after quench, \(\tau_{\text{PL}}\) is already more than one order of magnitude shorter than that before quench at around 80 K [the lowest temperature of the data, see Fig. 3(b)], consistent with a more rapid decrease in \(\Delta\) in the Tet-LT phase than in the Hex-LT phase. Such a correlation suggests an intrinsic relationship between \(\tau_{\text{c}}\) and FA molecular motion, as in the case of MA, and supporting the idea that the mechanism causing the change in \(\Delta\) is also acting on the local dielectric permittivity \(\varepsilon_{\text{loc}}(\omega)=\varepsilon_{\text{loc}}^{\prime}(\omega)+i \varepsilon_{\text{loc}}^{\prime\prime}(\omega)\). As illustrated in Fig. 4(e), the FA molecules is accompanied by an electric dipole moment (\(P=0.35\) Debye) along the \(\phi_{1}\) axis due to local charge imbalance [31], and \(\varepsilon_{\text{loc}}^{\prime\prime}(\omega\approx\tau_{\text{c}}^{-1})\) is expected to decrease for \(\tau_{\text{c}}\ll\tau_{\text{c}}\) by motional averaging as in \(\Delta\)[24]. However, since \(P\) has only one component parallel to the \(\phi_{1}\) axis, it differs from the \(\Delta\) case in that it becomes zero by fast jump rotation around either the \(\phi_{2}\) or \(\phi_{3}\) axis. Thus, the anomaly shown by \(\Delta\) and \(\tau_{\text{PL}}\) around 160 K suggests that the activation of the rotational mode around the \(\phi_{1,2}\) axis temporarily inhibits the reorientation around the \(\phi_{3}\) axis in the process of inducing the structural phase transition. 
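The projection argument behind this picture [Eq. (7) and Fig. 4(d)] can be checked directly: averaging the dipolar field over the azimuth \(\psi_{m}\) of a fast jump rotation about \(\phi_{3}\) removes the \(\sin\theta_{m}\) term and leaves only the \(\mu_{m}\cos\theta_{m}\) projection. The angles in the sketch below are arbitrary illustrative choices.

```python
# Sketch: azimuthal average of the dipolar field of Eq. (7) under fast rotation
# about the phi_3 axis (field in units of mu_m / r_j^3; angles are arbitrary).
import numpy as np

theta_r, theta_m = 0.7, 1.1                                # assumed polar angles (rad)
psi = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)   # azimuth of mu_m

term_static   = (3.0 * np.cos(theta_r)**2 - 1.0) * np.cos(theta_m)
term_rotating = 3.0 * np.sin(theta_r) * np.cos(theta_r) * np.sin(theta_m) * np.cos(psi)

print(f"<rotating term> = {term_rotating.mean():.2e}")     # ~ 0
print(f"surviving field = {term_static + term_rotating.mean():.4f}")
# equals (3 cos^2(theta_r) - 1) cos(theta_m): only the projection on phi_3 survives
```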
The implication that the more reorientation of the cation molecule is activated, the less the photo-excited carriers are able to form the large polarons required for longer \(\tau_{\text{c}}\) suggests that, since such activation is a random thermal excitation by phonons, the effect is rather that of disrupting the coherent motion of the cation molecules necessary for large-polaron formation. NMR and neutron scattering suggest that \(\tau_{\text{c}}\sim 10^{-7}\)-\(10^{-8}\) s at temperatures below 50 K where \(\tau_{\text{c}}\) is long. In other words, the cation molecules can rotate even at such low temperatures and are not strongly scattered by phonons, and thus can reorient coherently with respect to the carriers. Based on these considerations, it may be useful to search for combinations of cation molecules and PbX\({}_{3}\) lattices that can suppress the fast reorientation of molecules as a guideline for material design to achieve long \(\tau_{\text{c}}\). Finally, we briefly discuss the origin of the PL lifetime recovery in the cubic phase. It is clear from Figs. 3(a) and (b) that the increase in \(\tau_{\text{PL}}\) in this phase is completely uncorrelated with \(\Delta\), both before and after quench. Therefore, the cause of the increase in \(\tau_{\text{PL}}\) is independent of the degrees of freedom of the FA molecule. This is consistent with the high photoelectric conversion efficiency of CsPbI\({}_{3}\) in the same phase, where MA/FA cations are replaced with Cs ions [19]. Interestingly, the real part of the bulk permittivity \(\varepsilon^{\prime}(\omega)\) shows a clear increase from low temperatures toward the boundary between the Tet-HT and cubic phases at \(\sim\)280 K (\(\varepsilon^{\prime}=44\to 46\)) and further increases with increasing temperature [36]. A similar trend is reported for FAPbCl\({}_{3}\)[37] and CsPbBr\({}_{3}\)[38]. These facts may imply that the recovery of \(\tau_{\text{PL}}\) is related to \(\varepsilon^{\prime}(\omega)\) derived from the displacement of the center of gravity of the cation. ###### Acknowledgements. We would like to thank the MLF staff for their technical support. This work was supported by the Elements Strategy Initiative to Form Core Research Centers, from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT) under Grant No. JPMXP0112101001, and partially by the MEXT Program: Data Creation and Utilization Type Material Research and Development Project under Grant No. JPMXP1122683430. M.H. also acknowledges the support of JSPS KAKENHI Grant No. 19K15033 from MEXT. The \(\mu\)SR experiments were conducted at the Materials and Life Science Experimental Facility (MLF), J-PARC under the support of Inter-University-Research Programs (Proposals No. 2018MI21) by Institute of Materials Structure Science, KEK. S.-H. L. and J. J. C. acknowledge support from the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0016144.
2305.01254
Time-Domain Moment Matching for Second-Order Systems
This paper studies a structure-preserving model reduction problem for large-scale second-order dynamical systems via the framework of time-domain moment matching. The moments of a second-order system are interpreted as the solutions of second-order Sylvester equations, which leads to families of parameterized second-order reduced models that match the moments of an original second-order system at selected interpolation points. Based on this, a two-sided moment matching problem is addressed, providing a unique second-order reduced system that matches the moments at two distinct sets of interpolation points. Furthermore, we also construct reduced second-order systems that match the moments of both the zeroth- and first-order derivatives of the original second-order system. Finally, the Loewner framework is extended to second-order systems, where two parameterized families of models are presented that retain the second-order structure and interpolate sets of tangential data.
Xiaodong Cheng, Tudor C. Ionescu, Monica Pătraşcu
2023-05-02T08:35:14Z
http://arxiv.org/abs/2305.01254v1
# Time-Domain Moment Matching for Second-Order Systems ###### Abstract This paper studies a structure-preserving model reduction problem for large-scale second-order dynamical systems via the framework of time-domain moment matching. The moments of a second-order system are interpreted as the solutions of second-order Sylvester equations, which leads to families of parameterized second-order reduced models that match the moments of an original second-order system at selected interpolation points. Based on this, a two-sided moment matching problem is addressed, providing a unique second-order reduced system that match two distinct sets interpolation points. Furthermore, we also construct the reduced second-order systems that matches the moments of both zero and first order derivative of the original second-order system. Finally, the Loewner framework is extended to the second-order systems, where two parameterized families of models are presented that retain the second-order structure and interpolate sets of tangential data. ## I Introduction Second-order dynamical systems are commonly used to capture the behavior of various physical systems, including electrical circuits, power systems, mechanical systems, see e.g., [1, 2, 3, 4, 5]. The dynamics of a linear time-invariant second-order system is described by \[\mathbf{\Sigma}:\left\{\begin{aligned} & M\ddot{x}(t)+D\dot{x}(t)+Kx(t)& =Bu(t),\\ & C_{1}\dot{x}(t)+C_{0}x(t)&=y(t),\end{aligned}\right. \tag{1}\] where \(M,D,K\in\mathbb{R}^{n\times n}\) are commonly referred to as the mass, damping, and stiffness matrices in mechanical systems. \(B\in\mathbb{R}^{n\times p}\) is the input matrix of external forces, and \(C_{0},C_{1}\in\mathbb{R}^{q\times n}\) are the output matrices for positions and velocities. The transfer function of the system \(\mathbf{\Sigma}\) is given by \[W(s)=(C_{1}s+C_{0})(Ms^{2}+Ds+K)^{-1}B.\] In real applications, the model description (1) often has a high dimension \(n\), which requires a large amount of computational resources and thus hinders simulation, prediction, and control of these systems. Therefore, model reduction techniques for large-scale second-order dynamical systems has been paid increasing attention, and reduced order models are indispensable for efficient analysis and optimization of these structured systems. The essential problem in model reduction of second-order systems is to preserve the second-order structure, allowing for a physical interpretation of the resulting reduced model. However, achieving this preservation is not necessarily straightforward. Although a second-order system (1) can be rewritten in the first-order form, with state vector \([x(t)^{\top}\ \dot{x}(t)^{\top}]^{\top}\), which can be reduced via first-order reduction methods, the reduced-order models generally lose the second-order structure. To cope with this structure-preserving problem, second-order balancing approaches have been proposed in e.g., [6, 7, 8, 9, 10]. The so-called _position and velocity Gramians_ are defined as the diagonal blocks in the Gramian matrices of the first-order representation. Then, balanced truncation can be performed based on different pairs of position and velocity Gramians. However, unlike the balanced truncation in the first-order case, these methods can hardly preserve stability and provide a global error bound. A port-Hamiltonian approach in [11], in contrast, reduces a second-order system via a generalized Hamiltonian framework and preserves the Hamiltonian structure and stability. 
Recently, a positive real balanced truncation method is presented in [12], which guarantees stability and passivity of reduced second-order model. The model reduction problem in [13, 14] is tackled by optimization approaches, where reduced systems are constructed as the optimal solution of an \(H_{2}\)-optimization problem subject to certain structural constraint. In [15, 16, 17], a clustering-based framework is considered to simplify the structure of second-order network systems, and the scheme is based on identifying and aggregating nodal states that have similar responses to external input signals. Moment matching techniques provide efficient tools for model reduction of dynamical systems, see [18, 19, 20, 21, 22] for an extensive overview for first-order systems. By using the projection matrices generated in the Krylov subspace, reduced models are constructed to match the original system at selected interpolation points in the complex plane. Recent extensions to second-order systems are found in e.g., [23, 24, 25, 26, 27], in which the second-order Krylov subspace is introduced to preserve the second-order structure. A time-domain approach to moment matching has been presented in [28, 21], where the moments of a system are characterized by the unique solutions of Sylvester equations. It is shown that there is a one-to-one relation between the moments and the steady-state response, which is obtained by interconnecting the system with a signal generator interpreting a set of interpolation points. This time-domain approach has been further developed in e.g., [29, 30, 31] for port-Hamiltonian systems and two-sided moment matching problem. The current paper extends the time-domain moment matching approach to linear second-order systems in (1). Particularly, we represent the moments of \(W(s)\) at a set of interpolation points by the unique solution of a second-order Sylvester equation. Thereby, a family of parameterized reduced models in the second-order form is constructed, based on which, we further analyze reduced models that preserve stability and passivity. The another contribution is the two-sided moment matching approach, with which the reduced second-order model matches the moments of \(W(s)\) at two distinct sets of interpolation points. Furthermore, we also study the problem of time-domain moment matching for the first-order derivative of the transfer function of the system (1), denoted by \(W^{\prime}(s)\). These moments are shown to have a one-to-one relation with the steady-state response of the system composed of the state-space representation of \(W^{\prime}(s)\) and two dual signal generators in a cascade form. We present a reduced-order model that achieves moment matching at both zero and first-order derivatives of \(W(s)\). Finally, the Loewner framework is extended to the second-order systems, where we present two families of parameterized systems that not only match given sets of right and left tangential data but also possess the second-order structure. Given a set of right tangential interpolation data, we present two approaches in the Loewner framework to establish a second-order model that interpolates the data. The paper is organized as follows. In Section II, we present preliminary results regarding time-domain moment matching for linear systems. In Section III, the the moments of a second-order system are characterized with second-order Sylvester equations, and the time-domain moment matching approach for second-order systems is presented. 
The moment matching problems pertaining two-sided moment matching, pole placement, and first-order derivatives are discussed in Section IV. In Section V, the second-order Loewner framework is presented, and finally, concluding remarks are made in Section VII. _Notation:_ The symbol \(\mathbb{R}\) and \(\mathbb{C}\) denotes the sets of real and complex numbers, respectively, and \(\mathbb{C}^{-}\) and \(\mathbb{C}^{0}\) are the set of complex numbers with negative real part and zero real part, respectively. \(\emptyset\) denotes the empty set, and \(\mathds{1}_{n\times m}\) represents a matrix with all elements equal to \(1\). For a complex matrix \(A\in\mathbb{C}^{n\times m}\), \(A^{*}\in\mathbb{C}^{m\times n}\) denotes the conjugate transpose of \(A\). Moreover, \(\sigma(A)\) represents the set of the eigenvalues of \(A\), and \(\det(A)\) represents the determinant of \(A\). ## II Preliminaries In this section we briefly recall the notion of time-domain moment matching a stable LTI system of order one, see e.g., [32, 33]. ### _Time-Domain Moment Matching for Linear Systems_ Consider a single input-single output (SISO) linear time-invariant (LTI) minimal system \[\Sigma:\ \dot{x}=Ax+Bu,\quad y=Cx, \tag{2}\] with the state \(x\in\mathbb{R}^{n}\), the input \(u\in\mathbb{R}\) and the output \(y\in\mathbb{R}\). The transfer function of (2) is \[K(s)=C(sI-A)^{-1}B,\quad K:\mathbb{C}\to\mathbb{C}. \tag{3}\] Assume that (2) is a minimal realization of the transfer function \(K(s)\). The moments of (3) are defined as follows. **Definition 1**.: _[_32, 34_]_ _The \(k\)-moment of system (2) with the transfer function \(K\) as in (3), at \(s_{1}\in\mathbb{R}\) is defined by \(\eta_{k}(s_{1})=(-1)^{k}/k!\left[\mathrm{d}^{k}\,K(s)/\mathrm{d}s^{k}\right]_{ s=s_{1}}\in\mathbb{R}\)._ For the sake of clarity, without loss of generality, throughout the rest of this section, we consider real quantities. Picking the points \(s_{1},\ldots,s_{\nu}\in\mathbb{R}\) let the \(S\in\mathbb{R}^{\nu\times\nu}\), with the spectrum \(\sigma(S)=\{s_{1},\ldots,s_{\nu}\}\). Let \(L=[1\ \ldots\ 1]\in\mathbb{R}^{1\times\nu}\), such that the pair \((L,S)\) is observable. Denote by \(\Pi\in\mathbb{R}^{n\times\nu}\) be the solution of the Sylvester equation \[A\Pi+BL=\Pi S. \tag{4}\] Furthermore, since the system is minimal, assuming that \(\sigma(A)\cap\sigma(S)=\emptyset\), then \(\Pi\) is the unique solution of the equation (4) and \(\mathrm{rank}\ \Pi=\nu\), see e.g. [35]. Then, the moments of (2) are characterised as follows **Proposition 1**.: _[_32_]_ _The moments of system (2) at the interpolation points \(\{s_{1},s_{2},...,s_{\ell}\}=\sigma(S)\) are in one-to-one relation1 with the elements of the matrix \(C\Pi\)._ Footnote 1: By one-to-one relation between a set of moments and the elements of a matrix, we mean that the moments are uniquely determined by the elements of the matrix. The following proposition gives necessary and sufficient conditions for a low-order system to achieve moment matching. **Proposition 2**.: _[_32_]_ _Consider the LTI system_ \[\dot{\xi}=F\xi+Gu,\quad\psi=H\xi,\] _with \(F\in\mathbb{R}^{\nu\times\nu},\ G\in\mathbb{R}^{\nu}\) and \(H\in\mathbb{R}^{p\times\nu}\), and the corresponding transfer function_ \[K_{G}(s)=H(sI-F)^{-1}G.\] _Fix \(S\in\mathbb{R}^{\nu\times\nu}\) and \(L\in\mathbb{R}^{1\times\nu}\), such that the pair \((L,S)\) is observable and \(\sigma(S)\cap\sigma(A)=\emptyset\). Furthermore, assume that \(\sigma(F)\cap\sigma(S)=\emptyset\). 
The reduced system matches the moments of system (2) at \(\sigma(S)\) if and only if_ \[HP=C\Pi,\] _where the invertible matrix \(P\in\mathbb{R}^{\nu\times\nu}\) is the unique solution of the Sylvester equation \(FP+GL=PS\)._ We are now ready to present a family of \(\nu\)-th order reduced models, parameterized in \(G\), that match \(\nu\) moments of the given system (2). The reduced system \[\Sigma_{G}:\ \dot{\xi}=(S-GL)\xi+Gu,\quad\psi=C\Pi\xi, \tag{5}\] with the transfer function \[K_{G}(s)=C\Pi(sI-S+GL)^{-1}G, \tag{6}\] describes a family of \(\nu\)-th order models that achieve moment matching at a fixed \(\sigma(S)\), satisfying the properties 1. \(\Sigma_{G}\) is parameterized in \(G\in\mathbb{R}^{\nu}\), 2. \(\sigma(S-GL)\cap\sigma(S)=\emptyset\). ### _Time-Domain Moment Matching for MIMO Systems_ The results can be directly extended to the MIMO case, see, e.g., [33] for more details. Consider a MIMO system (2), with input \(u(t)\in\mathbb{R}^{m}\), output \(y(t)\in\mathbb{R}^{p}\) and the transfer function \(K(s)\in\mathbb{C}^{p\times m}\). Let \(S\in\mathbb{C}^{\nu\times\nu}\) and \(L=[l_{1}\ l_{2}\ \ldots\ l_{\nu}]\in\mathbb{C}^{m\times\nu}\), \(l_{i}\in\mathbb{C}^{m}\), \(i=1,...,\nu\), be such that the pair \((L,S)\) is observable. Let \(\Pi\in\mathbb{C}^{n\times\nu}\) be the unique solution of the Sylvester equation (4). Then the moments \(\eta(s_{i})=K(s_{i})l_{i}\), \(\eta(s_{i})\in\mathbb{C}^{p}\), \(i=1,...,\nu\), of \(K(s)\) at \(\{s_{1},...,s_{\nu}\}=\sigma(S)\) are in one-to-one relation with \(C\Pi\). The model reduction problem for MIMO systems boils down to finding a \(\nu\)-th order model described by the equations (5), with the transfer function \(K_{G}\) as in (6), \(G\in\mathbb{R}^{\nu\times m}\), which satisfies the right tangential interpolation conditions [36] \[K(s_{i})l_{i}=K_{G}(s_{i})l_{i},\ i=1,...,\nu.\] It immediately follows that a family of reduced order MIMO models that achieve moment matching in the sense of satisfying the tangential interpolation conditions is given by \(\Sigma_{G}\) described by the equation (5). ## III Moments and Moment Matching of Second-order System ### _Moments of Second-Order Systems_ In this section, we characterize the moments of the second-order system \(\mathbf{\Sigma}\) in (1) at a set of interpolation points that is disjoint from the set of poles of \(\mathbf{\Sigma}\), which is defined as follows. \[\Omega:=\{s\in\mathbb{C}\mid\det(Ms^{2}+Ds+K)=0\}, \tag{7}\] with \(|\Omega|=2n\). Following [21, 18], the moments of second-order systems are defined as follows. **Definition 2**.: _Let \(s_{\star}\in\mathbb{C}\) such that \(s_{\star}\notin\Omega\). The 0-moment of \(W(s)\) at \(s_{\star}\in\mathbb{C}\) is the complex matrix_ \[\eta_{0}(s_{\star})=W(s_{\star})=(C_{1}s_{\star}+C_{0})(Ms_{\star}^{2}+Ds_{ \star}+K)^{-1}B,\] _and the \(k\)-moment at \(s_{\star}\in\mathbb{C}\) is defined by_ \[\eta_{k}(s_{\star})=\frac{(-1)^{k}}{k!}\left[\frac{\mathrm{d}^{k}}{\mathrm{d} s^{k}}W(s)\right]_{s=s_{\star}},\ k\geq 1\ \text{and integer}. \tag{8}\] Note that the 0-moment of \(W(s)\) at \(s_{\star}\) can be written as \(\eta_{0}(s_{\star})=C_{0}\Pi+C_{1}\Pi s_{\star}\), where \(\Pi\) is the unique solution of the matrix equation \[M\Pi s_{\star}^{2}+D\Pi s_{\star}+K\Pi=B.\] Then, the following lemma is obtained for moments at distinct interpolation points. 
**Lemma 1**.: _Let_ \[S =\mathrm{diag}(s_{1},s_{2},\cdots,s_{\nu}),\ \text{and}\ L=\begin{bmatrix}l_{1}&...&l_{\nu}\end{bmatrix},\] \[Q =\mathrm{diag}(s_{\nu+1},s_{\nu+2},\cdots,s_{2\nu}),\ \text{and}\ R= \begin{bmatrix}r_{1}^{*}&...&r_{\nu}^{*}\end{bmatrix}^{*},\] _where \(s_{i}\in\mathbb{C}\setminus\Omega\), \(\forall i=1,2,\cdots,2\nu\), \(l_{i}\in\mathbb{C}^{p}\), and \(r_{i}\in\mathbb{C}^{1\times q}\), \(i=1,2,...,\nu\). Suppose that the pair \((L,S)\) is observable and the pair \((Q,R)\) is controllable. Then, the 0-moments \(\eta_{0}(s_{i})\) satisfy the following relations_ \[\begin{bmatrix}\eta_{0}(s_{1})&\eta_{0}(s_{2})&\cdots&\eta_{0}(s_{ \nu})\end{bmatrix} =C_{0}\Pi+C_{1}\Pi S,\] \[\begin{bmatrix}\eta_{0}(s_{\nu+1})&\eta_{0}(s_{\nu+2})&\cdots& \eta_{0}(s_{2\nu})\end{bmatrix} =\Upsilon B,\] _where \(\Pi\), \(\Upsilon\in\mathbb{C}^{n\times\nu}\) satisfy the following second-order Sylvester equations_ \[M\Pi S^{2}+D\Pi S+K\Pi =BL, \tag{9}\] \[Q^{2}\Upsilon M+Q\Upsilon D+\Upsilon K =RC_{0}+QRC_{1}. \tag{10}\] Proof.: Let \(\Pi=[\Pi_{1},\Pi_{2},\cdots,\Pi_{\nu}]\in\mathbb{C}^{n\times\nu}\) with \(\Pi_{i}\in\mathbb{C}^{n}\). Then, the matrix equation (9) is written as \[M\Pi_{i}s_{i}^{2}+D\Pi_{i}s_{i}+K\Pi_{i}=Bl_{i},\ \forall i=1,2,\cdots,\nu.\] It leads to \[\Pi_{i}=(Ms_{i}^{2}+Ds_{i}+K)^{-1}Bl_{i}.\] Thus, \(\eta_{0}(s_{i})=C_{0}\Pi_{i}+C_{1}\Pi_{i}s_{i}\) for all \(i=1,2,\cdots,\nu\), which gives the result. Analogously, we denote \(\Upsilon^{*}:=[\Upsilon_{1}^{*},\Upsilon_{2}^{*},\cdots,\Upsilon_{\nu}^{*}]\) with \(\Upsilon_{i}\in\mathbb{C}^{1\times n}\). Then, (10) is equivalent to \[\Upsilon_{i}Ms_{\nu+i}^{2}+\Upsilon_{i}Ds_{\nu+i}+\Upsilon_{i}K=r_{i}^{*}C_{0} +r_{i}^{*}C_{1}s_{\nu+i},\] for all \(i=1,2,\cdots,\nu\). Thus, we obtain \[\Upsilon_{i}=(r_{i}^{*}C_{0}+r_{i}^{*}C_{1}s_{\nu+i})(Ms_{i+\nu}^{2}+Ds_{i+\nu} +K)^{-1},\] which gives the 0-moments \(\eta_{0}(s_{\nu+1})\), \(\cdots\), \(\eta_{0}(s_{2\nu})\). Furthermore, the following lemma provides the characterization of the moments at a single interpolation point with higher order derivatives. **Lemma 2**.: _Consider the second-order system in (1) and \(s_{\star},z_{\star}\in\mathbb{C}\setminus\Omega\). Let the matrices \(S\in\mathbb{C}^{(\nu+1)\times(\nu+1)}\), \(L\in\mathbb{R}^{p\times(\nu+1)}\) and \(Q\in\mathbb{R}^{(\nu+1)\times(\nu+1)}\), \(R\in\mathbb{R}^{(\nu+1)\times q}\) be such that the pair \((L,S)\) is observable, and the pair \((Q,R)\) is controllable, respectively. Suppose \(S\) and \(Q\) are non-derogatory2 such that_ Footnote 2: A matrix is called non-derogatory if its minimal and characteristic polynomials are identical. \[\det(sI-S)=(s-s_{\star})^{\nu+1},\ \det(sI-Q)=(s-z_{\star})^{\nu+1}.\] _Then the following statements hold._ 1. _There exists a one-to-one relation between the moments_ \(\eta_{0}(s_{\star})\)_,_ \(\eta_{1}(s_{\star})\)_,_ \(\cdots\)_,_ \(\eta_{\nu}(s_{\star})\) _and the matrix_ \(C_{0}\Pi+C_{1}\Pi S\)_, where_ \(\Pi\) _satisfies_ \[M\Pi S^{2}+D\Pi S+K\Pi=BL.\] (11) 2. _There exists a one-to-one relation between the moments_ \(\eta_{0}(z_{\star})\)_,_ \(\eta_{1}(z_{\star})\)_,_ \(\cdots\)_,_ \(\eta_{\nu}(z_{\star})\) _and the matrix_ \(\Upsilon B\)_, where_ \(\Upsilon\) _satisfies_ \[Q^{2}\Upsilon M+Q\Upsilon D+\Upsilon K=RC_{0}+QRC_{1}.\] (12) Proof.: For simplicity, let \(\mathscr{F}(s)=(Ms^{2}+Ds+K)^{-1}\). 
Due to \(\mathscr{F}(s)^{-1}\mathscr{F}(s)=I\), we have \[\frac{d}{ds}\left[\mathscr{F}(s)^{-1}\mathscr{F}(s)\right]=\frac{d}{ds} \mathscr{F}(s)^{-1}\mathscr{F}(s)+\mathscr{F}(s)^{-1}\frac{d}{ds}\mathscr{F}(s) =0.\] Thus, we obtain the first-order derivative of \(\mathscr{F}(s)\) as \[\frac{d}{ds}\mathscr{F}(s)=-\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1} \mathscr{F}(s).\] The above equation is important, as it facilitates the calculation of the second-order derivative of \(\mathscr{F}(s)\): \[\begin{split}\frac{d^{2}}{ds^{2}}\mathscr{F}(s)=&- \frac{d}{ds}\left[\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1}\mathscr{F}(s) \right]\\ =& 2\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1} \mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1}\mathscr{F}(s)\\ &-\mathscr{F}(s)\frac{d^{2}}{ds^{2}}\mathscr{F}(s)^{-1} \mathscr{F}(s)\\ =&-2\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1} \frac{d}{ds}\mathscr{F}(s)\\ &-\mathscr{F}(s)\frac{d^{2}}{ds^{2}}\mathscr{F}(s)^{-1} \mathscr{F}(s),\end{split}\] where \[\frac{d}{ds}\mathscr{F}(s)^{-1}=2Ms+D,\text{ and }\frac{d^{2}}{ds^{2}} \mathscr{F}(s)^{-1}=2M. \tag{13}\] More generally, the \(k\)-th order derivative (\(k\geq 2\)) of \(\mathscr{F}(s)\) can be obtained by induction; the result is given by \[\begin{split}\frac{d^{k}}{ds^{k}}\mathscr{F}(s)=&- k\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1}\frac{d^{k-1}}{ds^{k-1}} \mathscr{F}(s)\\ &-\frac{k(k-1)}{2}\mathscr{F}(s)\frac{d^{2}}{ds^{2}}\mathscr{F}( s)^{-1}\frac{d^{k-2}}{ds^{k-2}}\mathscr{F}(s).\end{split} \tag{14}\] We start proving the first statement. Let \(\Pi=[\Pi_{0},\Pi_{1},\cdots,\Pi_{\nu}]\in\mathbb{C}^{n\times(\nu+1)}\) with \(\Pi_{0}=(Ms_{\star}^{2}+Ds_{\star}+K)^{-1}Bl_{0}\) and \[\Pi_{k}:=\frac{1}{k!}\left[\left.\frac{d^{k}}{ds^{k}}\mathscr{F}(s)\right|_{s= s_{\star}}\right]Bl_{0},\text{ }k=1,2,\cdots,\nu, \tag{15}\] where \(l_{0}\in\mathbb{C}^{p}\). Note that multiplying (14) by \(\frac{1}{k!}\) yields \[\begin{split}\frac{1}{k!}\frac{d^{k}}{ds^{k}}\mathscr{F}(s)=& -\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1}\left[\frac{1}{(k-1)!} \frac{d^{k-1}}{ds^{k-1}}\mathscr{F}(s)\right]\\ -&\frac{1}{2}\mathscr{F}(s)\frac{d^{2}}{ds^{2}} \mathscr{F}(s)^{-1}\left[\frac{1}{(k-2)!}\frac{d^{k-2}}{ds^{k-2}}\mathscr{F}(s )\right],\end{split}\] which implies \[\begin{split}\left(Ms_{\star}^{2}+Ds_{\star}+K\right)\Pi_{0}=& Bl _{0},\\ \left(Ms_{\star}^{2}+Ds_{\star}+K\right)\Pi_{1}=&- \left(2Ms_{\star}+D\right)\Pi_{0},\\ \left(Ms_{\star}^{2}+Ds_{\star}+K\right)\Pi_{2}=&- \left(2Ms_{\star}+D\right)\Pi_{1}-M\Pi_{0},\\ &\vdots\\ \left(Ms_{\star}^{2}+Ds_{\star}+K\right)\Pi_{\nu}=&- \left(2Ms_{\star}+D\right)\Pi_{\nu-1}-M\Pi_{\nu-2}.\end{split}\] Therefore, we obtain a series of second-order Sylvester equations as follows. 
\[\begin{split} M\Pi_{0}s_{\star}^{2}+D\Pi_{0}s_{\star}+K\Pi_{0}=& Bl_{0},\\ M(\Pi_{1}s_{\star}^{2}+2\Pi_{0}s_{\star})+D(\Pi_{1}s_{\star}+\Pi_ {0})+K\Pi_{1}=& 0,\\ M(\Pi_{2}s_{\star}^{2}+2\Pi_{1}s_{\star}+\Pi_{0})+D(\Pi_{2}s_{ \star}+\Pi_{1})+K\Pi_{2}=& 0,\\ \vdots&\\ M(\Pi_{\nu}s_{\star}^{2}+2\Pi_{\nu-1}s_{\star}+\Pi_{\nu-2})\\ +& D(\Pi_{\nu}s_{\star}+\Pi_{\nu-1})+K\Pi_{\nu}=& 0. \end{split}\] The above equations can be written in a compact form: \[M\Pi\bar{S}^{2}+D\Pi\bar{S}+K\Pi=B\bar{L}, \tag{16}\] where \(\bar{L}=\begin{bmatrix}l_{0}&0&...&0\end{bmatrix}\in\mathbb{C}^{p\times(\nu+1)}\) and \[\bar{S}=\begin{bmatrix}s_{\star}&1&0&\cdots&0\\ 0&s_{\star}&1&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&\cdots&0&s_{\star}&1\\ 0&\cdots&\cdots&0&s_{\star}\end{bmatrix}\in\mathbb{C}^{(\nu+1)\times(\nu+1)}.\] Next, the moments at \(\sigma(S)\) are characterized. The 0-moment is obtained directly as \[\eta_{0}(s_{\star})=(C_{0}+C_{1}s_{\star})\mathscr{F}(s_{\star})B=C_{0}\Pi_{0}+ C_{1}\Pi_{0}s_{\star}.\] Furthermore, we observe that \[\begin{split}&\frac{d^{k}}{ds^{k}}\left[(C_{0}+C_{1}s)\mathscr{F}( s)\right]\\ =& C_{1}\frac{d^{k-1}}{ds^{k-1}}\mathscr{F}(s)+\frac{d^{k-1}}{ds^{k-1}} \left[(C_{0}+C_{1}s)\frac{d}{ds}\mathscr{F}(s)\right]\\ =& 2C_{1}\frac{d^{k-1}}{ds^{k-1}}\mathscr{F}(s)+\frac{d^{k-2}}{ds^{k- 2}}\left[(C_{0}+C_{1}s)\frac{d^{2}}{ds^{2}}\mathscr{F}(s)\right]\\ =&\cdots=kC_{1}\frac{d^{k-1}}{ds^{k-1}}\mathscr{F}(s)+(C_{0}+C_{1}s) \frac{d^{k}}{ds^{k}}\mathscr{F}(s).\end{split} \tag{17}\] Thus, by the definition of the \(k\)-moment in (8), we have \[\begin{split}\eta_{k}(s_{\star})=&\frac{(-1)^{k}}{k!} \left.\frac{d^{k}}{ds^{k}}\left[(C_{0}+C_{1}s)\mathscr{F}(s)\right]B\right|_{s=s_{ \star}}\\ =&\frac{(-1)^{k}}{(k-1)!}\left.C_{1}\frac{d^{k-1}}{ds^{k-1}} \mathscr{F}(s)B\right|_{s=s_{\star}}\\ &+\frac{(-1)^{k}}{k!}(C_{0}+C_{1}s_{\star})\left.\frac{d^{k}}{ds^{k }}\mathscr{F}(s)B\right|_{s=s_{\star}}\\ =&(-1)^{k}\left[C_{1}\Pi_{k-1}+(C_{0}+C_{1}s_{\star}) \Pi_{k}\right]\\ =&(-1)^{k}\left[C_{0}\Pi_{k}+C_{1}(\Pi_{k-1}+\Pi_{k}s_{ \star})\right],\end{split}\] for \(k=1,2,\cdots,\nu\). Consequently, the following relation holds. \[\begin{split}\begin{bmatrix}\eta_{0}(s_{\star})&\eta_{1}(s_{\star})&\cdots&\eta_{\nu}(s_{\star})\end{bmatrix}=(C_{0}\Pi+C_{1}\Pi S) \Phi_{\nu},\end{split}\] with \(\Phi_{\nu}=\operatorname{diag}(1,-1,1,\cdots,(-1)^{\nu})\). Therefore, there is a one-to-one relation between the moments \(\eta_{k}(s_{\star})\) and the entries of the matrix \(C_{0}\Pi+C_{1}\Pi S\). Notice that the pair \((\bar{L},\bar{S})\) is observable for any \(s_{\star}\). For a given pair \((L,S)\) that is observable, there exists a unique invertible matrix \(T\in\mathbb{R}^{(\nu+1)\times(\nu+1)}\) such that \(\bar{S}=TST^{-1}\) and \(\bar{L}=LT^{-1}\). Substituting \(\bar{S}\) and \(\bar{L}\) into (16) yields the Sylvester equation in (11). Now, we prove the second statement. Before proceeding, we claim that \[\begin{split}\frac{d^{k}}{ds^{k}}\mathscr{F}(s)=&-k \frac{d^{k-1}}{ds^{k-1}}\mathscr{F}(s)\frac{d}{ds}\mathscr{F}(s)^{-1}\mathscr{F}(s) \\ &-\frac{k(k-1)}{2}\frac{d^{k-2}}{ds^{k-2}}\mathscr{F}(s)\frac{d^{2}}{ ds^{2}}\mathscr{F}(s)^{-1}\mathscr{F}(s),\end{split} \tag{18}\] where \(\frac{d}{ds}\mathscr{F}(s)^{-1}\) and \(\frac{d^{2}}{ds^{2}}\mathscr{F}(s)^{-1}\) are defined in (13). The proof is similar to that of (14). 
Let \(\Upsilon=[\Upsilon_{0}^{*},\Upsilon_{1}^{*},\cdots,\Upsilon_{\nu}^{*}]^{*}\in\mathbb{C}^{(\nu+1)\times n}\) with \(\Upsilon_{0}=(r_{0}^{*}C_{0}+r_{0}^{*}C_{1}z_{\star})(Mz_{\star}^{2}+Dz_{\star}+K)^ {-1}\) and \[\Upsilon_{k}:=\frac{1}{k!}\left[\left.\frac{d^{k}}{ds^{k}}(r_{0}^{*}C_{0}+r_{0}^ {*}C_{1}s)\mathscr{F}(s)\right|_{s=z_{\star}}\right],k=1,2,\cdots,\nu,\] where \(r_{0}\in\mathbb{C}^{1\times q}\). Observe that by Definition 2, there is a one-to-one relation between \(\Upsilon_{k}\) and the moment \(\eta_{k}(z_{\star})\): \[\eta_{0}(z_{\star})=\Upsilon_{0}B,\ \eta_{k}(z_{\star})=(-1)^{k}\Upsilon_{k}B.\] For simplicity, we denote \[\phi_{k}=\frac{1}{k!}\left[\left.\frac{d^{k}}{ds^{k}}\mathscr{F}(s)\right|_{s =z_{\star}}\right].\] Thus, from (17) and (18), the following equations hold. \[\Upsilon_{k} =r_{0}^{*}C_{1}\phi_{k-1}+(r_{0}^{*}C_{0}+r_{0}^{*}C_{1}z_{\star} )\phi_{k},\] \[\phi_{k} =-\phi_{k-1}(2Mz_{\star}+D)\mathscr{F}(z_{\star})-\phi_{k-2}M \mathscr{F}(z_{\star}).\] Combining the above equations, we obtain the relation between \(\Upsilon_{k}\), \(\Upsilon_{k-1}\) and \(\Upsilon_{k-2}\): \[\Upsilon_{k}\mathscr{F}(z_{\star})^{-1}= \Upsilon_{k}(Mz_{\star}^{2}+Dz_{\star}+K)\] \[-C_{1}\phi_{k-2}(2Mz_{\star}+D)-C_{1}\phi_{k-3}M\] \[-(C_{0}+C_{1}z_{\star})\phi_{k-1}(2Mz_{\star}+D)\] \[-(C_{0}+C_{1}z_{\star})\phi_{k-2}M\] \[= -\Upsilon_{k-1}(2Mz_{\star}+D)-\Upsilon_{k-2}M.\] Then, similar to the proof of the first statement, we therefore obtain a second-order Sylvester equation: \[\bar{Q}^{2}\Upsilon M+\bar{Q}\Upsilon D+\Upsilon K=\bar{R}C_{0}+\bar{Q}\bar{R}C _{1},\] where \(\bar{R}=\begin{bmatrix}r_{0}^{*}&0&...&0\end{bmatrix}^{*}\in\mathbb{C}^{(\nu+ 1)\times q}\) and \[\bar{Q}=\begin{bmatrix}z_{\star}&0&0&\cdots&0\\ 1&z_{\star}&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&1&z_{\star}&0\\ 0&\cdots&\cdots&1&z_{\star}\end{bmatrix}\in\mathbb{C}^{(\nu+1)\times(\nu+1)}.\] Since both pairs \((\bar{Q},\bar{R})\) and \((Q,R)\) are controllable, there exists a unique invertible matrix \(T\) such that \(\bar{Q}=TQT^{-1}\) and \(\bar{R}=TR\), which yields (12). Let us combine the conclusions in Lemma 1 and Lemma 2, which leads to the following result. **Theorem 1**.: _Consider the second-order system (1) with transfer function \(W(s)\). Let the matrices \(S\in\mathbb{C}^{\nu\times\nu}\), \(L\in\mathbb{R}^{p\times\nu}\) and \(Q\in\mathbb{R}^{\nu\times\nu}\), \(R\in\mathbb{R}^{\nu\times q}\) be such that the pair \((L,S)\) is observable, and the pair \((Q,R)\) is controllable, respectively. Then, the following statements hold._ 1. _If_ \(\sigma(S)\cap\Omega=\emptyset\)_, there is a one-to-one relation between the moments of_ \(W(s)\) _at_ \(\sigma(S)\) _and the matrix_ \(C_{0}\Pi+C_{1}\Pi S\)_, where_ \(\Pi\in\mathbb{C}^{n\times\nu}\) _is the unique solution of_ \[M\Pi S^{2}+D\Pi S+K\Pi=BL.\] (19) 2. _If_ \(\sigma(Q)\cap\Omega=\emptyset\)_, there is a one-to-one relation between the moments of_ \(W(s)\) _at_ \(\sigma(Q)\) _and the matrix_ \(\Upsilon B\)_, where_ \(\Upsilon\in\mathbb{C}^{\nu\times n}\) _is the unique solution of_ \[Q^{2}\Upsilon M+Q\Upsilon D+\Upsilon K=RC_{0}+QRC_{1}.\] (20) Proof.: It follows from the results in Lemma 1 and Lemma 2 that the moments of \(W(s)\) at \(\sigma(S)\) and \(\sigma(Q)\) are characterized by \(C_{0}\Pi+C_{1}\Pi S\) and \(\Upsilon B\), respectively, where \(\Pi\) and \(\Upsilon\) satisfy the second-order Sylvester equations in (19) and (20), respectively. Then, in this proof, we show the solutions of (19) and (20) are unique. 
Consider the following first-order Sylvester equations: \[\begin{bmatrix}0&I\\ -M^{-1}K&-M^{-1}D\end{bmatrix}\tilde{\Pi}+\begin{bmatrix}0\\ M^{-1}B\end{bmatrix}L=\tilde{\Pi}S, \tag{21}\] and \[\tilde{\Upsilon}\begin{bmatrix}0&I\\ -M^{-1}K&-M^{-1}D\end{bmatrix}+R\begin{bmatrix}C_{0}&C_{1}\end{bmatrix}=Q \tilde{\Upsilon}, \tag{22}\] where \(\tilde{\Pi}\in\mathbb{C}^{2n\times\nu}\) and \(\tilde{\Upsilon}\in\mathbb{C}^{\nu\times 2n}\). Note that the roots of \[\begin{vmatrix}sI_{2n}-\begin{bmatrix}0&I\\ -M^{-1}K&-M^{-1}D\end{bmatrix}\end{vmatrix}=0\] coincide with \(\Omega\) in (7). Since \(\sigma(S)\cap\Omega=\emptyset\) and \(\sigma(Q)\cap\Omega=\emptyset\), \(\tilde{\Pi}\) and \(\tilde{\Upsilon}\) are unique solutions of (21) and (22), respectively. Furthermore, we show the one-to-one relations between \(\tilde{\Pi}\) and \(\Pi\) as well as between \(\tilde{\Upsilon}\) and \(\Upsilon\). Partition \(\tilde{\Pi}\) and \(\tilde{\Upsilon}\) as \[\tilde{\Pi}=\begin{bmatrix}\tilde{\Pi}_{a}\\ \tilde{\Pi}_{b}\end{bmatrix},\ \text{with}\ \tilde{\Pi}_{a},\tilde{\Pi}_{b}\in \mathbb{C}^{n\times\nu},\] \[\tilde{\Upsilon}=\begin{bmatrix}\tilde{\Upsilon}_{a}&\tilde{\Upsilon}_{b} \end{bmatrix},\ \text{with}\ \tilde{\Upsilon}_{a},\tilde{\Upsilon}_{b}\in \mathbb{C}^{\nu\times n},\] which lead to \[\tilde{\Pi}_{b} =\tilde{\Pi}_{a}S, \tag{23a}\] \[BL =M\tilde{\Pi}_{b}S+D\tilde{\Pi}_{b}+K\tilde{\Pi}_{a}, \tag{23b}\] and \[-\tilde{\Upsilon}_{b}M^{-1}K+RC_{0}=Q\tilde{\Upsilon}_{a}, \tag{24a}\] \[\tilde{\Upsilon}_{a}-\tilde{\Upsilon}_{b}M^{-1}D+RC_{1}=Q\tilde{ \Upsilon}_{b}. \tag{24b}\] Substituting (23a) to (23b) then yields \(M\tilde{\Pi}_{a}S^{2}+D\tilde{\Pi}_{a}S+K\tilde{\Pi}_{a}=BL\). Due to the uniqueness of the solution, we have \[\tilde{\Pi}=\begin{bmatrix}\Pi\\ \Pi S\end{bmatrix}, \tag{25}\] where \(\Pi\) is the solution of the second-order Sylvester equation in (19). Similarly, from (24a) and (24b), we have \[Q\tilde{\Upsilon}_{b}+\tilde{\Upsilon}_{b}M^{-1}D-RC_{1}=\tilde{ \Upsilon}_{a},\] \[Q^{2}\tilde{\Upsilon}_{b}+Q\tilde{\Upsilon}_{b}M^{-1}D+\tilde{ \Upsilon}_{b}M^{-1}K=RC_{0}+QRC_{1}, \tag{26}\] which yields \[\tilde{\Upsilon}=\begin{bmatrix}Q\Upsilon M+\Upsilon D-RC_{1}&\Upsilon M\end{bmatrix}. \tag{27}\] From (25) and (27), we conclude that \(\Pi\) and \(\Upsilon\) are unique solutions of (19) and (20), respectively. **Remark 1**.: _In fact, the proof of Theorem 1 provides an effective way to obtain \(\Pi\) and \(\Upsilon\) as the solutions of the second order Sylvester equations in (19) and (20). We can solve the first-order Sylvester equations in (21) and (22) to obtain \(\tilde{\Pi}\) and \(\tilde{\Upsilon}\), from which \(\Pi\) and \(\Upsilon\) can be uniquely determined._ ### _Reduced-Order Second-Order Systems_ Using the characterization of moments in Theorem 1, we now define the families of second-order reduced models achieving moment matching at the given interpolation points. The following proposition provides necessary and sufficient conditions for a low-order system \(\mathbf{\hat{\Sigma}}\) to achieve moment matching. **Proposition 3**.: _Consider the second-order reduced model_ \[\mathbf{\hat{\Sigma}}:\left\{\begin{array}{ll}F_{2}\ddot{\xi}(t)+F_{1}\dot{ \xi}(t)+F_{0}\xi(t)&=Gu(t),\\ H_{1}\dot{\xi}(t)+H_{0}\xi(t)&=\eta(t),\end{array}\right. \tag{28}\] _with \(\xi(t),\dot{\xi}(t)\in\mathbb{R}^{\nu}\), \(F_{i}\in\mathbb{R}^{\nu\times\nu}\), for \(i=0,1,2\), and \(G\in\mathbb{R}^{\nu\times p}\), \(H_{1},H_{0}\in\mathbb{R}^{q\times\nu}\). 
Denote the following set_ \[\hat{\Omega}:=\{s\in\mathbb{C}:\det(s^{2}F_{2}+sF_{1}+F_{0})=0\}. \tag{29}\] _Let \(S\in\mathbb{C}^{\nu\times\nu}\), \(L\in\mathbb{R}^{p\times\nu}\) and \(Q\in\mathbb{R}^{\nu\times\nu}\), \(R\in\mathbb{R}^{\nu\times q}\) be such that the pair \((L,S)\) is observable, and the pair \((Q,R)\) is controllable, respectively._ 1. _Assume that_ \(\sigma(S)\cap\Omega=\emptyset\) _and_ \(\sigma(S)\cap\hat{\Omega}=\emptyset\)_. The reduced system_ \(\mathbf{\hat{\Sigma}}\) _matches the moments of_ \(\mathbf{\Sigma}\) _at_ \(\sigma(S)\) _if and only if_ \[C_{0}\Pi+C_{1}\Pi S=H_{0}P+H_{1}PS\] (30) _where_ \(P\in\mathbb{C}^{\nu\times\nu}\) _is the unique solution of the second-order Sylvester equation_ \[F_{2}PS^{2}+F_{1}PS+F_{0}P=GL.\] (31) 2. _Assume that_ \(\sigma(Q)\cap\Omega=\emptyset\) _and_ \(\sigma(Q)\cap\hat{\Omega}=\emptyset\)_. The reduced system_ \(\mathbf{\hat{\Sigma}}\) _matches the moments of_ \(\mathbf{\Sigma}\) _at_ \(\sigma(Q)\) _if and only if_ \[\Upsilon B=PG,\] (32) _where_ \(P\in\mathbb{C}^{\nu\times\nu}\) _is the unique solution of the second-order Sylvester equation_ \[Q^{2}PF_{2}+QPF_{1}+PF_{0}=RH_{0}+QRH_{1}.\] (33) The proof follows a similar reasoning as in [21, 30]. Choosing \(P=I_{\nu}\), we then obtain the family of second-order reduced models \(\mathbf{\hat{\Sigma}}_{G}:\) \[\left\{\begin{array}{ll}F_{2}\ddot{\xi}+F_{1}\dot{\xi}+(GL-F_{2}S^{2}-F_{1} S)\xi&=Gu,\\ H_{1}\dot{\xi}+(C_{0}\Pi+C_{1}\Pi S-H_{1}S)\xi&=\eta(t),\end{array}\right. \tag{34}\] which is parameterized by \(F_{1}\), \(F_{2}\), \(G\) and \(H_{1}\), and matches the moments of \(\mathbf{\Sigma}\) at \(\sigma(S)\). Analogously, the reduced model \(\mathbf{\hat{\Sigma}}_{H}\): \[\left\{\begin{array}{ll}F_{2}\ddot{\xi}+F_{1}\dot{\xi}+(RH_{0}+QRH_{1}-Q^{ 2}F_{2}-QF_{1})\xi&=\Upsilon Bu,\\ H_{1}\dot{\xi}+H_{0}\xi&=\eta(t),\end{array}\right. \tag{35}\] parameterized by \(F_{1},F_{2}\), \(H_{0}\), and \(H_{1}\), matches the moments of \(\mathbf{\Sigma}\) at \(\sigma(Q)\). ### _Stability and Passivity Preserving Moment Matching_ Based on the two families of reduced models in (34) and (35), we derive second-order reduced models that not only match the moments of the original system \(\mathbf{\Sigma}\) at a prescribed set of finite interpolation points but also preserve the stability and passivity of \(\mathbf{\Sigma}\). The second-order system \(\mathbf{\Sigma}\) in (1) is asymptotically stable if \(M>0,\ D>0,\ \text{and}\ K>0\), [37]. It immediately leads to the following result. **Proposition 4**.: _The second-order reduced system \(\mathbf{\hat{\Sigma}}_{G}\) is asymptotically stable for any \(G\), \(F_{2}>0\), and \(F_{1}>0\) that satisfy_ \[GL-F_{2}S^{2}-F_{1}S>0. \tag{36}\] _Moreover, the second-order reduced system \(\mathbf{\hat{\Sigma}}_{H}\) is asymptotically stable for any \(H_{0}\), \(H_{1}\), \(F_{2}>0\), and \(F_{1}>0\) that satisfy_ \[RH_{0}+QRH_{1}-Q^{2}F_{2}-QF_{1}>0. \tag{37}\] Note that both (36) and (37) are linear matrix inequalities, which can be solved via standard LMI solvers, e.g., YALMIP and CVX. Furthermore, with free parameters \(F_{1},F_{2}>0\), and \(G\in\mathbb{R}^{\nu\times p}\), there always exists a solution for (36). Similarly, with \(F_{1},F_{2}>0\), and \(H_{1},H_{0}\in\mathbb{R}^{q\times\nu}\), a solution for (37) is also guaranteed; a small numerical check of condition (36) is sketched after this paragraph. Thereby, we present a particular choice of these parameters in a special case. 
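To make Proposition 4 concrete, the following minimal sketch (Python/NumPy is assumed; all numerical values are illustrative and not taken from the paper) picks a Hurwitz \(S\) together with simple parameters \(F_{1},F_{2}>0\) and \(G\) for which condition (36) holds, and then confirms that the poles of the corresponding reduced quadratic pencil lie in the open left half-plane.

```python
import numpy as np

# Illustrative data: Hurwitz S, L of ones, and a hand-picked feasible parameter set
S = np.diag([-1.0, -2.0])
L = np.ones((1, 2))
F2 = np.eye(2)
F1 = np.diag([1.0, 3.0])
G = np.ones((2, 1))

# Stiffness term of the family (34); condition (36) requires it to be positive definite
F0 = G @ L - F2 @ S @ S - F1 @ S
assert np.all(np.linalg.eigvalsh(F0) > 0)   # F0 is symmetric for this choice

# Poles of the reduced model: eigenvalues of the quadratic pencil via linearization
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(F2, F0), -np.linalg.solve(F2, F1)]])
print(np.linalg.eigvals(A))                 # all eigenvalues have negative real part
```

With \(F_{2},F_{1},F_{0}\) all symmetric positive definite, every root of \(\det(\lambda^{2}F_{2}+\lambda F_{1}+F_{0})=0\) has negative real part, which is exactly the stability statement of Proposition 4 for this particular choice.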
**Proposition 5**.: _Consider \(S\) and \(Q\) with negative real eigenvalues such that_ \[S=T^{-1}\Lambda_{S}T,\ Q=Z\Lambda_{Q}Z^{-1},\] _with \(\Lambda_{S},\Lambda_{Q}<0\) diagonal and \(T,Z\) nonsingular._ 1. _Let_ \(F_{1}=T^{*}\Delta T,\ 0<F_{2}<-T^{*}\Delta\Lambda_{S}^{-1}T,\ G=L^{*},\) _with an arbitrary diagonal matrix_ \(\Delta>0\)_. Then, the reduced system_ \(\mathbf{\hat{\Sigma}}_{G}\) _is asymptotically stable._ 2. _Let_ \(F_{1}=Z\Delta Z^{*},\ 0<F_{2}<-Z\Delta\Lambda_{Q}^{-1}Z^{*},\ H_{0}=R^{*},\ H_{1}=R^{*}Q\) _with an arbitrary diagonal matrix_ \(\Delta>0\)_. Then, the second-order reduced system_ \(\mathbf{\hat{\Sigma}}_{H}\) _is asymptotically stable._ Proof.: With \(\Delta>0\) and \(\Lambda_{S}<0\), both \(F_{1}\) and \(F_{2}\) are positive definite. Then, \(\mathbf{\hat{\Sigma}}_{G}\) is asymptotically stable, if \(GL-F_{2}S^{2}-F_{1}S>0\) holds. Observe that \[GL-F_{2}S^{2}-F_{1}S> L^{*}L+(T^{*}\Delta\Lambda_{S}^{-1}T)S^{2}-T^{*}\Delta TS\] \[= (T^{*})^{-1}L^{*}LT^{-1}+\Delta\Lambda_{S}^{-1}\Lambda_{S}^{2}-\Delta \Lambda_{S}\] \[= (T^{*})^{-1}L^{*}LT^{-1}\geq 0,\] which leads to the first statement. The proof of the second statement follows similar arguments. Next, a passivity-preserving model reduction for the second-order system \(\mathbf{\Sigma}\) is discussed. It follows from e.g., [11, 15] that the original system \(\mathbf{\Sigma}\) is passive if \[M>0,\ D>0,\ K>0,\ C_{1}=B^{\top},\ \text{and}\ C_{0}=0. \tag{38}\] Then, the following results hold. **Proposition 6**.: _Consider the original second-order system \(\mathbf{\Sigma}\), which satisfies the passivity condition in (38). The second-order reduced system \(\mathbf{\hat{\Sigma}}_{G}\) is passive if \(G^{*}=H_{1}=C_{1}\Pi\), and \(F_{1},F_{2}>0\) satisfy_ \[(\Pi^{*}M\Pi-F_{2})S^{2}+(\Pi^{*}D\Pi-F_{1})S+\Pi^{*}K\Pi>0. \tag{39}\] _Moreover, the second-order reduced system \(\mathbf{\hat{\Sigma}}_{H}\) is passive if \(H_{0}=0\), \(H_{1}=B^{*}\Upsilon^{*}\), and \(F_{1},F_{2}>0\) satisfy_ \[Q^{2}(\Upsilon M\Upsilon^{*}-F_{2})+Q(\Upsilon D\Upsilon^{*}-F_{1})+\Upsilon K \Upsilon^{*}>0. \tag{40}\] Proof.: As the conditions \(G^{*}=H_{1}\), \(F_{1},F_{2}>0\) are given, to show the passivity of \(\mathbf{\hat{\Sigma}}_{G}\), we only need the positive definiteness of \(F_{0}\), namely \(GL-F_{2}S^{2}-F_{1}S>0\). From (39), we have \(\Pi^{*}BL-F_{2}S^{2}-F_{1}S>0\), which holds since \(C_{1}=B^{\top}\). The proof for the system \(\mathbf{\hat{\Sigma}}_{H}\) follows similar reasoning. Based on the conclusions in Proposition 4 and Proposition 6, we show the following result. **Proposition 7**.: _Consider the original second-order system \(\mathbf{\Sigma}\), which is asymptotically stable and satisfies the passivity condition in (38). 
The second-order reduced system \(\mathbf{\hat{\Sigma}}_{G}\) with parameters_ \[F_{2} =\Pi^{*}M\Pi,F_{1}=\Pi^{*}D\Pi,F_{0}=\Pi^{*}K\Pi,\] \[G =\Pi^{*}B,H_{1}=B^{*}\Pi,\] _and the reduced system \(\mathbf{\hat{\Sigma}}_{H}\) with parameters_ \[F_{2} =\Upsilon M\Upsilon^{*},F_{1}=\Upsilon D\Upsilon^{*},F_{0}= \Upsilon K\Upsilon^{*},\] \[G =\Upsilon B,H_{1}=B^{*}\Upsilon^{*},\] _are asymptotically stable and passive._ ## IV Two-Sided Moment Matching This section presents a two-sided time-domain moment matching approach to obtain a unique reduced model with \(\nu\) dimensions that matches the moments of (1) at the interpolation points in the two distinct sets \(\sigma(S)\) and \(\sigma(Q)\) simultaneously. To this end, consider the signal generators \[\dot{\omega}=S\omega,\ \omega(0)\neq 0,\ \theta=L\omega, \tag{41}\] and \[\dot{\varpi}=Q\varpi+R\psi,\varpi(0)=0,\] \[d=\varpi+(Q\Upsilon M+\Upsilon D-RC_{1})x+\Upsilon M\dot{x}, \tag{42}\] where \(\omega,\varpi\in\mathbb{R}^{\nu}\). Let (41), (42) and the original second-order system \(\mathbf{\Sigma}\) in (1) be interconnected with \(u=\theta\) and \(\psi=y\). The interconnection of systems is illustrated in Fig. 1. Following [21, 30], we show that the moments of system \(\mathbf{\Sigma}\) at the interpolation points \(\sigma(S)\) and \(\sigma(Q)\) are characterized simultaneously by the steady-state response of signal \(d(t)\). **Proposition 8**.: _Consider the two-sided interconnection of \(\mathbf{\Sigma}\) with the signal generators (41) and (42), where \(u=\theta\) and \(\psi=y\). Then, on the manifold \(\mathcal{M}=\{(x,\omega)\in\mathbb{R}^{n+\nu}\mid x=\Pi\omega\}\), it holds that_ \[\dot{d}=Qd+\Upsilon BL\omega \tag{43}\] _where \(\Upsilon\) is the unique solution of equation (20)._ Proof.: From (42), we have \[\dot{d}=\dot{\varpi}+(Q\Upsilon M+\Upsilon D-RC_{1})\dot{x}+\Upsilon M\ddot{x}, \tag{44}\] where \(\dot{\varpi}=Q\varpi+R(C_{1}\dot{x}+C_{0}x)\). Moreover, on the manifold \(\mathcal{M}\), it holds that \(\dot{x}=\Pi\dot{\omega}=\Pi S\omega\), and \(\ddot{x}=\Pi S\dot{\omega}=\Pi S^{2}\omega\), which are substituted into (44) and lead to \[\dot{d}=Q\varpi+\left[RC_{0}\Pi+(Q\Upsilon M+\Upsilon D)\Pi S+\Upsilon M\Pi S ^{2}\right]\omega. \tag{45}\] Observe that (19) and (20) imply that \[RC_{0}\Pi+\Upsilon D\Pi S+\Upsilon M\Pi S^{2}\] \[= Q^{2}\Upsilon M\Pi+Q\Upsilon D\Pi+\Upsilon BL-QRC_{1}\Pi\] \[= Q(Q\Upsilon M+\Upsilon D-RC_{1})\Pi+\Upsilon BL.\] Consequently, (45) is further written as \[\dot{d} =Q\left[\varpi+(Q\Upsilon M+\Upsilon D-RC_{1})x+\Upsilon M\dot{x} \right]+\Upsilon BL\omega\] \[=Qd+\Upsilon BL\omega,\] which completes the proof. With the above result, we are ready to determine the second-order reduced model that matches the moments of \(\mathbf{\Sigma}\) at both \(\sigma(S)\) and \(\sigma(Q)\). Note that this model is within the families of second-order reduced models defined in (34) and (35) with particular choices of \(G\) and \(H\), respectively. **Theorem 2**.: _Consider a linear second-order system \(\mathbf{\Sigma}\) in (1) and let \(S,Q\in\mathbb{C}^{\nu\times\nu}\) be such that \((\sigma(S)\cup\sigma(Q))\cap\Omega=\emptyset\). Let \(L\in\mathbb{C}^{p\times\nu}\), \(R\in\mathbb{C}^{\nu\times q}\) be such that the pair \((L,S)\) is observable and the pair \((Q,R)\) is controllable. 
Suppose \(\Pi\in\mathbb{C}^{n\times\nu}\) and \(\Upsilon\in\mathbb{C}^{\nu\times n}\) are the unique solutions of (19) and (20), respectively, and \(\Upsilon\Pi\) is nonsingular, and denote_ \[\Pi^{\dagger}:=(\Upsilon\Pi)^{-1}\Upsilon,\text{ and }\Upsilon^{\dagger}:=\Pi( \Upsilon\Pi)^{-1} \tag{46}\] _as the left pseudo inverse of \(\Pi\) and the right pseudo inverse of \(\Upsilon\), respectively. Let \(\hat{\Omega}\) be the set defined in (29), which satisfies \(\hat{\Omega}\cap(\sigma(S)\cup\sigma(Q))=\emptyset\)._ 1. _The unique model_ \(\mathbf{\hat{\Sigma}}_{G}\) _in (_34_) that matches the moments of_ \(\mathbf{\Sigma}\) _at_ \(\sigma(S)\) _and_ \(\sigma(Q)\) _simultaneously is given by_ \[F_{2}=\Pi^{\dagger}M\Pi,\ F_{1}=\Pi^{\dagger}D\Pi,\ G=\Pi^{\dagger}B,\ H_{1}=C_{1}\Pi.\] (47) 2. _The unique model_ \(\mathbf{\hat{\Sigma}}_{H}\) _in (_35_) that matches the moments of_ \(\mathbf{\Sigma}\) _at_ \(\sigma(S)\) _and_ \(\sigma(Q)\) _simultaneously is given by_ \[F_{2}=\Upsilon M\Upsilon^{\dagger},F_{1}=\Upsilon D\Upsilon^{\dagger},H_{1}=C_{1} \Upsilon^{\dagger},H_{0}=C_{0}\Upsilon^{\dagger}.\] (48) 3. _The reduced models_ \(\mathbf{\hat{\Sigma}}_{G}\) _and_ \(\mathbf{\hat{\Sigma}}_{H}\) _are equivalent._ Fig. 1: Illustration of the two-sided interconnection of \(\mathbf{\Sigma}\) with the signal generators (41) and (42). Proof.: We start with the proof for \(\mathbf{\hat{\Sigma}}_{G}\). With the parameters in (47), we obtain \(F_{0}=\Pi^{\dagger}K\Pi\) and \(H_{0}=C_{0}\Pi\). Consider a system \(\mathbf{\hat{\Sigma}}_{G}\) in the form of (34), which is connected to the signal generator \[\dot{w}=Qw+R\hat{\psi},w(0)=0,\] \[\zeta=w+(QPF_{2}+PF_{1}-RH_{1})\xi+PF_{2}\dot{\xi},\] as a downstream system with \(\hat{\psi}=\eta\). Then, the system \(\mathbf{\hat{\Sigma}}_{G}\) matches the moments \(\Upsilon B\), with \(\Upsilon\) the unique solution of (20), at \(\sigma(Q)\) if and only if \[\dot{\zeta} =Q\zeta+\Upsilon Bu\] \[=Qw+(Q^{2}PF_{2}+QPF_{1}-QRH_{1})\xi+QPF_{2}\dot{\xi}+\Upsilon Bu. \tag{49}\] We refer to [28, 30] for similar reasoning in the case of first-order systems. Note that \[\dot{\zeta}= \dot{w}+(QPF_{2}+PF_{1}-RH_{1})\dot{\xi}+PF_{2}\ddot{\xi}\] \[= Qw+R\left[H_{1}\dot{\xi}+(C_{0}\Pi+C_{1}\Pi S-H_{1}S)\xi\right]\] \[-RH_{1}\dot{\xi}+QPF_{2}\dot{\xi}+PF_{1}\dot{\xi}+PF_{2}\ddot{\xi}\] \[= Qw+R(C_{0}\Pi+C_{1}\Pi S-H_{1}S)\xi\] \[+QPF_{2}\dot{\xi}+PGu-P(GL-F_{2}S^{2}-F_{1}S)\xi. \tag{50}\] Therefore, from (49) and (50), the system \(\mathbf{\hat{\Sigma}}_{G}\) matches the moments \(\Upsilon B\), if and only if the parameters \(F_{1}\), \(F_{2}\), \(G\) and \(H_{1}\) in \(\mathbf{\hat{\Sigma}}_{G}\) satisfy \[PG=\Upsilon B,\] and \[Q^{2}PF_{2}+QPF_{1}+P(GL-F_{2}S^{2}-F_{1}S)\] \[=R(C_{0}\Pi+C_{1}\Pi S-H_{1}S)+QRH_{1}. \tag{51}\] It is verified that \(P=\Upsilon\Pi\) is the unique solution of (51) due to \(\hat{\Omega}\cap\sigma(Q)=\emptyset\). Moreover, since \(\Upsilon\) and \(\Pi\) are unique, the parameter matrices of \(\mathbf{\hat{\Sigma}}_{G}\) in (47) are unique. The proof for \(\mathbf{\hat{\Sigma}}_{H}\) with parameters in (48) follows similar arguments. Besides, the equivalence of \(\mathbf{\hat{\Sigma}}_{G}\) and \(\mathbf{\hat{\Sigma}}_{H}\) follows from the nonsingularity of \(\Upsilon\Pi\), with which there exists a coordinate transformation between the two systems. 
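The two-sided construction of Theorem 2 is easy to reproduce numerically. The sketch below (Python with NumPy/SciPy is assumed; the randomly generated SISO system and the chosen interpolation points are purely illustrative) computes \(\Pi\) and \(\Upsilon\) through the first-order Sylvester equations (21)-(22), as suggested in Remark 1, forms the reduced matrices of (47), and verifies that the reduced transfer function matches \(W(s)\) at both \(\sigma(S)\) and \(\sigma(Q)\).

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, nu = 8, 2                                   # full and reduced orders (illustrative)

# Random stable SISO second-order system: M q'' + D q' + K q = B u, y = C0 q + C1 q'
M = np.eye(n)
D = rng.standard_normal((n, n)); D = D @ D.T + n * np.eye(n)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)
B = rng.standard_normal((n, 1))
C0 = rng.standard_normal((1, n)); C1 = np.zeros((1, n))

def W(s):
    return (C0 + s * C1) @ np.linalg.solve(M * s**2 + D * s + K, B)

# Interpolation data: (S, L) on the right, (Q, R) on the left
S = np.diag([0.5, 1.0]); L = np.ones((1, nu))
Q = np.diag([1.5, 2.0]); R = np.ones((nu, 1))

# First-order Sylvester equations (21)-(22); then recover Pi and Upsilon (Remark 1)
A1 = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, D)]])
B1 = np.vstack([np.zeros((n, 1)), np.linalg.solve(M, B)])
C01 = np.hstack([C0, C1])
Pi_t = solve_sylvester(A1, -S, -B1 @ L)        # solves A1 X + B1 L = X S
Ups_t = solve_sylvester(-Q, A1, -R @ C01)      # solves X A1 + R [C0 C1] = Q X
Pi = Pi_t[:n, :]                               # (25): first block of Pi_tilde
Ups = Ups_t[:, n:] @ np.linalg.inv(M)          # (27): second block equals Upsilon*M

# Reduced matrices as in (47), with F0 and H0 induced by the family (34)
Pdag = np.linalg.solve(Ups @ Pi, Ups)          # Pi^dagger = (Ups Pi)^{-1} Ups
F2, F1, F0 = Pdag @ M @ Pi, Pdag @ D @ Pi, Pdag @ K @ Pi
G, H1, H0 = Pdag @ B, C1 @ Pi, C0 @ Pi

def W_red(s):
    return (H0 + s * H1) @ np.linalg.solve(F2 * s**2 + F1 * s + F0, G)

for s in np.concatenate([np.diag(S), np.diag(Q)]):
    assert np.allclose(W(s), W_red(s))         # two-sided moment matching
print("moments matched at", np.diag(S), "and", np.diag(Q))
```

The same script can be reused for the one-sided families (34) and (35) by simply dropping either \(\Upsilon\) or \(\Pi\) from the projection.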
### _Moment Matching With Pole Placement_ In this section, we extend the arguments in [38, 39] to consider the pole placement problem in the reduced-order modeling of second-order systems, and we derive explicit reduced second-order models that simultaneously possess a set of prescribed poles and match the desired moments. Specifically, we consider \(\mathbf{\Sigma}\) in (1) and the family of approximations \(\mathbf{\hat{\Sigma}}_{G}\) as in (34) that matches the moments of \(\mathbf{\Sigma}\) at \(\sigma(S)\) with \(S\in\mathbb{C}^{\nu\times\nu}\). The objective is to find the parameter matrices \(F_{1}\), \(F_{2}\), \(G\), and \(H_{1}\) such that \(\mathbf{\hat{\Sigma}}_{G}\) has poles at the prescribed locations \(\lambda_{1},\lambda_{2},...,\lambda_{\kappa}\), where \(\kappa\leq\nu\), and \(\lambda_{i}\notin\sigma(S)\cup\Omega\) with \(\Omega\) defined in (7). Define \(Q_{\mathrm{p}}\in\mathbb{C}^{\kappa\times\kappa}\) such that \(\sigma(Q_{\mathrm{p}})=\{\lambda_{1},\lambda_{2},...,\lambda_{\kappa}\}\). Due to \(\sigma(Q_{\mathrm{p}})\cap\Omega=\emptyset\), the second-order Sylvester equation \[Q_{\mathrm{p}}^{2}\Upsilon_{\mathrm{p}}M+Q_{\mathrm{p}}\Upsilon_{\mathrm{p}}D +\Upsilon_{\mathrm{p}}K=R_{\mathrm{p}}C_{\mathrm{p}0}+Q_{\mathrm{p}}R_{\mathrm{ p}}C_{\mathrm{p}1} \tag{52}\] has the unique solution \(\Upsilon_{\mathrm{p}}\in\mathbb{C}^{\kappa\times n}\), where \(R_{\mathrm{p}}\in\mathbb{C}^{\kappa\times q}\) is any matrix such that the pair \((Q_{\mathrm{p}},R_{\mathrm{p}})\) is controllable, and \(C_{\mathrm{p}0},C_{\mathrm{p}1}\in\mathbb{C}^{q\times n}\) are such that \(C_{\mathrm{p}0}\Pi=C_{\mathrm{p}1}\Pi=0\), i.e. the rows of \(C_{\mathrm{p}0}\) and \(C_{\mathrm{p}1}\) lie in the left null space of \(\Pi\), with \(\Pi\) the unique solution of (19). Then, we impose linear constraints on the free parameters of the reduced model \(\mathbf{\hat{\Sigma}}_{G}\) such that the reduced model \(\mathbf{\hat{\Sigma}}_{G}\) has poles at \(\sigma(Q_{\mathrm{p}})\). **Theorem 3**.: _Consider \(\mathbf{\hat{\Sigma}}_{G}\) in (34) as a reduced model that matches the moments of the system (1) at \(\sigma(S)\). Let \(\Pi\) and \(\Upsilon_{\mathrm{p}}\in\mathbb{C}^{\kappa\times n}\) be the unique solutions of the second-order Sylvester equations in (19) and (52), respectively. Assume that \(\mathrm{rank}(\Upsilon_{\mathrm{p}}\Pi)=\kappa\). If the following constraints hold_ \[\Upsilon_{\mathrm{p}}\Pi F_{2} =\Upsilon_{\mathrm{p}}M\Pi, \tag{53a}\] \[\Upsilon_{\mathrm{p}}\Pi F_{1} =\Upsilon_{\mathrm{p}}D\Pi,\] (53b) \[\Upsilon_{\mathrm{p}}\Pi G =\Upsilon_{\mathrm{p}}B, \tag{53c}\] _then \(\sigma(Q_{\mathrm{p}})=\{\lambda_{1},\lambda_{2},...,\lambda_{\kappa}\}\subseteq\hat {\Omega}\) with \(\hat{\Omega}\) in (29) the set of poles of the reduced model \(\mathbf{\hat{\Sigma}}_{G}\)._ Proof.: Observe that the set \(\hat{\Omega}\) of poles of the reduced model \(\mathbf{\hat{\Sigma}}_{G}\) is characterized by the solutions of the following determinant equation \[|\rho(\lambda)|=|\lambda^{2}F_{2}+\lambda F_{1}+(GL-F_{2}S^{2}-F_{1}S)|=0.\] With the equations in (53), the matrix polynomial in the above determinant can be rewritten as \[\Upsilon_{\mathrm{p}}\Pi\rho(\lambda)= \Upsilon_{\mathrm{p}}\Pi\left[\lambda^{2}F_{2}+\lambda F_{1}+(GL-F_ {2}S^{2}-F_{1}S)\right]\] \[= \lambda^{2}\Upsilon_{\mathrm{p}}M\Pi+\lambda\Upsilon_{\mathrm{p}}D\Pi\] \[+\Upsilon_{\mathrm{p}}BL-\Upsilon_{\mathrm{p}}M\Pi S^{2}-\Upsilon_{ \mathrm{p}}D\Pi S. 
\tag{54}\] Moreover, it follows from the second-order Sylvester equation (19) that \[\Upsilon_{\mathrm{p}}BL-\Upsilon_{\mathrm{p}}M\Pi S^{2}-\Upsilon_{ \mathrm{p}}D\Pi S\] \[= \Upsilon_{\mathrm{p}}(M\Pi S^{2}+D\Pi S+K\Pi)-\Upsilon_{\mathrm{p}} M\Pi S^{2}-\Upsilon_{\mathrm{p}}D\Pi S\] \[= \Upsilon_{\mathrm{p}}K\Pi. \tag{55}\] Let (52) be post-multiplied by \(\Pi\), which yields \[\Upsilon_{\mathrm{p}}K\Pi=-Q_{\mathrm{p}}^{2}\Upsilon_{\mathrm{p}}M\Pi-Q_{ \mathrm{p}}\Upsilon_{\mathrm{p}}D\Pi, \tag{56}\] as \(C_{\mathrm{p}0}\) and \(C_{\mathrm{p}1}\) are chosen such that \(C_{\mathrm{p}0}\Pi=C_{\mathrm{p}1}\Pi=0\). Combining (54), (55), and (56), we obtain \[\Upsilon_{\mathrm{p}}\Pi\rho(\lambda)=\left(\lambda I-Q_{\mathrm{p}}\right)\left[ (\lambda I+Q_{\mathrm{p}})\Upsilon_{\mathrm{p}}M\Pi+\Upsilon_{\mathrm{p}}D \Pi\right]. \tag{57}\] Notice that \(\lambda_{i}\in\sigma(Q_{\mathrm{p}})\) if and only if there exists a nonzero left eigenvector \(v\in\mathbb{C}^{\kappa}\) such that \(v^{*}(\lambda_{i}I-Q_{\mathrm{p}})=0\). Then, we obtain from (57) that \[v^{*}\Upsilon_{\mathrm{p}}\Pi\rho(\lambda_{i})=v^{*}(\lambda_{i}I-Q_{\mathrm{p}}) \left[(\lambda_{i}I+Q_{\mathrm{p}})\Upsilon_{\mathrm{p}}M\Pi+\Upsilon_{\mathrm{p }}D\Pi\right]=0,\] i.e. \(\bar{v}^{*}\rho(\lambda_{i})=0\) with \(\bar{v}=(\Upsilon_{\mathrm{p}}\Pi)^{*}v\in\mathbb{C}^{\nu}\). Since \(\mathrm{rank}(\Upsilon_{\mathrm{p}}\Pi)=\kappa\) and \(v\neq 0\), we have \(\bar{v}\neq 0\), and thus \(\rho(\lambda_{i})\) is singular. Consequently, any \(\lambda_{i}\in\sigma(Q_{\mathrm{p}})\) leads to \(|\rho(\lambda_{i})|=0\), which means \(\lambda_{i}\in\hat{\Omega}\). ### _Moment Matching of First-Order Derivatives_ Next, we study reduced second-order systems that match the moments of both the zeroth and first order derivatives of the transfer function \[W(s)=C(Ms^{2}+Ds+K)^{-1}B, \tag{58}\] where \(C=C_{0}\) and \(C_{1}=0\) in the original system (1). Denote \[W_{L}(s): =-C(Ms^{2}+Ds+K)^{-1} \tag{59a}\] \[W_{R}(s): =(2Ms+D)(Ms^{2}+Ds+K)^{-1}B \tag{59b}\] Then, the first-order derivative of \(W(s)\) is \(W^{\prime}(s)=W_{L}(s)\cdot W_{R}(s),\) which has a state-space representation as \[\mathbf{\Sigma}^{\prime}:\ \left\{\begin{array}{ll}M\ddot{x}(t)+D\dot{x}(t)+Kx(t)&= Bu(t),\\ M\ddot{z}(t)+D\dot{z}(t)+Kz(t)&=2M\dot{x}(t)+Dx(t),\\ -Cz(t)&=y(t),\end{array}\right. \tag{60}\] with \(z(t)\in\mathbb{R}^{n}\) and \(y(t)\in\mathbb{R}^{q}\). Consider the following signal generator \[\dot{\varpi}=S\varpi+L^{*}\psi,\varpi(0)=0,\] \[d=\varpi+(S\Upsilon M+\Upsilon D)z+\Upsilon M\dot{z}, \tag{61}\] where \(\Upsilon\in\mathbb{R}^{\nu\times n}\) is the unique solution of the second-order Sylvester equation: \[S^{2}\Upsilon M+S\Upsilon D+\Upsilon K=-L^{*}C, \tag{62}\] since \(\sigma(S)\cap\Omega=\emptyset\) is assumed. We then connect the system \(\mathbf{\Sigma}^{\prime}\) with the signal generators (41) and (61), where \(u=\theta\) and \(\psi=y\), see Fig. 2. 
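As a quick sanity check of the cascade representation (60), the following minimal sketch (Python/NumPy is assumed; the randomly generated system data is illustrative only) compares the transfer function \(-C\mathscr{F}(s)(2Ms+D)\mathscr{F}(s)B\) realized by (60) with a central finite-difference approximation of \(W^{\prime}(s)\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = np.eye(n)
D = rng.standard_normal((n, n)); D = D @ D.T + n * np.eye(n)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

def F(s):                       # F(s) = (M s^2 + D s + K)^{-1}
    return np.linalg.inv(M * s**2 + D * s + K)

def W(s):                       # W(s) = C F(s) B
    return (C @ F(s) @ B).item()

def dW_cascade(s):              # transfer function of the cascade (60): W_L(s) W_R(s)
    return (-C @ F(s) @ (2 * M * s + D) @ F(s) @ B).item()

s0, h = 0.7, 1e-6
fd = (W(s0 + h) - W(s0 - h)) / (2 * h)   # central finite difference of W(s)
print(dW_cascade(s0), fd)                # the two values agree up to O(h^2)
```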
Fig. 2: Illustration of the interconnection of \(\mathbf{\Sigma}^{\prime}\) with the signal generators (41) and (61). The following result is obtained with the property of the signal \(d(t)\) in (61). **Theorem 4**.: _Consider the system \(\mathbf{\Sigma}^{\prime}\) in (60), which is connected to the signal generators (41) and (61) with \(u=\theta\) and \(\psi=y\). Let \(\Pi\) and \(\Upsilon\) be the unique solutions of (19) and (62), respectively, and assume that \(\Omega\subset\mathbb{C}^{-}\) and \(\sigma(S)\subset\mathbb{C}^{0}\). Then the moments of \(\mathbf{\Sigma}^{\prime}\) at \(\sigma(S)\) are in a one-to-one relation with the steady-state response of the signal \(d(t)\) in (61)._ Proof.: The scheme of proving this result follows similar arguments as in [28, 30], but the details require nontrivial modifications due to the second-order structure of the system. Observe that \[\dot{d} =\dot{\varpi}+(S\Upsilon M+\Upsilon D)\dot{z}+\Upsilon M\ddot{z}\] \[=S\varpi-L^{*}Cz+S\Upsilon M\dot{z}+\Upsilon(\tilde{u}-Kz),\] which, using \(\tilde{u}=\tilde{y}=2M\dot{x}+Dx\) and (62), leads to \[\dot{d}= S\varpi-L^{*}Cz+S\Upsilon M\dot{z}+2\Upsilon M\dot{x}+\Upsilon Dx\] \[-(-L^{*}C-S^{2}\Upsilon M-S\Upsilon D)z,\] \[= S\left[\varpi+(S\Upsilon M+\Upsilon D)z+\Upsilon M\dot{z}\right] +\Upsilon(2M\dot{x}+Dx)\] \[= Sd+\Upsilon\begin{bmatrix}D&2M\end{bmatrix}\begin{bmatrix}x(t)\\ \dot{x}(t)\end{bmatrix},\] Since \(M(\ddot{x}-\Pi\ddot{\omega})+D(\dot{x}-\Pi\dot{\omega})+K(x-\Pi\omega)= M\ddot{x}+D\dot{x}+Kx-(M\Pi S^{2}+D\Pi S+K\Pi)\omega=Bu-BL\omega=0\), we obtain \[\begin{bmatrix}x(t)\\ \dot{x}(t)\end{bmatrix}=\begin{bmatrix}\Pi\\ \Pi S\end{bmatrix}\omega+e^{\mathcal{A}t}\begin{bmatrix}x(0)-\Pi\omega(0)\\ \dot{x}(0)-\Pi S\omega(0)\end{bmatrix},\] where \(\mathcal{A}\) denotes the state matrix of the first-order representation in (21), which yields \[\dot{d}= Sd+(2\Upsilon M\Pi S+\Upsilon D\Pi)\omega+\Delta_{d}, \tag{63}\] with \(\Delta_{d}:=\Upsilon\begin{bmatrix}D&2M\end{bmatrix}e^{\mathcal{A}t}\begin{bmatrix} x(0)-\Pi\omega(0)\\ \dot{x}(0)-\Pi S\omega(0)\end{bmatrix}\). Denote \(\mathscr{D}(s)\) as the Laplace transform of \(d(t)\). Note that the term \(\Delta_{d}\) vanishes in the steady-state response, and thus \[\mathscr{D}(s)=(sI-S)^{-1}\Upsilon(2M\Pi S+D\Pi)\mathscr{W}(s),\] where \(\mathscr{W}(s)\) denotes the Laplace transform of \(\omega(t)\). Moreover, we obtain from (41) that \(\mathscr{W}(s):=(sI-S)^{-1}\omega(0)\), which leads to \[\mathscr{D}(s)=(sI-S)^{-1}\Upsilon(2Ms+D)\Pi(sI-S)^{-1}\omega(0)\] \[-2(sI-S)^{-1}\Upsilon M\Pi\omega(0). \tag{64}\] Denote \(\mathscr{F}(s)=(Ms^{2}+Ds+K)^{-1}\). Then, the following two equations hold. \[(sI-S)^{-1}\Upsilon\mathscr{F}^{-1}\] \[= (sI-S)^{-1}(\Upsilon Ms^{2}+\Upsilon Ds+\Upsilon K)\] \[= (sI-S)^{-1}(\Upsilon Ms^{2}+\Upsilon Ds-L^{*}C-S^{2}\Upsilon M-S \Upsilon D)\] \[= (sI+S)\Upsilon M+\Upsilon D-L^{*}C,\] and \[\mathscr{F}^{-1}\Pi(sI-S)^{-1}\] \[= (M\Pi s^{2}+D\Pi s+K\Pi)(sI-S)^{-1}\] \[= (M\Pi s^{2}+D\Pi s+BL-M\Pi S^{2}-D\Pi S)(sI-S)^{-1}\] \[= M\Pi(sI+S)+D\Pi+BL.\] Therefore, we can rewrite the first term of \(\mathscr{D}(s)\) as \[\mathscr{D}_{1}(s) =(sI-S)^{-1}\Upsilon\mathscr{F}^{-1}(s)\mathscr{F}(s)(2Ms+D)\] \[\qquad\qquad\cdot\mathscr{F}(s)\mathscr{F}^{-1}(s)\Pi(sI-S)^{-1} \omega(0),\] \[=[(sI+S)\Upsilon M+\Upsilon D-L^{*}C]\mathscr{F}(s)(2Ms+D)\] \[\qquad\qquad\cdot\mathscr{F}(s)\left[M\Pi(sI+S)+D\Pi+BL\right]\omega(0)\] \[=-L^{*}C\mathscr{F}(s)(2Ms+D)\mathscr{F}(s)BL+\cdots\] 
As a result, the steady-state response of \(d(t)\) contains terms of the form \(L^{*}\frac{W(s_{i})}{(s-s_{i})^{2}}L,\) with \(s_{i}\in\sigma(S)\), which proves the claim. Next, we present a second-order reduced model which matches the moments of \(W(s)\) and \(W^{\prime}(s)\) simultaneously at the interpolation points \(\sigma(S)\). Thereby, we suppose \(H_{1}=0\) and \(H_{0}=H\) in (34) and (35). **Theorem 5**.: _Consider a linear second-order system \(\mathbf{\Sigma}\) in (1) and let \(S\in\mathbb{C}^{\nu\times\nu}\), \(L\in\mathbb{C}^{p\times\nu}\) be such that the pair \((L,S)\) is observable, and \(\Pi\in\mathbb{C}^{n\times\nu}\) and \(\Upsilon\in\mathbb{C}^{\nu\times n}\) are the unique solutions of (19) and (62), respectively, such that \(\Upsilon\Pi\) is nonsingular. Then the following statements hold._ 1. _A model_ \(\mathbf{\hat{\Sigma}}_{G}\) _that matches the moments of_ \(W(s)\) _and_ \(W^{\prime}(s)\) _at_ \(\sigma(S)\) _is given by_ \[F_{2}=\Pi^{\dagger}M\Pi,F_{1}=\Pi^{\dagger}D\Pi,G=\Pi^{\dagger}B,\] (65) _with_ \(\Pi^{\dagger}:=(\Upsilon\Pi)^{-1}\Upsilon\)_._ 2. _A model_ \(\mathbf{\hat{\Sigma}}_{H}\) _that matches the moments of_ \(W(s)\) _and_ \(W^{\prime}(s)\) _at_ \(\sigma(S)\) _is given by_ \[F_{2}=\Upsilon M\Upsilon^{\dagger},F_{1}=\Upsilon D\Upsilon^{\dagger},H=C \Upsilon^{\dagger},\] _with_ \(\Upsilon^{\dagger}:=\Pi(\Upsilon\Pi)^{-1}\)_._ 3. _The reduced models_ \(\mathbf{\hat{\Sigma}}_{G}\) _and_ \(\mathbf{\hat{\Sigma}}_{H}\) _are equivalent._ Proof.: First, with the reduced matrices in (65), we obtain \(F_{0}=\Pi^{\dagger}K\Pi\) and \(H=C\Pi\). It is not hard to verify according to Proposition 3 that \(\mathbf{\hat{\Sigma}}_{G}\) matches the moments of \(W(s)\) at \(\sigma(S)\). Then, we prove that \(\mathbf{\hat{\Sigma}}_{G}\) also matches the moments of \(W^{\prime}(s)\), which means that \(W_{L}(s_{i})W_{R}(s_{i})=-H(F_{2}s_{i}^{2}+F_{1}s_{i}+F_{0})^{-1}(2F_{2}s_{i} +F_{1})(F_{2}s_{i}^{2}+F_{1}s_{i}+F_{0})^{-1}G\), for all \(s_{i}\in\sigma(S)\), with the transfer functions \(W_{L}(s)\), \(W_{R}(s)\) defined in (59). Observe that \(2F_{2}s_{i}+F_{1}=P\Upsilon(2Ms_{i}+D)\Pi\), where \(P:=(\Upsilon\Pi)^{-1}\). Therefore, the moment matching is achieved if \[C(Ms_{i}^{2}+Ds_{i}+K)^{-1}=C\Pi(F_{2}s_{i}^{2}+F_{1}s_{i}+F_{0})^{-1}P\Upsilon, \tag{66}\] and \[(Ms_{i}^{2}+Ds_{i}+K)^{-1}B=\Pi(F_{2}s_{i}^{2}+F_{1}s_{i}+F_{0})^{-1}P\Upsilon B. \tag{67}\] It follows from the second-order Sylvester equations (19) and (62) that \[S^{2}P^{-1}F_{2}+SP^{-1}F_{1}+P^{-1}F_{0}=-L^{*}C\Pi,\] \[F_{2}S^{2}+F_{1}S+F_{0}=P\Upsilon BL,\] with \(F_{2}\), \(F_{1}\), and \(F_{0}\) in (65). Thus, (66) and (67) are satisfied. Besides, we note that the systems \(\mathbf{\hat{\Sigma}}_{G}\) and \(\mathbf{\hat{\Sigma}}_{H}\) are equivalent, as there exists a coordinate transformation between the two systems due to the nonsingularity of \(\Upsilon\Pi\). ## V Second-Order Loewner Framework An overview of the Loewner framework is found in [40, 41, 42], which provides results connecting this rational interpolation tool with system theory. In this paper, we extend the Loewner framework from the first-order setting to the second-order one. Specifically, we consider \(C_{1}=0\) and \(C_{0}=C\). The preliminary results of this part can be found in [43]. In the tangential interpolation problem, we collect the samples of input/output frequency response data of a system directionally on the left and on the right. 
Specifically, the right and left tangential interpolation data are defined, respectively, as \[\{\alpha_{i},r_{i},w_{i}\mid\alpha_{i}\in\mathbb{C},r_{i}\in \mathbb{C}^{p},w_{i}\in\mathbb{C}^{q}\}_{i=1,2,\ldots,\nu} \tag{68a}\] \[\{\beta_{j},l_{j},v_{j}\mid\beta_{j}\in\mathbb{C},l_{j}\in \mathbb{C}^{q},v_{j}\in\mathbb{C}^{p}\}_{j=1,2,\ldots,\nu}, \tag{68b}\] where \(\alpha_{i}\) and \(\beta_{j}\) are the right and left driving frequencies, \(r_{i}\) and \(l_{j}\) are the right and left tangential directions, and \(w_{i}\) and \(v_{j}\) are the right and left responses. All the data in (68) can be rearranged compactly as \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\) with \[\Lambda_{\alpha}=\mathrm{diag}(\alpha_{1},\ldots,\alpha_{\nu}),\ \mathcal{R}=\begin{bmatrix}r_{1}&\cdots&r_{\nu}\end{bmatrix},\ \mathcal{W}=\begin{bmatrix}w_{1}&\cdots&w_{\nu}\end{bmatrix},\] \[\Lambda_{\beta}=\mathrm{diag}(\beta_{1},\ldots,\beta_{\nu}),\ \mathcal{L}=\begin{bmatrix}l_{1}^{\top}\\ \vdots\\ l_{\nu}^{\top}\end{bmatrix},\ \mathcal{V}=\begin{bmatrix}v_{1}^{\top}\\ \vdots\\ v_{\nu}^{\top}\end{bmatrix}. \tag{69}\] The problem is to find a realization in the second-order form as in (1) such that the associated transfer function \[W(s)=C(Ms^{2}+Ds+K)^{-1}B\in\mathbb{C}^{q\times p}\] satisfies the right and the left tangential constraints: \[W(\alpha_{i})r_{i}=w_{i},\forall\ i,\] \[l_{j}^{\top}W(\beta_{j})=v_{j}^{\top},\forall\ j.\] Similar to the Loewner framework for first-order systems [40, 41], we establish the Loewner matrix \(\mathbb{L}\) and the shifted Loewner matrix \(\mathbb{L}_{s}\) for second-order systems as \[\mathbb{L}=\begin{bmatrix}\frac{v_{1}^{\top}r_{1}-l_{1}^{\top}w_{1}}{ \beta_{1}-\alpha_{1}}&\ldots&\frac{v_{1}^{\top}r_{\nu}-l_{1}^{\top}w_{\nu}}{ \beta_{1}-\alpha_{\nu}}\\ \vdots&\ddots&\vdots\\ \frac{v_{\nu}^{\top}r_{1}-l_{\nu}^{\top}w_{1}}{\beta_{\nu}-\alpha_{1}}&\ldots&\frac{v _{\nu}^{\top}r_{\nu}-l_{\nu}^{\top}w_{\nu}}{\beta_{\nu}-\alpha_{\nu}}\end{bmatrix}, \tag{70}\] \[\mathbb{L}_{s}=\begin{bmatrix}\frac{\beta_{1}v_{1}^{\top}r_{1}-\alpha_{1}l_{1}^{ \top}w_{1}}{\beta_{1}-\alpha_{1}}&\ldots&\frac{\beta_{1}v_{1}^{\top}r_{\nu}-\alpha_{\nu}l_ {1}^{\top}w_{\nu}}{\beta_{1}-\alpha_{\nu}}\\ \vdots&\ddots&\vdots\\ \frac{\beta_{\nu}v_{\nu}^{\top}r_{1}-\alpha_{1}l_{\nu}^{\top}w_{1}}{\beta_{\nu}- \alpha_{1}}&\ldots&\frac{\beta_{\nu}v_{\nu}^{\top}r_{\nu}-\alpha_{\nu}l_{ \nu}^{\top}w_{\nu}}{\beta_{\nu}-\alpha_{\nu}}\end{bmatrix}. \tag{71}\] Furthermore, we also introduce the double-shifted Loewner matrix \(\mathbb{L}_{ss}\) as \[\mathbb{L}_{ss}=\begin{bmatrix}\frac{\beta_{1}^{2}v_{1}^{\top}r_{1}-\alpha_{1} ^{2}l_{1}^{\top}w_{1}}{\beta_{1}-\alpha_{1}}&\ldots&\frac{\beta_{1}^{2}v_{1}^{ \top}r_{\nu}-\alpha_{\nu}^{2}l_{1}^{\top}w_{\nu}}{\beta_{1}-\alpha_{\nu}}\\ \vdots&\ddots&\vdots\\ \frac{\beta_{\nu}^{2}v_{\nu}^{\top}r_{1}-\alpha_{1}^{2}l_{\nu}^{\top}w_{1}}{\beta_{ \nu}-\alpha_{1}}&\ldots&\frac{\beta_{\nu}^{2}v_{\nu}^{\top}r_{\nu}-\alpha_{\nu}^{2 }l_{\nu}^{\top}w_{\nu}}{\beta_{\nu}-\alpha_{\nu}}\end{bmatrix}. \tag{72}\] Denote the tangential versions of the generalized controllability and observability matrices as \[\mathcal{X} =\begin{bmatrix}(M\alpha_{1}^{2}+D\alpha_{1}+K)^{-1}Br_{1}&\cdots&(M\alpha_{\nu}^{2}+D\alpha_{\nu}+K)^{-1}Br_{\nu}\end{bmatrix},\] \[\mathcal{Y} =\begin{bmatrix}l_{1}^{\top}C(M\beta_{1}^{2}+D\beta_{1}+K)^{-1}\\ \vdots\\ l_{\nu}^{\top}C(M\beta_{\nu}^{2}+D\beta_{\nu}+K)^{-1}\end{bmatrix}. \tag{73}\] The following result then shows how the matrices \(\mathbb{L}\), \(\mathbb{L}_{s}\), and \(\mathbb{L}_{ss}\) are related with \(\mathcal{X}\) and \(\mathcal{Y}\). **Lemma 3**.: _Consider the generalized controllability and observability matrices \(\mathcal{X}\) and \(\mathcal{Y}\) in (73). 
Then, the Loewner matrix \(\mathbb{L}\) in (70), the shifted Loewner matrix \(\mathbb{L}_{s}\) in (71), and the double-shifted Loewner matrix \(\mathbb{L}_{ss}\) in (72) can be represented as_ \[\mathbb{L} =-\Lambda_{\beta}\mathcal{Y}M\mathcal{X}-\mathcal{Y}M\mathcal{X} \Lambda_{\alpha}-\mathcal{Y}D\mathcal{X}, \tag{74a}\] \[\mathbb{L}_{s} =-\Lambda_{\beta}\mathcal{Y}M\mathcal{X}\Lambda_{\alpha}+ \mathcal{Y}K\mathcal{X},\] (74b) \[\mathbb{L}_{ss} =\Lambda_{\beta}\mathcal{Y}D\mathcal{X}\Lambda_{\alpha}+\Lambda _{\beta}\mathcal{Y}K\mathcal{X}+\mathcal{Y}K\mathcal{X}\Lambda_{\alpha}. \tag{74c}\] Proof.: Note that \[v_{j}^{\top}r_{i}-l_{j}^{\top}w_{i}\] \[= l_{j}^{\top}C\left[(M\beta_{j}^{2}+D\beta_{j}+K)^{-1}-(M\alpha_ {i}^{2}+D\alpha_{i}+K)^{-1}\right]Br_{i}\] \[= l_{j}^{\top}C(M\beta_{j}^{2}+D\beta_{j}+K)^{-1}\left[M(\alpha_{i }^{2}-\beta_{j}^{2})+D(\alpha_{i}-\beta_{j})\right]\] \[(M\alpha_{i}^{2}+D\alpha_{i}+K)^{-1}Br_{i}\] \[= (\alpha_{i}-\beta_{j})l_{j}^{\top}C(M\beta_{j}^{2}+D\beta_{j}+K)^ {-1}\left[M(\alpha_{i}+\beta_{j})+D\right]\] \[(M\alpha_{i}^{2}+D\alpha_{i}+K)^{-1}Br_{i}\] \[= (\alpha_{i}-\beta_{j})\mathcal{Y}_{j}\left[M(\alpha_{i}+\beta_{j} )+D\right]\mathcal{X}_{i}\] Thus, the \((j,i)\)-element of \(\mathbb{L}\) is given by \[\left[\mathbb{L}\right]_{ji}=-\mathcal{Y}_{j}\left[M(\alpha_{i}+\beta_{j})+D \right]\mathcal{X}_{i},\] which leads to (74a). Analogously, we can find \[\beta_{j}v_{j}^{\top}r_{i}-\alpha_{i}l_{j}^{\top}w_{i} =(\alpha_{i}-\beta_{j})\mathcal{Y}_{j}\left[M\alpha_{i}\beta_{j}- K\right]\mathcal{X}_{i}\] \[\beta_{j}^{2}v_{j}^{\top}r_{i}-\alpha_{i}^{2}l_{j}^{\top}w_{i} =-(\alpha_{i}-\beta_{j})\mathcal{Y}_{j}\left[D\alpha_{i}\beta_{j}+ K(\alpha_{i}+\beta_{j})\right]\mathcal{X}_{i},\] yielding that \[\left[\mathbb{L}_{s}\right]_{ji} =-\mathcal{Y}_{j}\left[M\alpha_{i}\beta_{j}-K\right]\mathcal{X} _{i}, \tag{75}\] \[\left[\mathbb{L}_{ss}\right]_{ji} =\mathcal{Y}_{j}\left[D\alpha_{i}\beta_{j}+K(\alpha_{i}+\beta_{j} )\right]\mathcal{X}_{i}. \tag{76}\] Therefore, the equations (74b) and (74c) follow. In the sequel, the matrices \(\mathbb{L}\), \(\mathbb{L}_{s}\), and \(\mathbb{L}_{ss}\) are characterized as the solutions of Sylvester equations. **Lemma 4**.: _The Loewner matrix \(\mathbb{L}\) in (70) is the unique solution of the Sylvester equation_ \[\Lambda_{\beta}\mathbb{L}-\mathbb{L}\Lambda_{\alpha}=\mathcal{V}\mathcal{R}- \mathcal{L}\mathcal{W}, \tag{77}\] _the shifted Loewner matrix is the unique solution of the Sylvester equation_ \[\Lambda_{\beta}\mathbb{L}_{s}-\mathbb{L}_{s}\Lambda_{\alpha}=\Lambda_{\beta} \mathcal{V}\mathcal{R}-\mathcal{L}\mathcal{W}\Lambda_{\alpha}, \tag{78}\] _and furthermore, the double-shifted Loewner matrix \(\mathbb{L}_{ss}\) in (72) is the unique solution of the Sylvester equation_ \[\Lambda_{\beta}\mathbb{L}_{ss}-\mathbb{L}_{ss}\Lambda_{\alpha}=\Lambda_{\beta} ^{2}\mathcal{V}\mathcal{R}-\mathcal{L}\mathcal{W}\Lambda_{\alpha}^{2}. \tag{79}\] Proof.: Upon multiplication of (74a) with \(\Lambda_{\alpha}\) on the right we have \[\mathcal{Y}M\mathcal{X}\Lambda_{\alpha}^{2}+\mathcal{Y}D\mathcal{X}\Lambda_{ \alpha}=-\mathbb{L}\Lambda_{\alpha}-\Lambda_{\beta}\mathcal{Y}M\mathcal{X} \Lambda_{\alpha}, \tag{80}\] and further obtain from (74b) that \[\mathcal{Y}K\mathcal{X}=\mathbb{L}_{s}+\Lambda_{\beta}\mathcal{Y}M\mathcal{X} \Lambda_{\alpha}. 
\tag{81}\] Letting the second-order Sylvester equation of the form (19) satisfied by \(\mathcal{X}\) (with \(S=\Lambda_{\alpha}\) and \(L=\mathcal{R}\)) be multiplied by \(\mathcal{Y}\) on the left leads to \[\mathcal{Y}M\mathcal{X}\Lambda_{\alpha}^{2}+\mathcal{Y}D\mathcal{X }\Lambda_{\alpha}+\mathcal{Y}K\mathcal{X}=\mathcal{Y}B\mathcal{R},\] \[\Leftrightarrow -\mathbb{L}\Lambda_{\alpha}-\Lambda_{\beta}\mathcal{Y}M\mathcal{X} \Lambda_{\alpha}+\mathbb{L}_{s}+\Lambda_{\beta}\mathcal{Y}M\mathcal{X} \Lambda_{\alpha}=\mathcal{V}\mathcal{R}\] \[\Leftrightarrow -\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s}=\mathcal{V}\mathcal{R}, \tag{82}\] where the equations (80) and (81) are used. Similarly, multiplying the equation of the form (20) satisfied by \(\mathcal{Y}\) (with \(Q=\Lambda_{\beta}\) and \(R=\mathcal{L}\)) by \(\mathcal{X}\) on the right then yields \[-\Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s}=\mathcal{L}\mathcal{W}. \tag{83}\] The Sylvester equations in (77) and (78) then follow by adding/subtracting appropriate multiples of (82) and (83). We then proceed to prove the Sylvester equation for the double-shifted Loewner matrix \(\mathbb{L}_{ss}\). First, it is not hard to show the following relation from (74): \[\mathbb{L}_{ss}=-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta} \mathbb{L}_{s}+\mathbb{L}_{s}\Lambda_{\alpha}. \tag{84}\] Then, we have \[\Lambda_{\beta}\mathbb{L}_{ss}-\mathbb{L}_{ss}\Lambda_{\alpha}\] \[= -\Lambda_{\beta}^{2}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta}^{2} \mathbb{L}_{s}+\Lambda_{\beta}\mathbb{L}_{s}\Lambda_{\alpha}\] \[-(-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}^{2}+\Lambda_{\beta} \mathbb{L}_{s}\Lambda_{\alpha}+\mathbb{L}_{s}\Lambda_{\alpha}^{2})\] \[= \Lambda_{\beta}^{2}(-\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s})-(- \Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s})\Lambda_{\alpha}^{2}\] \[= \Lambda_{\beta}^{2}\mathcal{V}\mathcal{R}-\mathcal{L}\mathcal{W} \Lambda_{\alpha}^{2}.\] Therefore, (79) is proven. In the following, we will show how to use different pairwise combinations of matrices \(\mathbb{L}\), \(\mathbb{L}_{s}\), and \(\mathbb{L}_{ss}\) to construct parameterized families of interpolants possessing the second-order structure. **Theorem 6**.: _Let \(\mathbb{L}\in\mathbb{C}^{\nu\times\nu}\) and \(\mathbb{L}_{s}\in\mathbb{C}^{\nu\times\nu}\) be the Loewner matrix and shifted Loewner matrix, respectively, associated to the right and left tangential data \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\). Define a reduced-order model with second-order structure as_ \[\hat{M}\ddot{\xi}+(-\mathbb{L}-\Lambda_{\beta}\hat{M}-\hat{M}\Lambda_{ \alpha})\dot{\xi}+(\mathbb{L}_{s}+\Lambda_{\beta}\hat{M}\Lambda_{\alpha})\xi =\mathcal{V}u,\] \[\eta =\mathcal{W}\xi, \tag{85}\] _where \(\hat{M}\) is any square matrix such that the matrix pencil_ \[\hat{M}\lambda^{2}+(-\mathbb{L}-\Lambda_{\beta}\hat{M}-\hat{M}\Lambda_{\alpha}) \lambda+(\mathbb{L}_{s}+\Lambda_{\beta}\hat{M}\Lambda_{\alpha}) \tag{86}\] _is regular3 and has no eigenvalues belonging to \(\sigma(\Lambda_{\alpha})\cup\sigma(\Lambda_{\beta})\). Then, the model (85) interpolates the tangential data \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\), simultaneously._ Footnote 3: The pencil \(M\lambda^{2}+D\lambda+K\) is called regular if there is at least one value of \(\lambda\in\mathbb{C}\) such that \(\det(M\lambda^{2}+D\lambda+K)\neq 0\). Proof.: It is obtained from the tangential constraints on the data that \(C\mathcal{X}=\mathcal{W}\) and \(\mathcal{Y}B=\mathcal{V}\). 
Then, according to Proposition 3, the model (85), whose damping and stiffness matrices are \(\hat{D}:=-\mathbb{L}-\Lambda_{\beta}\hat{M}-\hat{M}\Lambda_{\alpha}\) and \(\hat{K}:=\mathbb{L}_{s}+\Lambda_{\beta}\hat{M}\Lambda_{\alpha}\), interpolates the right and the left tangential data if \[\hat{M}\Lambda_{\alpha}^{2}+\hat{D}\Lambda_{\alpha}+\hat{K}=\mathcal{V} \mathcal{R},\ \text{and}\ \Lambda_{\beta}^{2}\hat{M}+\Lambda_{\beta}\hat{D}+\hat{K}=\mathcal{L} \mathcal{W}.\] These equations are simplified as \[-\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s}=\mathcal{V}\mathcal{R},\ \text{and}\ - \Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s}=\mathcal{L}\mathcal{W}, \tag{88}\] respectively, which are proved in (82) and (83). Therefore, the model (85) with a free parameter \(\hat{M}\) interpolates both the left and right tangential data. Theorem 6 presents a parameterized family of interpolants (85) possessing the second-order structure with \(\hat{M}\) a free parameter. Any \(\hat{M}\) that fulfills the matrix pencil condition in (86) will lead to an interpolant of the left and right tangential data. Particularly, we may also choose \(\hat{M}=0\), then a first-order model is generated as \[-\mathbb{L}\dot{\xi}+\mathbb{L}_{s}\xi=\mathcal{V}u,\ \eta=\mathcal{W}\xi,\] which is consistent with the results for the first-order Loewner framework in [40, 41]. **Remark 2**.: _A parameterized family of interpolants based on the matrices \(\mathbb{L}\) and \(\mathbb{L}_{s}\) is also presented in [43] with \(J\) a free parameter. If \(J=-\hat{M}\Lambda_{\alpha}\), the family of interpolants in [43] can be obtained by a coordinate transformation of (85)._ Next, we show how to use the pairs \((\mathbb{L},\mathbb{L}_{ss})\) and \((\mathbb{L}_{s},\mathbb{L}_{ss})\) to construct an alternative parameterized family of interpolants that possess the second-order structure. Before proceeding, the following lemma is provided to reveal the relation between \(\mathbb{L}\), \(\mathbb{L}_{s}\), and \(\mathbb{L}_{ss}\). **Lemma 5**.: _The double-shifted Loewner matrix \(\mathbb{L}_{ss}\) in (72) and the Loewner matrix \(\mathbb{L}\) in (70) are related as_ \[-\mathbb{L}\Lambda_{\alpha}^{2}+\mathbb{L}_{ss}=\Lambda_{\beta} \mathcal{V}\mathcal{R}+\mathcal{V}\mathcal{R}\Lambda_{\alpha}, \tag{89a}\] \[-\Lambda_{\beta}^{2}\mathbb{L}+\mathbb{L}_{ss}=\Lambda_{\beta} \mathcal{L}\mathcal{W}+\mathcal{L}\mathcal{W}\Lambda_{\alpha}. \tag{89b}\] _Furthermore, \(\mathbb{L}_{ss}\) and the shifted Loewner matrix \(\mathbb{L}_{s}\) in (71) satisfy_ \[-\mathbb{L}_{s}\Lambda_{\alpha}+\mathbb{L}_{ss}=\Lambda_{\beta} \mathcal{V}\mathcal{R}, \tag{90a}\] \[-\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{ss}=\mathcal{L} \mathcal{W}\Lambda_{\alpha}. \tag{90b}\] Proof.: Using equations (82), (83), and (84), the four equations are obtained as follows. 
\[-\mathbb{L}\Lambda_{\alpha}^{2}+\mathbb{L}_{ss}=-\mathbb{L}\Lambda_{\alpha}^{2}-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{s}\Lambda_{\alpha}\] \[=(-\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s})\Lambda_{\alpha}+\Lambda_{\beta}(-\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s})\] \[=\Lambda_{\beta}\mathcal{V}\mathcal{R}+\mathcal{V}\mathcal{R}\Lambda_{\alpha},\] \[-\Lambda_{\beta}^{2}\mathbb{L}+\mathbb{L}_{ss}=-\Lambda_{\beta}^{2}\mathbb{L}-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{s}\Lambda_{\alpha}\] \[=(-\Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s})\Lambda_{\alpha}+\Lambda_{\beta}(-\Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s})\] \[=\Lambda_{\beta}\mathcal{L}\mathcal{W}+\mathcal{L}\mathcal{W}\Lambda_{\alpha},\] \[-\mathbb{L}_{s}\Lambda_{\alpha}+\mathbb{L}_{ss}=-\mathbb{L}_{s}\Lambda_{\alpha}-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{s}\Lambda_{\alpha}\] \[=\Lambda_{\beta}(-\mathbb{L}\Lambda_{\alpha}+\mathbb{L}_{s})=\Lambda_{\beta}\mathcal{V}\mathcal{R},\] \[-\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{ss}=-\Lambda_{\beta}\mathbb{L}_{s}-\Lambda_{\beta}\mathbb{L}\Lambda_{\alpha}+\Lambda_{\beta}\mathbb{L}_{s}+\mathbb{L}_{s}\Lambda_{\alpha}\] \[=(-\Lambda_{\beta}\mathbb{L}+\mathbb{L}_{s})\Lambda_{\alpha}=\mathcal{L}\mathcal{W}\Lambda_{\alpha}.\] That completes the proof. Using the shifted Loewner matrix and the double-shifted Loewner matrix, we can construct a parameterized family of interpolants with the second-order structure, with \(\hat{K}\) a free parameter. **Theorem 7**.: _Let \(\mathbb{L}_{s}\in\mathbb{C}^{\nu\times\nu}\) and \(\mathbb{L}_{ss}\in\mathbb{C}^{\nu\times\nu}\) be the shifted Loewner matrix and double-shifted Loewner matrix, respectively, associated to the right and left tangential data \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\). Suppose \(\Lambda_{\alpha}\) and \(\Lambda_{\beta}\) are nonsingular. Define a reduced-order model with the second-order structure as_ \[(-\mathbb{L}_{s}+\hat{K})\ddot{\xi}+(\mathbb{L}_{ss}-\Lambda_{\beta}\hat{K}-\hat{K}\Lambda_{\alpha})\dot{\xi}+\Lambda_{\beta}\hat{K}\Lambda_{\alpha}\xi=\Lambda_{\beta}\mathcal{V}u,\] \[\eta=\mathcal{W}\Lambda_{\alpha}\xi, \tag{91}\] _where \(\hat{K}\) is any square matrix such that the matrix pencil_ \[(-\mathbb{L}_{s}+\hat{K})\lambda^{2}+(\mathbb{L}_{ss}-\Lambda_{\beta}\hat{K}-\hat{K}\Lambda_{\alpha})\lambda+\Lambda_{\beta}\hat{K}\Lambda_{\alpha}\] _is regular and has no eigenvalues belonging to \(\sigma(\Lambda_{\alpha})\cup\sigma(\Lambda_{\beta})\). Then, the model (91) interpolates the tangential data \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\), simultaneously._ Proof.: If \(\Lambda_{\alpha}\) and \(\Lambda_{\beta}\) are non-singular, the transfer function of the system (91) is represented as \[\hat{G}(s)=\mathcal{W}\left(\mathbb{M}s^{2}+\mathbb{D}s+\hat{K}\right)^{-1}\mathcal{V},\] with \(\mathbb{M}:=-\Lambda_{\beta}^{-1}\mathbb{L}_{s}\Lambda_{\alpha}^{-1}+\Lambda_{\beta}^{-1}\hat{K}\Lambda_{\alpha}^{-1}\), \(\mathbb{D}:=\Lambda_{\beta}^{-1}\mathbb{L}_{ss}\Lambda_{\alpha}^{-1}-\Lambda_{\beta}^{-1}\hat{K}-\hat{K}\Lambda_{\alpha}^{-1}\). Then, we follow a similar reasoning as in the proof of Theorem 6. 
With \(C\mathcal{R}=\mathcal{W}\) and \(B\mathcal{L}=\mathcal{V}\), \(\hat{G}(s)\) interpolates the data \((\Lambda_{\alpha},\mathcal{R},\mathcal{W})\) and \((\Lambda_{\beta},\mathcal{L},\mathcal{V})\) if the following Sylvester equations are satisfied: \[\Lambda_{\alpha}^{2}\mathbb{M}+\Lambda_{\alpha}\mathbb{D}+\hat{K}=\mathcal{V}\mathcal{R},\ \mathbb{M}\Lambda_{\beta}^{2}+\mathbb{D}\Lambda_{\beta}+\hat{K}=\mathcal{L}\mathcal{W},\] which are simplified by substituting the expressions of \(\mathbb{M}\) and \(\mathbb{D}\) as \[-\Lambda_{\beta}^{-1}\mathbb{L}_{s}\Lambda_{\alpha}+\Lambda_{\beta}^{-1}\mathbb{L}_{ss}=\mathcal{V}\mathcal{R},\ -\Lambda_{\beta}\mathbb{L}_{s}\Lambda_{\alpha}^{-1}+\mathbb{L}_{ss}\Lambda_{\alpha}^{-1}=\mathcal{L}\mathcal{W}.\] The above equations hold due to Lemma 5. Theorem 7 also provides a parameterized family of second-order interpolants with \(\hat{K}\) a free parameter. As a special case, we may choose \(\hat{K}=0\); then the model (91) simplifies to \[-\mathbb{L}_{s}\ddot{\xi}+\mathbb{L}_{ss}\dot{\xi}=\Lambda_{\beta}\mathcal{V}u,\ \eta=\mathcal{W}\Lambda_{\alpha}\xi.\] **Remark 3**.: _It is worth emphasizing that the second-order Loewner frameworks presented in Theorem 6 and Theorem 7 can be applied to preserve the second-order structure under the Rayleigh damping hypothesis, i.e., when the damping matrix in (1) is constrained as_ \[D=\alpha M+\beta K,\] _where \(\alpha,\beta\geq 0\), see [44]. To retain this property in the interpolant (85), we impose \(-\mathbb{L}-\Lambda_{\beta}\hat{M}-\hat{M}\Lambda_{\alpha}=\alpha\hat{M}+\beta(\mathbb{L}_{s}+\hat{M}\Lambda_{\alpha}),\) i.e._ \[(\Lambda_{\beta}+\alpha I)\hat{M}+\hat{M}(\beta\Lambda_{\alpha}+I)=-\mathbb{L}+\beta\mathbb{L}_{s}.\] _This means that the value of \(\hat{M}\) is determined by the above Sylvester equation rather than being a free parameter to choose. Analogously, we can also preserve the Rayleigh damping hypothesis in the interpolant (91) by requiring \(\mathbb{L}_{ss}-\Lambda_{\beta}\hat{K}-\hat{K}\Lambda_{\alpha}=\alpha(-\mathbb{L}_{s}+\hat{K})+\beta\Lambda_{\beta}\hat{K}\Lambda_{\alpha}\), which leads to the Sylvester equation determining \(\hat{K}\):_ \[\Lambda_{\beta}\hat{K}+\hat{K}(\Lambda_{\beta}+\alpha I)(\beta\Lambda_{\alpha}+I)^{-1}\\ =(\mathbb{L}_{ss}-\alpha\mathbb{L}_{s})(\beta\Lambda_{\alpha}+I)^{-1}.\] ## VI Numerical Example In this section, we will show the effectiveness of the Loewner frameworks presented in the previous section through simulations. As in [43], we consider the mass-spring-damper system in Fig. 3, where the system consists of \(n=100\) masses. The coefficient matrices of the second-order system are then given as follows. \[M =\text{diag}\big{[}m_{1},\ \ldots,\ m_{n}\big{]},\] \[D =\begin{bmatrix}c_{1}&-c_{1}\\ -c_{1}&c_{1}+c_{2}&-c_{2}\\ &\ddots&\ddots&\ddots\\ &&\ddots&\ddots&-c_{n-1}\\ &&&-c_{n-1}&c_{n-1}+c_{n}\end{bmatrix},\] \[K =\begin{bmatrix}k_{1}&-k_{1}\\ -k_{1}&k_{1}+k_{2}&-k_{2}\\ &\ddots&\ddots&\ddots\\ &&\ddots&\ddots&-k_{n-1}\\ &&&-k_{n-1}&k_{n-1}+k_{n}\end{bmatrix},\] \[B^{\top} =C=\begin{bmatrix}1&0&\cdots&0\end{bmatrix}\] where \(m_{i}\), \(k_{i}\) and \(c_{i}\) are the masses, spring coefficients and damping coefficients, respectively, for \(i=1,...,n\). The external input \(u\) is the external force acting on the first mass \(m_{1}\), and we measure the displacement of the mass \(m_{1}\) as the output. For simulation, we set \(m_{i}=1\), \(c_{i}=0.1\), and \(k_{i}=1.5\). 
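To make the experimental setup concrete, the following is a minimal sketch (not part of the original paper) of how the coefficient matrices above and samples of the transfer function \(G(s)=C(Ms^{2}+Ds+K)^{-1}B\) could be assembled with NumPy; the function names and the particular sampling points are illustrative assumptions, and the Loewner matrices themselves are built from such samples as defined earlier in the paper.

```python
import numpy as np

def msd_matrices(n=100, m=1.0, c=0.1, k=1.5):
    """Mass, damping, and stiffness matrices of the n-mass chain above."""
    M = m * np.eye(n)
    D = np.zeros((n, n))
    K = np.zeros((n, n))
    for i in range(n - 1):
        # coupling between mass i and mass i+1
        D[i, i] += c; D[i + 1, i + 1] += c
        D[i, i + 1] -= c; D[i + 1, i] -= c
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k
    D[n - 1, n - 1] += c  # last mass is also tied to the wall
    K[n - 1, n - 1] += k
    return M, D, K

def transfer_function(M, D, K, B, C, s):
    """Evaluate G(s) = C (M s^2 + D s + K)^{-1} B at a complex point s."""
    return C @ np.linalg.solve(M * s**2 + D * s + K, B)

M, D, K = msd_matrices()
n = M.shape[0]
B = np.zeros((n, 1)); B[0, 0] = 1.0   # force acts on the first mass
C = B.T                               # measure displacement of the first mass
points = 1j * np.logspace(-2, 2, 6)   # sample points on the imaginary axis
samples = [transfer_function(M, D, K, B, C, s) for s in points]
```

Such frequency-response samples, split into left and right tangential data, are the only information the data-driven models of Theorems 6 and 7 require.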
To interpolate the dynamics of the second-order system, we choose 6 points on the imaginary axis, logarithmically spaced between \(10^{-2}\) and \(10^{2}\). To apply the Loewner framework in Theorem 6, we select the free parameter \(\hat{M}=\mathbb{L}\) in (85), and to implement Theorem 7, \(\hat{K}=\mathbb{L}_{ss}\) in (91). The results, in the form of Bode diagrams, are shown in Fig. 4. The reduced-order model of order 6 already closely matches the behavior of the full-order model, and when the order of the reduced model increases to 12, the approximation errors in both cases become significantly smaller. ## VII Conclusion A time-domain moment matching approach for second-order dynamical systems has been presented. The moments of a given second-order system are characterized by the unique solution of a Sylvester equation, and families of parameterized reduced second-order models have been provided to match selected moments. Furthermore, we have also addressed the approaches that determine the free parameters to achieve moment matching at two distinct sets of interpolation points and matching of the first-order derivative of the transfer function of the original second-order system. Finally, we have further addressed the Loewner framework for second-order systems, where two families of data-driven models are presented which not only interpolate the sets of tangential data but also retain the second-order structure of the original system.
2303.03915
The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.
Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite
2023-03-07T14:25:44Z
http://arxiv.org/abs/2303.03915v1
# The BigScience ROOTS Corpus: ###### Abstract As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (**ROOTS**) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (**BLOOM**)(BigScience Workshop, 2022) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus. ###### Contents * 1 Introduction * 1.1 Outline of the Paper * 1.2 Related Work * 2 (Crowd) Sourcing a Language Resource Catalogue * 2.1 Obtaining Data from the Identified Resources * 2.2 Processing Pipeline for Quality Improvement on Crowdsourced Datasets * 3 Processing OSCAR * 3.1 Data cleaning and filtering * 3.2 Deduplication * 3.3 Personally identifiable information * 4 A First look at ROOTS * 4.1 Natural Languages * 4.2 Programming Languages * 4.3 Tokenizer analysis of the component datasets * 5 Conclusion * A Ethical Considerations and Broader Impacts Statement * B Details on tools used to obtain crowdsourced dataset * B.1 Pseudocode to recreate the text structure from the HTML code * B.2 Visualization tool use cases * B.3 Exhaustive list of functions used in (Crowd)Sourced dataset * C Exhaustive list of human curated filters used on OSCAR * D PII filtering initiative * E Data Sources * F Author contributions ## 1 Introduction BigScience1 started in May 2021 as a one-year long open collaborative research initiative that gathered over a thousand participants around the world to study large language models (LLM). One of the founding goals of BigScience was to train an open-access, massively multilingual LLM, comparable in scale to GPT-3 (Brown et al., 2020) yet trained on a better documented and more representative multilingual dataset. The overall BigScience workshop was designed as a collaborative (Caselli et al., 2021; Bondi et al., 2021) and value-driven (Birhane et al., 2021) endeavor. Throughout the process of building this corpus we engaged in simultaneous investigation of ethical (Talat et al., 2022), sociopolitical (McMillan-Major et al., 2022), and data governance issues (Jernite et al., 2022) with the explicit goal of doing good for and by the people whose data we collected. 
Footnote 1: [https://bigscience.huggingface.co/](https://bigscience.huggingface.co/) Sourcing and building the dataset was organized around four working groups: **Data Governance**, which helped define the project's values and design our approach to data usage and release in an international context; **Data Sourcing and Preparation**, which was tasked with overseeing data collection and curation efforts; **Privacy**, which assessed privacy risks and sanitized the dataset; and **Legal Scholarship**, which helped define the multi-jurisdiction legal context in which the entire workshop was to operate. We discuss practical implications throughout the paper where appropriate. An overview of the BigScience Corpus is provided in figure 1. The goal of the current paper is twofold: (1) we present a preliminary gated release (subject to committing to the BigScience ethical charter2) of a large subset of ROOTS3, and (2) we release the numerous data tools4 that were developed along the way and enabled us to curate, source, clean and inspect all 498 constituent datasets that come together to constitute ROOTS. This includes preliminary results of the analyses that are currently being developed to study the corpus. Footnote 2: [https://hf.co/spaces/bigscience/ethical-charter](https://hf.co/spaces/bigscience/ethical-charter) Footnote 3: [https://hf.co/bigscience-data](https://hf.co/bigscience-data) Footnote 4: [https://github.com/bigscience-workshop/data-preparation](https://github.com/bigscience-workshop/data-preparation) ### Outline of the Paper The remainder of this paper details our approach to curating a web-scale dataset covering 59 languages -- 46 natural languages and 13 programming languages; the language choice was chiefly driven by the communities who participated in the effort, given the importance we placed on language expertise. Our final corpus is made up of two main components: 62% of the text comes from a community-selected and documented list of language data sources and its collection process is described in section 2, and 38% consists of text extracted from a pre-processed web crawl, OSCAR (Ortiz Suarez et al., 2020), filtered with the help of native speakers, which is described in section 3. Figure 1: Overview of ROOTS. Left: A treemap of natural language representation in number of bytes by language family. The bulk of the graph is overwhelmed by the 1321.89 GB allotted to Eurasia. The orange rectangle corresponds to the 18GB of Indonesian, the sole representative of the Papunesia macroarea, and the green rectangle to the 0.4GB of the Africa linguistic macroarea. Right: A waffle plot of the distribution of programming languages by number of files. One square corresponds approximately to 30,000 files. ### Related Work Large Language Models and Large Text Corpora. The current dominant paradigm in natural language processing relies heavily on pre-trained models: large language models that can then be fine-tuned on a downstream task Howard and Ruder (2018); Devlin et al. (2018) or even used as-is without additional data Radford et al. (2019); Brown et al. (2020). In this paradigm, performance is directly correlated with both the model size and the dataset size and quality Kaplan et al. (2020), with recent models trained on up to 1.4 trillion tokens Hoffmann et al. (2022) and dataset creation pipelines representing a significant part of large language model projects. Most such datasets, however, are not released, hindering further research. Exceptions include the Pile Gao et al. 
(2020), a curated corpus of datasets for language modeling that has become widely used for training state-of-the-art English-language models Lieber et al. (2021); Smith et al. (2022); Black et al. (2022); Zhang et al. (2022), and C4 and mC4 Raffel et al. (2020); Xue et al. (2020), which have powered the T5 family of models; CC100 Conneau et al. (2020) which has seen heavy use for multilingual modeling; and OSCAR Ortiz Suarez et al. (2019), which has enabled monolingual non-English models. Tooling, Visualization, and ReplicationUpstream from the finalized training datasets is the issue of processing methods and pipelines: both the operations that the datasets go through and the engineering effort required to apply them at terabyte scales. Existing work tends to fall on a spectrum from no details at all Brown et al. (2020) to detailed filtering instructions, with Raffel et al. (2020) or without the dataset release Rae et al. (2021) to detailed filtering instructions with the accompanying code Gao et al. (2020); Conneau et al. (2020); Ortiz Suarez et al. (2019). Even when the code is released, it tends to be built and tailored for the project's purpose. Consequently, large projects that do not re-use an existing dataset outright usually build their own pipeline rather than re-use an existing one on new data. However, data tools that were built and packaged in order to be used for other projects exist, such as OSCAR's Ungoliant and Goclassy Abadji et al. (2021); Ortiz Suarez et al. (2019), which provides a distributed Common Crawl processing pipeline; CCNet Wenzek et al. (2020), built for quality filtering of multilingual Common Crawl dumps; and OpenWebText Gokaslan and Cohen (2019), enabling Reddit dump processing. Documenting Textual Corpora in NLPAn inspiration for our work is a recent emphasis on a more in-depth documentation of what is included and what is not in the corpora used for training NLP models. The most notable example of this is the Pile, for which the authors themselves analyze and document a variety of syntactic and semantic properties of the dataset including structural statistics (n-gram counts, language, document sizes), topical distributions across its components, social bias and sentiment co-occurrence, pejorative content, and information about licensing and authorial consent, in addition to releasing a datasheet Biderman et al. (2022). Other LM pre-training datasets that have been documented and analyzed include C4 Dodge et al. (2021); Luccioni and Viviano (2021); Kreutzer et al. (2022), OSCAR Kreutzer et al. (2022) and BookCorpus Bandy and Vincent (2021). While this kind of documentation is far from standard practice, it is becoming increasingly common given recent calls for better documentation Rogers (2021); Bender et al. (2021) as well as empirical studies on data memorization in language models Carlini et al. (2019, 2022). ## 2 (Crowd) Sourcing a Language Resource Catalogue The first part of our corpus, accounting for 62% of the final dataset size (in bytes), was made up of a collection of monolingual and multilingual language resources that were selected and documented collaboratively through various efforts of the BigScience Data Sourcing working group. 
The first such effort consisted in creating a tool to support metadata collection through open submissions, called the BigScience Catalogue and running a series of hackathons in collaboration with locally-focused ML and NLP communities such as Masakhane, Machine Learning Tokyo and LatinX in AI where participants could add and document entries for their languages to the catalogue McMillan-Major et al. (2022). This yielded a set of 252 sources, including at least 21 per considered language category. We focused on metadata collection as a way to support selection of the sources for the final dataset and documentation of the final dataset. In parallel, working group participants gathered additional Arabic language resources in the Masader repository (Alyafeai et al., 2021), and proposed a list of websites of interest to increase the geographical diversity of our English, Spanish, and Chinese language data. Finally, in order to explicitly test large language models' ability to handle computer code along with natural language, we selected code data available on GitHub and StackExchange. ### Obtaining Data from the Identified Resources Gathering Identified Datasets and Collections.First, we leveraged the BigScience Catalogue and the Masader repository to start obtaining text from identified sources, which included both existing NLP datasets and collections of documents of various compositions. Given the diversity of sources, hosting methods, data custodians, and formats, collecting this text required a collaborative effort. To that end, we established a 2-phase approach: first, collect as many data sources as possible in an easily accessible location; second, map all of them to a common format to ease further processing. In the first phase, we organized an open hackathon to start gathering identified sources on the Hugging Face Datasets hub (Lhoest et al., 2021), in a dedicated organization5 (in order to manage access controls). In the second phase, the collected datasets were furthered processed via (1) _Language segmentation_, whereby data sources were split using metadata for each covered language in order to obtain monolingual datasets, and the use of (2) _Uniform interface_ whereby a document consisted of two fields: "text" for the actual text content, and "meta" with a JSON representation of metadata for a given document, containing sufficient information to trace documents back to their original sources. Footnote 5: [https://hf.co/bigscience-catalogue-data](https://hf.co/bigscience-catalogue-data) Pseudo-Crawled Data.Of the various categories of language resources identified through the data sourcing effort, websites stood out as one that required a particular effort and dedicated pipeline. We decided to design such a pipeline based on "pseudo-crawling": that is, rather than crawling the websites ourselves, we retrieved pages corresponding to the target domain names from 18 snapshots archived by Common Crawl in 2020 and 2021 in Web ARchive (WARC) format (Mohr et al., 2008). These domain names came from two main sources: the homepage field in the metadata of the 252 above-mentioned catalogue entries when available (192 in total), and the 456 websites proposed by participants asynchronously to improve the geographical diversity of our language sources; which yielded a total of 614 unique domain names after deduplication. We collected URLs contained within those domains using the Common Crawl index. 
The index provides metadata for every document including the page URL, WARC filename and record offsets, fetch status, content MIME type, etc. We ran a query matching all documents that share the domain name with a seed using Amazon Athena on Common Crawl's columnar index6. 48 of the 614 initial seed domain names had no matches in the index and were therefore left out. Once we obtained the document metadata, we fetched the WARC records using HTTP range requests with the start and end byte offsets. Since HTML web pages constitute the largest portion of pages contained in the Common Crawl dumps, we decided to only extract text from HTML pages. Documents in other formats were filtered out, ie XML, PDF, etc. 27 domain names were additionally removed from the list at this stage as we had not retrieved any HTML pages for them. Footnote 6: [https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/](https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/) To extract the text from the HTML pages, we first minified the HTML code. Minification is the removal of unnecessary characters from the source code of a website. Inspired by Aghajanyan et al. (2022), we removed from the DOM-HTML all the sub-trees contained in a <_script>_, <_style>_, <_header>_, <_iframe>_, <_footer>_ and <_form>_ tag as well as all the sub-trees associated with a <_body>_, <_div>_, <_p>_, <_section>_, <_table>_, <_ul>_, <_ol>_ or <_dl>_ tag whose textual content was less than 64 characters long. The text was then extracted from the nodes of this new DOM-HTML. While concatenating the text extracted, we applied a set of rules to reconstruct the structure of the text without its HTML code, inspired by what Common Crawl does to extract its WET files (Appendix B.1). The overall procedure enabled us to obtain text datasets for 539 domain names. GitHub Code.We collected a code dataset from BigQuery7 using the same language selection as AlphaCode (Li et al., 2022). The dataset was then deduplicated of exact matches and filtered for source files with between 100 and 200,000 characters, between 15-65% alphabetic characters, a max line length of 20-1000 characters, and a token length standard deviation of more than 3. Due to a bug in the pre-processing pipeline the dataset was also filtered for GPL licenses only. Merging and Deduplicating Sources.After gathering and processing language data via the three pipelines outlined above, we took a final step to manually inspect, deduplicate, and make a further selection of the sources. First, we addressed dataset overlap we found by looking through our sources. For example: _OpenIT_ was present in both its raw form as well as a processed version. Consensus was reached to choose the latter version. Non-trivial datasets overlap included _s2orc_(Lo et al., 2020), _Arxiv_(Clement et al., 2019) and the _PubMed Central_ subset of the Pile (Gao et al., 2020). We also performed cross-pipeline dataset deduplication, removing the pseudo-crawled Wikipedia and GitHub in favor of their other versions. We also removed datasets that we found had a high incidence of documents that were not fully in natural language (e.g. unexpected instances of SEO, HTML tags etc...), as well as very small datasets in the higher-resourced languages. 
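Looking back at the GitHub code pipeline described above, the character-count, alphabetic-fraction, line-length, and token-length-dispersion filters can be sketched as follows. This is our own illustrative code, not the released pipeline; the function name and whitespace tokenization are assumptions.

```python
import statistics

def keep_source_file(text: str) -> bool:
    """Return True if a source file passes the heuristics described above."""
    n_chars = len(text)
    if not (100 <= n_chars <= 200_000):
        return False
    alpha_frac = sum(ch.isalpha() for ch in text) / n_chars
    if not (0.15 <= alpha_frac <= 0.65):
        return False
    max_line_len = max((len(line) for line in text.splitlines()), default=0)
    if not (20 <= max_line_len <= 1000):
        return False
    tokens = text.split()  # assumption: whitespace tokenization
    if len(tokens) < 2:
        return False
    # token length standard deviation must exceed 3
    return statistics.pstdev([len(tok) for tok in tokens]) > 3
```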
Finally, pseudo-crawled sources were further processed to remove menus (with a heuristic consisting of removing lines that occurred in more than 1% of pages in a given domain) and pages that had a high incidence of character ngram repetition, low language identification confidence, or low proportion of closed class words (see Section 3). We then removed entire domains whose size was less than 2MB after this step, yielding 147 pseudo-crawl-based datasets, and a total of 517 datasets including all three pipelines. ### Processing Pipeline for Quality Improvement on Crowdsourced Datasets Once a text field was obtained, we attempted to improve the quality of that text. In the specific case of text extraction from HTML, we observe that not all text is relevant (menus, advertisements, repeated text on each page, etc.). In order to remove noisy data from our dataset, we applied a processing pipeline to each dataset consisting of a sequence of functions. Functions were categorised as _document-scoped_ or _dataset-scoped_ functions. _Document-scoped_ functions are operations that modify a document independently of other documents and _dataset-scoped_ functions are operations that take into account the whole dataset. Orthogonal to this scope, functions were also separated into _cleaning_ and _filtering_ functions. _Cleaning functions_ aim to remove text considered not part of the main document. Document-scoped cleaning functions can for example target leftover HTML tags. On the other hand, dataset-scoped cleaning functions need the whole dataset to calculate a heuristic to determine how to modify each document. For instance, advertisements vary across datasets, making it harder to define a dataset-agnostic classifier for advertisements. Instead, we can index all the lines in a dataset and identify lines repeated on multiple pages as likely advertisements. An example is displayed in Appendix B.2. _Filtering functions_ aim at removing an entire document from the corpus. The reasons for choosing to remove a document completely are diverse: it may be because the document is considered to be of too poor quality, too complex to automatically fix, or too similar to other examples already present in the corpus. In the latter case, we speak of deduplication. Deduplication of a document depends on whether an equivalent document already exists somewhere else in the dataset and is thus necessarily a dataset-scoped function. The notion of equivalent documents has been explored by Lee et al. (2022). In this case we provide deduplication via metadata (URLs, normalised URLs) and via text (exact string matching). An exhaustive list of functions is available in B.3. As datasets came from heterogeneous sources with different properties, each needed its own set of processing functions to correspond to our definition of natural language documents. In order to support participants in deciding which functions to apply to which dataset, we built and released a _streamlit_-based visualization tool (figure 2) that helps understand the impact of each function, displaying how a document was altered or removed as well as estimated dataset-level metrics (quantity of data removed in bytes or samples). This rapid feedback loop enabled us to iteratively update and fine-tune each processing pipeline across datasets and languages with the input of native speakers. A specific example is shared in Appendix B.2. This resulted in 485 non-empty datasets. 
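The distinction between document-scoped and dataset-scoped functions can be made concrete with a small sketch. This is our own illustrative code rather than the released tooling; all function names are assumptions, and documents are assumed to follow the "text"/"meta" interface described earlier. It chains one dataset-scoped cleaning pass (the repeated-line heuristic mentioned above), one document-scoped filter, and exact-text deduplication.

```python
from collections import Counter

def build_repeated_line_index(docs, threshold=0.01):
    """Dataset-scoped statistic: lines occurring in more than `threshold`
    of the documents are treated as boilerplate (menus, ads, ...)."""
    counts = Counter()
    for doc in docs:
        counts.update(set(doc["text"].splitlines()))
    min_docs = threshold * len(docs)
    return {line for line, c in counts.items() if c > min_docs}

def clean_repeated_lines(doc, repeated_lines):
    """Document-scoped cleaning: drop boilerplate lines from one document."""
    kept = [l for l in doc["text"].splitlines() if l not in repeated_lines]
    return {**doc, "text": "\n".join(kept)}

def filter_min_words(doc, min_words=10):
    """Document-scoped filter: discard documents that are too short."""
    return len(doc["text"].split()) >= min_words

def run_pipeline(docs):
    repeated = build_repeated_line_index(docs)        # dataset-scoped pass
    cleaned = [clean_repeated_lines(d, repeated) for d in docs]
    deduped, seen = [], set()
    for d in cleaned:                                 # dataset-scoped dedup (exact text)
        if d["text"] not in seen and filter_min_words(d):
            seen.add(d["text"])
            deduped.append(d)
    return deduped
```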
## 3 Processing OSCAR We chose to complement the data obtained at the end of the process described in the previous section with additional Common Crawl-based8 data motivated by two main reasons. First, given the project's overall goal of providing a trained LLM as a research artifact comparable to previously released ones that have relied extensively on this source, we assessed that not including it would constitute too much of a departure and risk invalidating comparisons. Relatedly, recent work has put a strong emphasis on the quantity of data being a strong factor in a trained model's performance on evaluation tasks (Kaplan et al., 2020; Hoffmann et al., 2022), and we were missing about one third of data in order to optimize our compute budget in this direction. With that in mind, we chose OSCAR version 21.09 (Ortiz Suarez et al., 2020), based on the Common Crawl snapshot of February 2021, to make up the remaining 38% of our final dataset. Footnote 8: [https://commoncrawl.org/](https://commoncrawl.org/) However, crawled data suffers from several known issues. First, we wanted to only select documents written by humans for humans, and exclude machine-generated content e.g. search engine optimization (SEO). Crawled content also over-represents pornographic text across languages (Kreutzer et al., 2022), especially in the form of spam ads. Finally, it contains personal information that may constitute a privacy risk. The present section outlines our approach to mitigating those issues. ### Data cleaning and filtering Our first approach to addressing the above consists in defining quality indicators for web content. These can then be used to filter out specific pages by defining cutoff thresholds. Extensive descriptions for reproduction are available in appendix C. We filtered out documents with: * Too high **character repetition** or **word repetition** as a measure of repetitive content. * Too high ratios of **special characters** to remove page code or crawling artifacts. * Insufficient ratios of **closed class words** to filter out SEO pages. * Too high ratios of **flagged words** to filter out pornographic spam. We asked contributors to tailor the word list in their language to this criterion (as opposed to generic terms related to sexuality) and to err on the side of high precision. * Too high **perplexity** values to filter out non-natural language. * Insufficient **number of words**, as LLM training requires extensive context sizes. The languages that we eventually considered in OSCAR were the languages for which we were able to obtain hyperparameters and the cutoff values for each of these indicators by native speakers. Specifically, we considered Arabic, Basque, Bengali, Catalan, Chinese, English, French, Hindi, Indonesian, Portuguese, Spanish, Urdu, and Vietnamese. The code used for filtering OSCAR, along with the language-specific parameters and cutoff values, are publicly available. We then asked native speakers of each language to use our visualization tool9 to establish the thresholds for each filter. The percentage of documents removed after applying all these filters is given in Table 1, and the percentage of documents discarded by each filter independently is given in 3. Figure 2: Partial screenshot of the visualization tool. Users can look at how each function in the processing pipeline influenced high-level statistics. 
Influence on specific samples can be monitored via the same tool, see Appendix B.2 ### Deduplication Data deduplication has become a key tool for language model projects following research showing that it both improves performance on downstream tasks (Lee et al., 2022; Zhang et al., 2021) and decreases memorization of training data (Kandpal et al., 2022). To remove near duplicate documents in OSCAR (which is already exact-deduplicated) we initially used SimHash (Charikar, 2002; Manku et al., 2007), a hashing function that associates to two similar texts hashes with a low Hamming distance, with 6-grams and a Hamming distance threshold of 4. About 0.7% of the documents on average (0.07% \(\sim\) 2.7%) were identified as near duplicates. However, because SimHash is essentially a bag-of-words algorithm, long documents are more likely to end up being similar to each other. In practice, we found false positives among long documents and decided not to discard documents in a same cluster of near-duplicates when they were longer than 6000 characters. Instead, we applied substring deduplication (Lee et al., 2022) based on Suffix Array (Manber and Myers, 1993) as a complementary method that clusters documents sharing a long substring, for documents with more than 6000 characters. We found on average 21.67% (10.61% \(\sim\) 32.30%) of the data (in bytes) being duplicated. ### Personally identifiable information We used a rule-based approach leveraging regular expressions (Appendix C). The elements redacted were instances of _KEY_ (numeric & alphanumeric identifiers such as phone numbers, credit card numbers, hexadecimal hashes and the like, while skipping instances of years and simple numbers), _EMAIL_ (email addresses), _USER_ (a social media handle) and _IP_ADDRESS_ (an IPv4 or IPv6 address). ## 4 A First look at ROOTS The efforts described in the previous sections come together in an assemblage of 1.6 Terabytes of multilingual text. Figure 4 puts that number into context by comparing the sizes of corpora typically used to train large language models. Documentation of the individual components of the corpus can be found in an interactive dataset card deck. In this section, we take initial steps towards further understanding of the corpus through statistical analyses of the aggregated data. ### Natural Languages The constitution of the corpus reflects the crowdsourcing efforts that enabled its creation. It comprises of 46 natural languages spanning 3 macroareas and 9 language families: Afro-Asiatic, Austro-Asiatic, Austronesian, Basque, Dravidian, Indo-European, Mande, Niger-Congo, Sino-Tibetan. At 30.03%, English constitutes the largest part of the corpus, followed by Simplified Chinese (16.16%), French (12.9%), Spanish (10.85%), Portuguese (4.91%) and Arabic (4.6%). A more detailed breakdown of the corpus can be found in the appendix and in an online interactive exploration tool10, \begin{table} \begin{tabular}{|c c c c c c c c c c c c|} \hline AR & EU & BN & CA & ZH & EN & FR & HI & ID & PT & UR & VI & ES \\ \hline \hline 20.3 & 5.2 & 48.8 & 21.1 & 23.1 & 17.2 & 17.0 & 25.7 & 10.4 & 12.6 & 15.8 & 21.3 & 16.9 \\ \hline \end{tabular} \end{table} Table 1: Percentage of documents removed by the filtering per language (ISO 639-1 code). Figure 3: Percentage of documents discarded by each filter independently for 5 languages a screenshot of which is included in figure 1 to depict the byte-distribution of linguistic genera of the Eurasian macroarea subset of the corpus. 
In order for the trained model to have an opportunity to learn long dependencies, the training corpus needs to contain long sequences of coherent text. At the same time, the previous post-processing steps only reduced the size of the documents. The median size of a document in our corpus is 1,129 bytes. Figure 5 shows the distribution of document sizes by language. A more detailed breakdown of the size of corpus on an online interactive tool.11 Footnote 11: [https://hfc.co/spaces/bigscience-data/document-sizes](https://hfc.co/spaces/bigscience-data/document-sizes) The distributions of the filter values for the different filters introduced in Section 3.1 and languages, for the Catalogue, Pseudo-Crawl and OSCAR (filtered) data are available in an online demo12. Examples for English are shown in figure 6. The different distributions reflect the diversity of sourcing and filtering of our main components. A notable example is the flagged word filter, for which the distribution for OSCAR is skewed right compared to the catalogue even after filtering. Footnote 12: [https://hfc.co/spaces/bigscience-catalogue-lm-data/filter_values_distributions](https://hfc.co/spaces/bigscience-catalogue-lm-data/filter_values_distributions) ### Programming Languages As depicted in the waffle plot in figure 1, the code subset of the corpus spans 13 programming languages, with Java, PHP, and C++ accounting for more than half of all documents. Figure 4: A raw size comparison to other corpora used to train large language models. The asterisk next to GPT-3 indicates the fact that the value in question is an estimate computed using the reported number of tokens and the average number of tokens per byte of text that the GPT-2 tokenizer produces on the Pile-CC, Books3, OWT2, and Wiki-en subsets of the Pile (Gao et al., 2020) Figure 5: Size in bytes of every document in the corpus per language. The y-axis is in logarithmic scale. Box-and-whisker diagrams illustrate median, the first and third quartiles, whiskers drawn within the 1.5 IQR value and outliers Configuration and test files are abundant in most GitHub repositories but not as interesting for code modeling. To that end, we use a heuristic whose first step examines the first 5 lines of a file for the presence of keywords such as "configuration file" or "test file". Failing that, the second step is to see whether the occurrence of the literals config and test in a given file exceeds 5% of the total number of lines of that file. We find that 5.23% of the data consists of configuration files and 7.88% of test files. Allamanis (2019) and Lopes et al. (2017) highlight the large fraction of near-duplicates present in code datasets and how they can inflate performance metrics. Exact match deduplication alone can miss a fair amount of near-duplicates. To detect them, we first compute the MinHash of all documents, then create a Locality Sensitive Hashing (LSH) index between files to find the duplicate clusters in linear time. We additionally evaluate the Jaccard similarities within duplicate clusters to remove some false positives. We find 10.9M duplicate files in the clusters and 4.1M unique files: almost 32% of the data consists of near-duplicates. Syntax checkers13 are used to validate 500K samples of Python and PHP code. We find that only 1% of the Python data and 2% of the PHP files do not pass the syntax check. 
Footnote 13: py_compile for Python and the \(\neg 1\) flag for PHP ### Tokenizer analysis of the component datasets A tokenizer trained on a dataset can be used as a proxy for its content (Gao et al., 2020). The relevant metric is the number of tokens produced for a byte of natural language. The more different the training corpus from the tokenized corpus, the more tokens will be produced as the tokenizer is forced to divide natural text in more numerous, more general, smaller tokens. This property has allowed us to spot errors associated with outlier values, such as incorrectly classified languages, or crawling error. In the following analysis, we use it in two ways: first, we can use tokenizers trained on different corpora to see how ours differs from them; and second, we can use a tokenizer trained on this corpus to assess which components are outliers. We exclude outliers smaller than 5 documents. Figure 6: Some distributions of filter values for English. A filter value is the value that the filter gives to a document. These values are generally used to filter out documents that are too low or too high rated and also inform about the composition of the datasets. Figure 7: Tokens per byte for each English-language component for tokenizers trained on this corpus (BLOOM), the Pile (GPT-NeOX 20B) and C4 (T5). Lower values mean the component (X axis) is more similar in aggregate to the compared training corpus. Figure 7 shows the tokens-per-byte measurement on English component datasets for the BLOOM tokenizer, trained on this corpus, the GPT-NeoX 20B tokenizer (Black et al., 2022), trained on the Pile, and the T5 tokenizer (Raffel et al., 2020), trained on C4. Those tokenizers may differ in algorithms and/or vocabulary size, but we won't be directly comparing them to each other. The figure is ordered by BLOOM tokenizer token-per-byte values, which shows that the ordering is very similar for BLOOM and GPT-NeoX. However, it shows several bumps for T5: component datasets that are out of domain in C4 but not our corpus, for example technical and academic datasets such as s2orc or royal_society_corpus, domains absent from C4's Common Crawl-sourced data. Other such datasets include global_voices, which contains news about non-English-speaking regions including quotes in the original languages and no_code_stackexchange, which contains forums which, although in English, may be dedicated to technical matters, foreign languages, or very specific domains. Both are similar to our corpus but not to the Pile or C4. Figure 8 additionally shows BLOOM fertilities for Simplified Chinese, French and Arabic components. Outlier, high-fertility components, e.g. datasets that differ from the rest of our corpus, tend to be the same for all languages. project_gutenberg contains old books with their original formatting (for example, "********" to denote page ends). wikivoyage of words in foreign languages. wikiviversity contains technical terms and LaTeX. wikivoyage contains tables formatted as text. Forums may contain the user and date information of the message, as well as internet slang or emoji. arabench is spoken Arabic, and habibi is classical Arabic with more diacritics than modern. We deem most of those deviations acceptable to represent the diversity of uses of text, which tokenizer analysis is able to surface from the rest of the dataset. 
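As a rough sketch of the tokens-per-byte statistic used in this analysis (our own illustrative code; the tokenizer checkpoint names are assumptions and the released analysis may differ), one could compute the metric for a component as follows.

```python
from transformers import AutoTokenizer

def tokens_per_byte(texts, tokenizer):
    """Average number of tokens produced per UTF-8 byte over a component."""
    n_tokens = sum(len(tokenizer.encode(t, add_special_tokens=False)) for t in texts)
    n_bytes = sum(len(t.encode("utf-8")) for t in texts)
    return n_tokens / n_bytes

# Assumed checkpoints; lower values mean the component is closer in aggregate
# to the corpus the tokenizer was trained on.
bloom_tok = AutoTokenizer.from_pretrained("bigscience/bloom")
t5_tok = AutoTokenizer.from_pretrained("t5-base")

component = ["an example document from one component", "another document"]
print(tokens_per_byte(component, bloom_tok), tokens_per_byte(component, t5_tok))
```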
## 5 Conclusion We have presented ROOTS, a massive multilingual corpus that was the result of an international collaboration between multidisciplinary researchers studying large language models. The efforts to put the corpus together were value-driven and prompted by a data-first approach to training the BLOOM model. We further release the tooling developed throughout the project, and are currently implementing a release strategy that is informed by both the licensing and governance needs of every data source for the corpus itself. We hope this paves the way toward a more reflected use of the data that makes its way into large language models. ## Ethical Considerations and Broader Impacts Statement As discussed in Section 1, the BigScience Research Workshop was conceived as a collaborative and value-driven endeavor from the start. This approach shaped many of the decisions described in this paper, spurring many contextual discussions and consensus-seeking on how to articulate the project's core values, those of the contributors to the data efforts, and considerations of social impact on the people directly and indirectly impacted. Of particular relevance were the data release and governance strategy, the choice to center human selection of data while still using OSCAR web-crawled for a significant section of the corpus, and the tools we developed to manage the risks of the latter (including regarding privacy). Each of these were the occasion of moral exercises and technical contributions that we believe were useful and required, and each will require further research and progress. We provide a more detailed discussion of these aspects of our work in Appendix A. Figure 8: Tokens per byte for each French, Simplified Chinese, and Arabic component for tokenizers trained on this corpus. Lower values mean the component (X axis) is more similar in aggregate to the rest of the corpus. ## Acknowledgements **BigScience.** This work was pursued as part of the BigScience research workshop, an effort to collaboratively build a very large multilingual neural network language model and a very large multilingual text dataset. This effort gathered 1000+ reasearchers from 60 countries and from more than 250 institutions. **Compute.** The BigScience Workshop was granted access to the HPC resources of the Institut du developpement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocation 2021-A0101012475 made by Grand equipement national de calcul intensif (GENCI). Model training ran on the Jean-Zay cluster of IDRIS, and we thank the IDRIS team for their responsive support throughout the project, in particular Remi Lacroix.
2307.00733
Asymptotic properties of maximum likelihood estimators for determinantal point processes
We obtain the almost sure strong consistency and a Berry-Esseen type bound for the maximum likelihood estimator Ln of the ensemble L for determinantal point processes (DPPs), strengthening and completing previous work initiated in Brunel, Moitra, Rigollet, and Urschel [BMRU17]. Numerical algorithms for estimating DPPs are developed and simulation studies are performed. Lastly, we give an explicit formula and a detailed discussion for the maximum likelihood estimator for a blocked determinantal matrix of two-by-two submatrices and compare it with the frequency method.
Yaozhong Hu, Haiyi Shi
2023-07-03T03:42:05Z
http://arxiv.org/abs/2307.00733v3
# Asymptotic properties of maximum likelihood estimators for determinantal point processes ###### Abstract We obtain the almost sure consistency and a Berry-Esseen type bound for the maximum likelihood estimator for determinantal point processes (DPPs), completing and extending previous work initiated in Brunel, Moitra, Rigollet, and Urschel [1]. We also give an explicit formula and a detailed discussion for the maximum likelihood estimator for a blocked determinantal matrix of two-by-two submatrices and compare it with the frequency method. ## 1 Introduction Determinantal point processes (DPPs) arise from random matrix theory [14] and were first introduced to give the probability distribution of a fermionic system in thermal equilibrium in quantum physics [10]. Since then, DPPs have been found in various areas of mathematics, including, for example, loop-free Markov chains [1] and edges of uniformly spanning trees [1]. In the seminal work [13], Kulesza and Taskar show that DPPs have unique characteristics compared to various other probabilistic models, in the sense that they capture the global repulsive behavior between items, give polynomial-time algorithms for statistical inference, and have geometrical intuition. Due to these advantages, DPPs have played very important roles in machine learning, especially in subset selection problems such as document summarization, image search, and pose determination [13]. These real-world applications necessitate the estimation of parameters of determinantal point process models. In this context, maximum likelihood estimation is a natural choice, which in our situation leads in general to a non-convex optimization problem. Along this direction, Kulesza and Taskar split the DPP model into a diversity part and a quality part and only learn the quality part while the diversity part is fixed. They conjecture that the problem of learning the likelihood of DPPs is NP-hard, which was proven by [14] a decade later. Brunel, Moitra, Rigollet, and Urschel [1] first studied the local geometry of the expected maximum likelihood estimation of DPPs, that is, the curvature of the likelihood function around its maximum. They then proved that the maximum likelihood estimator converges to the true values in probability and established the corresponding central limit theorem. Motivated by this work, our first result in this paper is to prove that the convergence of the maximum likelihood estimator to the true value also holds almost surely. Our second result is even more involved: we obtain a Berry-Esseen type theorem for the maximum likelihood estimator, that is, a quantitative rate in the central limit theorem. Lastly, we present some special cases where all the parameters can be estimated analytically. The paper is organized as follows. In Section 2 we introduce some basic definitions and properties of DPPs. In Section 3 we present our main results on the almost sure consistency and the Berry-Esseen type theorem. In Section 4, we discuss the explicit MLE for the two-by-two ensembles. Some concluding remarks are given in Section 5. ## 2 Preliminary We first explain the notation that we are going to use in this work. Fix a positive integer \(N\) and denote \([N]=\{1,2,...,N\}\). For \(J\subseteq[N]\), \(|J|=\#J\) denotes the number of elements in \(J\). For a matrix \(A\in\mathbb{R}^{N\times N}\) and \(J\subseteq[N]\), denote by \(A_{J}\) the restriction of \(A\) to \(J\times J\), which is a \(|J|\times|J|\) matrix.
Sometimes \(A_{J}\) also refers to an \(N\times N\) matrix whose restriction to \(J\) is \(A_{J}\) and has zeros everywhere else. Let \(\mathcal{S}_{[N]}\), \(\mathcal{S}_{[N]}^{+}\), \(\mathcal{S}_{[N]}^{++}\) and \(\mathcal{S}_{[N]}^{(0,1)}\) be the sets of all symmetric matrices, positive semi-definite matrices, (strictly) positive definite matrices, and symmetric matrices whose eigenvalues belong to interval \((0,1)\) respectively, on \(\mathbb{R}^{N\times N}\). Let \(A\) and \(B\) be matrices in \(\mathcal{S}_{[N]}\). We say that \(B\preceq A\) if \(A-B\) is positive semidefinite. Similarly, we say that \(B\prec A\) if \(A-B\) is positive definite. By contrast, we say that \(B\leq A\) if \(A_{i,j}-B_{i,j}\geq 0\) for all \(i\) and \(j\). For a matrix \(A\in\mathbb{R}^{N\times N}\), let \(\|A\|_{F}\), \(\det(A)\), and \(\operatorname{Tr}(A)\) denote its Frobenius norm (Hilbert-Schmidt norm), determinant and trace respectively. If \(A\) is vectorized as an \(N\times N\) column vector then the Frobenius norm of \(A\) is \(\mathcal{L}^{2}\) norm \(\|A\|_{2}\). For \(A\in\mathcal{S}_{[N]}\), \(k\geq 1\) and a smooth function \(f:\mathcal{S}_{[N]}\to\mathbb{R}\), we denote by \(\mathrm{d}^{k}f(A)\) the \(k\)-th derivative of \(f\) evaluated at \(A\in\mathcal{S}_{[N]}\). This is a \(k\)-linear map defined on \(\mathcal{S}_{[N]}\); for \(k=1\), \(\mathrm{d}f(A)\) is the gradient of \(f\), \(\mathrm{d}^{2}f(A)\) the Hessian, etc. A matrix \(A\in\mathcal{S}_{[N]}\) is called block diagonal if there exists a partition \(\{J_{1},J_{2},...,J_{k}\}\), \(k\geq 1\), such that \(A_{ij}=0\) when \(i\) and \(j\) belong to different \(J_{a}\) and \(J_{b}\). The largest \(k\) such that the partition exists is called the number of blocks of \(A\) and consequently \(J_{1},...,J_{k}\) are called blocks of \(A\). For a subset \(A\subseteq\mathcal{Y}\), let \(\bar{A}\) denote the complement of \(A\), that is, set \(\mathcal{Y}\backslash A\). Let us recall that a point process \(\mathcal{P}\) on a ground set \(\mathcal{Y}\) is a probability measure over the subsets of \(\mathcal{Y}\). Random subsets drawn from the point process \(\mathcal{P}\) can be any subset between null set and full set \(\mathcal{Y}\). In this paper, we focus on the discrete and finite point process, where the ground set, without loss of generality, is \(\mathcal{Y}=\{1,2,\cdots,N\}\). The set of all subsets of \(\mathcal{Y}\) is denoted by \(\mathbb{Y}\). **Definition 2.1**.: _A point process \(\mathbf{Y}\) is called a determinantal point process if \(\mathbf{Y}\) is a \(\mathbb{Y}\)-valued random variable such that for every fixed set \(A\subseteq\mathcal{Y}\),_ \[\mathbb{P}(A\subseteq\mathbf{Y})=\det(K_{A})\,, \tag{2.1}\] _where \(K_{A}\) is the restriction of an \(N\times N\) symmetric matrix \(K\) to the subset \(A\), that is, \(K_{A}:=[K_{i,j}]_{i,j\in A}\)._ If we think of each of item in the ground set \(\mathcal{Y}\) as the Boolean variable, the left side of (2.1) is the marginal probability in certain sense and hence \(K\) is called marginal kernel. (2.1) satisfies the following necessary conditions: * Since the marginal probability of empty set is the total probability space, \(\mathbb{P}(\Omega)=\mathbb{P}(\emptyset\subseteq\mathbf{Y})=1\). We set \(\det(K_{\emptyset})=1\). * Since \(\mathbb{P}\) is a probability measure, all principal minors of \(K\), i.e. \(\det(K_{A})\) must be nonnegative, and thus K itself must be positive semidefinite, that is, \(K\succeq 0\). 
* From \(\mathbb{P}(\emptyset=\mathbf{Y})+\mathbb{P}(\bigcup_{i=1}^{N}\{i\in\mathbf{Y} \})=1\) and using inclusion-exclusion principle we get \[\mathbb{P}(\bigcup_{i=1}^{N}\{i\in\mathbf{Y}\}) = \sum_{i\in[N]}\mathbb{P}(i\in\mathbf{Y})-\sum_{\{i,j\}\subset[N]} \mathbb{P}(\{i,j\}\subseteq\mathbf{Y})+\ldots\] (2.2) \[\ldots +(-1)^{N-1}\mathbb{P}([N]\subseteq\mathbf{Y})\] \[= \sum_{|A|=1}\det(K_{A})-\sum_{|A|=2}\det(K_{A})+\ldots\] \[\ldots +(-1)^{N-1}\det(K)\] \[= 1-\det(I-K)\,.\] The above last equality follows from the characteristic polynomial. Equation (2.2) also means \[\mathbb{P}(\emptyset=\mathbf{Y})=\det(I-K)\geq 0. \tag{2.3}\] Similarly, we are able to show that \(\mathbb{P}(\emptyset=\mathbf{Y}\cap A)=\det(I_{A}-K_{A})\geq 0\) for any subset \(A\subseteq[N]\) and hence \(K\preceq I\). So the necessary condition for a symmetric matrix to give a determinantal process is \(0\preceq K\preceq I\). In particular, all the diagonal elements of the marginal kernel \(K_{i,i}\) should be in the interval \([0,1]\). We can assume \(K_{i,i}\) is always greater than \(0\), otherwise the element \(i\) can be excluded from the model. This condition turns out to be sufficient: any \(0\preceq K\preceq I\) defines a DPP. To prove this, it's sufficient to show that for every \(A\subseteq[N]\), the atomic probability is well-defined, that is, \(0\leq\mathbb{P}(A=\mathbf{Y})\leq 1\). The probability being less or equal to \(1\) holds since \(K\preceq I\). For the other inequality, we assume \(K_{A}\) is invertible.1 Then using Schur complement and characteristic polynomial, we have Footnote 1: if \(K_{A}\) is not invertible, we immediately get \(\mathbb{P}(A=\mathbf{Y})=0\). \[\mathbb{P}(A=\mathbf{Y}) = \mathbb{P}(A\subseteq\mathbf{Y})-\mathbb{P}(\bigcup_{i\in\bar{A}} \{A\cup\{i\}\subseteq\mathbf{Y}\}) \tag{2.4}\] \[= \det(K_{A})-\sum_{i\in\bar{A}}\det(K_{A\cup\{i\}})+\sum_{\{i,j\} \subseteq\bar{A}}\det(K_{A\cup\{i,j\}})+\] \[\ldots +(-1)^{|\bar{A}|}\det(K)\] \[= \det(K_{A})-\sum_{i\in\bar{A}}\det(K_{A})\det(K_{ii}-K_{\{i\},A} K_{\bar{A}}^{-1}K_{A,\{i\}})\] \[+ \sum_{\{i,j\}\subset\bar{A}}\det(K_{A})\det(K_{\{i,j\}}-K_{\{i, j\},A}K_{A}^{-1}K_{A,\{i,j\}})+\] \[\ldots +(-1)^{|\bar{A}|}\det(K_{A})\det(K_{\bar{A}}-K_{\bar{A},A}K_{A}^ {-1}K_{A,\bar{A}})\] \[= (-1)^{|\bar{A}|}\det(K_{A})\det((K_{\bar{A}}-K_{\bar{A},A}K_{A}^ {-1}K_{A,\bar{A}})-I_{\bar{A}})\] \[= (-1)^{|\bar{A}|}\det(K-I_{\bar{A}}),\] where \(K_{A,B}\) denotes the matrix obtained from \(K\) by keeping only those entries whose rows belong to \(A\) and whose columns belong to \(B\) (if \(A=B\) we simply have \(K_{A}\).), \(|A|\) denotes the cardinality of subset \(A\), and \(\bar{A}\) the complement of set \(A\). Here we use a slight abuse of notation of \(I_{\bar{A}}\). We refer it to an N \(\times\) N matrix whose restriction to \(\bar{A}\) is \(I_{\bar{A}}\) and has zeros everywhere else. Since \(0\preceq K\preceq I\), \(\mathbb{P}(A=\mathbf{Y})=|\det(K-I_{\bar{A}})|\geq 0\). Sometimes it is quite inconvenient to work with marginal kernels since their eigenvalues should be bounded by \(0\) and \(1\), and the marginal probability is not very appropriate to describe real world data. Here we introduce a slightly smaller class of DPPs called L-ensembles. 
**Definition 2.2**.: _A point process is called an L-ensemble if it is defined through a real, symmetric matrix \(L\):_ \[\mathbb{P}_{L}(A=\mathbf{Y})\propto\det(L_{A}), \tag{2.5}\] _where \(A\subseteq\mathcal{Y}\) is a fixed subset._

By the normalization, the proportionality coefficient is equal to \[\frac{1}{\sum_{A\subseteq\mathcal{Y}}\det(L_{A})}. \tag{2.6}\] Though this seems very cumbersome, the following theorem gives us the closed form of (2.6).

**Theorem 2.3** ([16]).: _For any \(A\subseteq\mathcal{Y}\),_ \[\sum_{A\subseteq Y\subseteq\mathcal{Y}}\det(L_{Y})=\det(L+I_{\bar{A}}). \tag{2.7}\] _In particular, when \(A=\emptyset\), we have \(\sum_{A\subseteq\mathcal{Y}}\det(L_{A})=\det\left(L+I\right)\)._

Thus we have \[\mathbb{P}_{L}(A=\mathbf{Y})=\frac{\det(L_{A})}{\det(L+I)}\,. \tag{2.8}\] Moreover, the following theorem, proven in [14], shows that L-ensembles are indeed DPPs.

**Theorem 2.4**.: _An L-ensemble is a DPP, and its marginal kernel is_ \[K=L(L+I)^{-1}=I-(L+I)^{-1}. \tag{2.9}\]

However, not all DPPs are L-ensembles. By inverting (2.9), we have \[L=K(I-K)^{-1}. \tag{2.10}\] We see that the equality fails when the eigenvalues of \(K\) achieve the upper bound 1. Also, from (2.3) we observe that the existence of an L-ensemble is equivalent to the point process giving non-zero probability to the empty set. From Equation (2.1), if \(A=\{i\}\subseteq\mathcal{Y}\) is a singleton, then we have \[\mathbb{P}(i\in\mathbf{Y})=K_{ii}\,. \tag{2.11}\] So the diagonal of the marginal kernel gives the probabilities of inclusion of the individual elements. If \(A=\{i,j\}\subseteq\mathcal{Y}\), then the probability is given by the determinant of the two-by-two principal minor \(\begin{pmatrix}K_{ii}&K_{ij}\\ K_{ji}&K_{jj}\end{pmatrix}\): \[\mathbb{P}(\{i,j\}\subseteq\mathbf{Y}) = K_{ii}K_{jj}-K_{ij}^{2} \tag{2.12}\] \[\leq K_{ii}K_{jj}\] \[= \mathbb{P}(i\in Y)\mathbb{P}(j\in Y).\] Inequality (2.12) implies that elements \(i\) and \(j\) tend not to co-occur, especially when \(K_{ij}^{2}\) is close to \(K_{ii}K_{jj}\). This feature is called the repulsive behavior of determinantal point processes, and the off-diagonal elements characterize the degree of repulsion. Because of this major property, points tend to repel each other and hence induce point configurations that usually spread out evenly over the space. For example, let our ground set \(\mathcal{Y}\) be a 2-dimensional grid, the set \(\{(i,j)\in\mathbb{Z}^{2}:1\leq i,j\leq 60\}\); the kernel is then a 3600 by 3600 matrix. Let the matrix be a Gaussian kernel2, where each entry is given by \(L_{ij,kl}=\exp\{-\frac{1}{0.1^{2}}((i-k)^{2}+(j-l)^{2})\}\). Using the sampling algorithm proposed by Hough et al [1], we draw samples from the DPP. See Figures 1 and 2. Footnote 2: the Gaussian kernel defines an L-ensemble instead of a marginal kernel.

Figure 1: A sample from a DPP with the Gaussian kernel.

Figure 2: A sample drawn independently.

## 3 Maximum likelihood Estimator of DPPs

In the remaining part of this paper, we are only concerned with the estimation of the L-ensemble from the data. As we mentioned before, DPPs possess many nice properties, which make them very prevalent in mathematics. However, what makes DPPs more complicated is that they are not identifiable, that is, different ensembles could give the same DPP. Let DPP(\(L^{*}\)) denote the L-ensemble determined by the matrix \(L^{*}\). The identifiability problem is precisely described by Theorem 4.1 in [11]. Let \(\mathcal{D}\) be the collection of all diagonal matrices whose diagonal entries are either 1 or -1.
**Theorem 3.1** ([11]).: _For \(L_{1}\) and \(L_{2}\in\mathcal{S}_{[N]}^{++}\), \(\mathrm{DPP}(L_{1})=\mathrm{DPP}(L_{2})\) if and only if there exists a \(D\in\mathcal{D}\) such that \(L_{2}=DL_{1}D\)._

We are interested in how many possible ensembles a given DPP can have, so [1] defines the degree of identifiability of a DPP.

**Definition 3.2**.: _Let \(L\in\mathcal{S}_{[N]}^{++}\). The degree \(\mathrm{Deg}(L)\) of identifiability of \(L\) is the cardinality of the family \(\{DLD:D\in\mathcal{D}\}\). We say that \(L\) is irreducible if the cardinality is \(2^{N-1}\) and reducible otherwise. If \(\mathbf{Z}\sim\mathrm{DPP}(L)\), we also say that \(\mathbf{Z}\) is irreducible if \(L\) is irreducible and reducible otherwise._

The next proposition shows that the degree of identifiability turns out to be completely described by the block structure of the matrix.

**Proposition 3.3** ([1]).: _Let \(L\in\mathcal{S}_{[N]}^{++}\), \(Z\sim\mathrm{DPP}(L)\), and \(K\) be the corresponding marginal kernel. Let \(1\leq k\leq N\) and \(\{J_{1},J_{2},...,J_{k}\}\) be a partition of \([N]\). The following statements are equivalent:_

1. _\(L\) is block diagonal with \(k\) blocks \(J_{1},J_{2},...,J_{k}\),_
2. _\(K\) is block diagonal with \(k\) blocks \(J_{1},J_{2},...,J_{k}\),_
3. _\(Z\cap J_{1},...,Z\cap J_{k}\) are mutually independent irreducible DPPs,_
4. _\(L=D_{j}LD_{j}\) for all \(j\in[k]\), where \(D_{j}\in\mathcal{D}\) is the matrix whose diagonal elements are 1 on \(J_{j}\) and -1 otherwise._

From the above proposition we know that \(L\) has \(k\) blocks if and only if the degree of identifiability of \(L\) is \(2^{N-k}\). In particular, \(L\) is irreducible if and only if it has only one block. Let \(Z_{1},...,Z_{n}\) be \(n\) independent subsets drawn from DPP(\(L^{\star}\)) for some unknown \(L^{\star}\in\mathcal{S}_{[N]}^{++}\). The scaled log-likelihood associated with this model for any \(L\in\mathcal{S}_{[N]}^{++}\) is \[\hat{\Phi}(L)=\frac{1}{n}\sum_{i=1}^{n}\log P_{L}(Z_{i})=\sum_{J\subseteq[N]}\hat{p}(J)\log\det(L_{J})-\log\det(L+I), \tag{3.1}\] where \[\hat{p}(J)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(Z_{i}=J).\] \(\mathbb{I}(\cdot)\) stands for the indicator function. It is also useful to define the expected log-likelihood function given the true kernel \(L^{\star}\): \[\Phi_{L^{\star}}(L)=\sum_{J\subseteq[N]}p_{L^{\star}}(J)\log\det(L_{J})-\log\det(L+I)\,, \tag{3.2}\] where \[p_{L^{\star}}(J)=\mathrm{E}(\hat{p}(J))=\frac{\det\left(L_{J}^{\star}\right)}{\det\left(L^{\star}+I\right)}.\] Basically, we take the expectation of \(\hat{p}(J)\) with respect to the true probability measure DPP(\(L^{\star}\)) and then obtain the expected log-likelihood function. In the sequel let \(L^{\star}\) be fixed, let \(\hat{p}_{J}\) denote \(\hat{p}(J)\), \(p_{J}^{\star}\) denote \(p_{L^{\star}}(J)\) and \(\Phi\) denote \(\Phi_{L^{\star}}\). Let \(\mathrm{KL}\Big{(}\mathrm{DPP}(L^{\star}),\mathrm{DPP}(L)\Big{)}\) be the Kullback-Leibler divergence, which measures the difference between the distributions of DPP(\(L^{\star}\)) and of DPP(\(L\)). Since the Kullback-Leibler divergence is always non-negative, we have \[\mathrm{KL}\Big{(}\mathrm{DPP}(L^{\star}),\mathrm{DPP}(L)\Big{)}=\Phi(L^{\star})-\Phi(L)\geq 0,\,\forall L\in\mathcal{S}_{[N]}^{++}.\] As a consequence, \(L^{\star}\) is a global maximum of the expected log-likelihood function \(\Phi(L)\).
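To make the objective in (3.1) concrete, here is a minimal numerical sketch in Python (not part of the original paper); the small kernel and the list of observed subsets are purely illustrative assumptions.

```python
import numpy as np

def scaled_log_likelihood(L, samples):
    """Scaled log-likelihood (3.1) of an L-ensemble given observed subsets.

    L       : (N, N) symmetric positive definite array.
    samples : list of tuples, each a subset of {0, ..., N-1} (0-based labels).
    """
    N = L.shape[0]
    # log det(L + I) is the normalizing constant of the L-ensemble (Theorem 2.3).
    _, log_norm = np.linalg.slogdet(L + np.eye(N))
    total = 0.0
    for J in samples:
        J = list(J)
        if J:  # the empty principal minor has determinant 1, contributing log 1 = 0
            _, logdet_J = np.linalg.slogdet(L[np.ix_(J, J)])
            total += logdet_J
    return total / len(samples) - log_norm

# Illustrative 2 x 2 kernel and a handful of hypothetical observations.
L_example = np.array([[1.0, 0.5],
                      [0.5, 2.0]])
observations = [(), (0,), (1,), (0, 1), (1,)]
print(scaled_log_likelihood(L_example, observations))
```

Maximizing this quantity over \(\mathcal{S}_{[N]}^{++}\), for instance by gradient ascent using the gradient formula in Lemma 3.4 below, produces the estimator \(\hat{L}_{n}\) studied in the remainder of this section.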
Due to the non-identifiability of DPPs illustrated in Theorem 3.1, \(\Phi(L)\) achieves the maximum whenever \(L=DL^{\star}D\) for some \(D\in\mathcal{D}\), and hence the set of global maxima is \(\{DL^{\star}D:D\in\mathcal{D}\}\). Now we introduce a useful lemma.

**Lemma 3.4**.: _The gradient of the log-likelihood function \(\hat{\Phi}(L)\) defined in (3.1) is given by_ \[\mathrm{d}\hat{\Phi}(L)=\sum_{J\subseteq[N]}\hat{p}_{J}L_{J}^{-1}-(L+I)^{-1}. \tag{3.3}\]

Proof.: We regard the determinant as a multivariate function of \(N\times N\) variables, and then the directional derivative of \(\det(L+I)\) along the direction \(H\) is given by \[\mathrm{d}\det(L+I)(H)= \lim_{t\to 0}\frac{\det(L+I+tH)-\det(L+I)}{t}\] \[= \lim_{t\to 0}\det(L+I)\Big{[}\frac{\det(I+t(L+I)^{-1}H)-1}{t}\Big{]}\] \[= \lim_{t\to 0}\det(L+I)\Big{[}\frac{1+t\operatorname{Tr}((L+I)^{-1}H)+O(t^{2})-1}{t}\Big{]}\] \[= \det(L+I)\operatorname{Tr}((L+I)^{-1}H), \tag{3.4}\] where the third equality follows from the power series representation of \(\det(I+A)\). Then the directional derivative of \(\hat{\Phi}(L)\) along the direction \(H\) is \[\mathrm{d}\hat{\Phi}(L)(H)=\sum_{J\subseteq[N]}\hat{p}_{J}\operatorname{Tr}(L_{J}^{-1}H_{J})-\operatorname{Tr}((L+I)^{-1}H). \tag{3.5}\] In matrix form, the above equation becomes \[\mathrm{d}\hat{\Phi}(L)=\sum_{J\subseteq[N]}\hat{p}_{J}L_{J}^{-1}-(L+I)^{-1}. \tag{3.6}\]

### Strong consistency

One critical issue for maximum likelihood estimation is its consistency. Since determinantal point processes are not identifiable, we measure the performance of maximum likelihood estimation by the distance between the likelihood maximizer \(\hat{L}_{n}\) and the set of true values: \[\ell(\hat{L}_{n},L^{\star})=\min_{D\in\mathcal{D}}\|\hat{L}_{n}-DL^{\star}D\|_{F}.\] [1] proves that this distance converges to zero in probability. We shall prove a stronger version: the convergence also holds almost surely. The proof is based on [1, Theorem 14] and Wald's consistency theorem [20]. Even though the latter theorem originally requires the distribution to be identifiable, this is not a problem in our setting, where we consider the distance between \(\hat{L}_{n}\) and the set of true values instead of a single value. We first show that \(\ell(\hat{L}_{n},L^{\star})\) converges to zero almost surely when the parameter matrices are restricted to a compact set. For \(0<\alpha<\beta<1\), define the set \(E_{\alpha,\beta}\) \[E_{\alpha,\beta}=\Big{\{}L\in\mathcal{S}_{[N]}^{++}:K=L(I+L)^{-1}\in\mathcal{S}_{[N]}^{[\alpha,\beta]}\Big{\}},\] where \(\mathcal{S}_{[N]}^{[\alpha,\beta]}\) denotes the set of symmetric matrices whose eigenvalues belong to \([\alpha,\beta]\). Choose appropriate \(\alpha,\beta\) such that \(L^{\star}\in E_{\alpha,\beta}\). \(E_{\alpha,\beta}\) is compact since it is bounded and closed in \(\mathbb{R}^{N\times N}\).

**Lemma 3.5**.: _Let \(Z_{1},...,Z_{n}\) be \(n\) independent samples drawn from \(\text{DPP}(L^{\star})\) for some unknown \(L^{\star}\in E_{\alpha,\beta}\). Let \(\hat{L}_{n}\) be the maximizer of \(\hat{\Phi}(L)\) over \(E_{\alpha,\beta}\); then \(\ell(\hat{L}_{n},L^{\star})\) converges to zero almost surely._

Proof.: Let \[\Delta\hat{\Phi}(L)=\hat{\Phi}(L)-\hat{\Phi}(L^{\star})=\frac{1}{n}\sum_{i=1}^{n}\log\frac{P_{L}(Z_{i})}{P_{L^{\star}}(Z_{i})}\] and \[\Delta\Phi(L)=\Phi(L)-\Phi(L^{\star})=\mathrm{E}_{L^{\star}}\big{(}\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}\big{)}.\] Then \(-\Delta\Phi(L)\) is the Kullback-Leibler divergence between \(\text{DPP}(L^{\star})\) and \(\text{DPP}(L)\).
By Jensen's inequality, \(\Delta\Phi(L)\leq 0\) for all \(L\), and \(\Phi(L)=\Phi(L^{\star})\) if and only if \(P_{L}(Z)=P_{L^{\star}}(Z)\) for all \(Z\subseteq[N]\), which means \(L=DL^{\star}D\) for some \(D\in\mathcal{D}\). In the sequel let \(\mathrm{E}\) denote \(\mathrm{E}_{L^{\star}}\). For each \(L\in E_{\alpha,\beta}\), the strong law of large numbers implies \[\Delta\hat{\Phi}(L)\xrightarrow{a.s.}\Delta\Phi(L).\] However, the above convergence does not imply the convergence of the maximum likelihood estimator to the true values. Thus Wald's integrability condition is needed: for every \(L\in E_{\alpha,\beta}\), there exists \(\epsilon>0\) such that \[\mathrm{E}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}<\infty. \tag{3.7}\] Since \(L\mapsto\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}\) is continuous (the determinant function is continuous), for any \(\delta>0\) there exists \(\epsilon>0\) such that, when \(\ell(L,N)<\epsilon\), \[(1-\delta)\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}<\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}<(1+\delta)\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}.\] Then Wald's integrability condition is satisfied. Now, for every sequence \(\{L_{n}\}\) converging to \(L\), we show that \(\Delta\Phi(L_{n})\) is upper semicontinuous: \[\limsup_{n\to\infty}\Delta\Phi(L_{n}) = \limsup_{n\to\infty}\mathrm{E}\log\frac{P_{L_{n}}(Z)}{P_{L^{\star}}(Z)}\] \[\leq \mathrm{E}\limsup_{n\to\infty}\log\frac{P_{L_{n}}(Z)}{P_{L^{\star}}(Z)}\] \[= \mathrm{E}\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}\] \[= \Delta\Phi(L).\] The inequality follows from Fatou's lemma, and the last identity is a consequence of the continuity of the function \(L\mapsto\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}\). For every \(\eta>0\) we define the set \(K_{\eta}\) \[K_{\eta} =\Big{\{}L\in E_{\alpha,\beta}:\ell(L,L^{\star})\geq\eta\Big{\}}\] \[=\bigcap_{D\in\mathcal{D}}\Big{\{}L\in E_{\alpha,\beta}:\|L-DL^{\star}D\|_{F}\geq\eta\Big{\}}, \tag{3.8}\] which is closed and hence compact. Since \(\Delta\Phi(L)\) is an upper semicontinuous function, it achieves its maximum over the compact set \(K_{\eta}\). We denote the maximum by \(m(\eta)\). We cannot have \(m(\eta)=0\), because that would imply there is an \(L\in K_{\eta}\) such that \(L=DL^{\star}D\) for some \(D\in\mathcal{D}\). The strong law of large numbers implies \[\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\Delta\hat{\Phi}(N) \leq \frac{1}{n}\sum_{i=1}^{n}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\log\frac{P_{N}(Z_{i})}{P_{L^{\star}}(Z_{i})} \tag{3.9}\] \[\xrightarrow{a.s.} \mathrm{E}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}.\] By continuity, \[\lim_{\epsilon\to 0}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}=\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)},\] and \(\sup_{\ell(L,N)<\epsilon}\log\frac{P_{N}}{P_{L^{\star}}}\) decreases as \(\epsilon\) decreases, because a supremum over a smaller subset is smaller than that over a bigger subset. By (3.7) it is integrable for all small enough \(\epsilon\).
Hence by the dominated convergence theorem, \[\lim_{\epsilon\to 0}\mathrm{E}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon\end{subarray}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}=\mathrm{E}\log\frac{P_{L}(Z)}{P_{L^{\star}}(Z)}=\Delta\Phi(L).\] Thus for any \(L\in K_{\eta}\) and any \(\gamma>0\) there exists an \(\epsilon_{L}\) such that \[\mathrm{E}\sup_{\begin{subarray}{c}N\in E_{\alpha,\beta}\\ \ell(L,N)<\epsilon_{L}\end{subarray}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}<m(\eta)+\gamma. \tag{3.10}\] For each \(L\in K_{\eta}\), we define the open set \[V_{L}=\{N\in E_{\alpha,\beta}:\ell(N,L)<\epsilon_{L}\},\] and then the family \(\{V_{L}:L\in K_{\eta}\}\) is an open cover of \(K_{\eta}\) and hence has a finite subcover \(V_{L_{1}},V_{L_{2}},...,V_{L_{d}}\). On every \(V_{L_{i}}\) we use the strong law of large numbers again to obtain \[\limsup_{n\to\infty}\sup_{N\in V_{L_{i}}}\Delta\hat{\Phi}(N) \leq \limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\sup_{N\in V_{L_{i}}}\log\frac{P_{N}(Z_{i})}{P_{L^{\star}}(Z_{i})} \tag{3.11}\] \[= \mathrm{E}\sup_{N\in V_{L_{i}}}\log\frac{P_{N}(Z)}{P_{L^{\star}}(Z)}.\] From (3.10) we get \[\limsup_{n\to\infty}\sup_{N\in V_{L_{i}}}\Delta\hat{\Phi}(N)<m(\eta)+\gamma\qquad i=1,2,...,d.\] Since \(\{V_{L_{i}}:i=1,2...,d\}\) cover \(K_{\eta}\) we have \[\limsup_{n\to\infty}\sup_{N\in K_{\eta}}\Delta\hat{\Phi}(N)<m(\eta)+\gamma,\] which, since \(\gamma\) is arbitrary, implies \[\limsup_{n\to\infty}\sup_{L\in K_{\eta}}\Delta\hat{\Phi}(L)\leq\sup_{L\in K_{\eta}}\Delta\Phi(L)=m(\eta). \tag{3.12}\] Notice that \(m(\eta)<0\). From (3.12) there exists a constant \(N_{1}\) such that \[\sup_{L\in K_{\eta}}\Delta\hat{\Phi}(L)<\frac{m(\eta)}{2},\qquad n>N_{1}.\] But \[\Delta\hat{\Phi}(\hat{L}_{n})=\sup_{L\in E_{\alpha,\beta}}\Delta\hat{\Phi}(L)\geq\Delta\hat{\Phi}(L^{\star})\stackrel{{ a.s.}}{{\longrightarrow}}\Delta\Phi(L^{\star})=0,\] so there exists a constant \(N_{2}\) such that \[\Delta\hat{\Phi}(\hat{L}_{n})\geq\frac{m(\eta)}{2},\qquad n>N_{2},\] which implies that, for \(n>\max(N_{1},N_{2})\), \(\hat{L}_{n}\notin K_{\eta}\), that is, \(\ell(\hat{L}_{n},L^{\star})<\eta\). Now we can remove the compactness condition.

**Theorem 3.6**.: _Let \(Z_{1},...,Z_{n}\) be \(n\) independent sample subsets of \(Z\sim\text{DPP}(L^{\star})\). Let \(\hat{L}_{n}\) be the maximum likelihood estimator of \(L^{\star}\). Then \(\ell(\hat{L}_{n},L^{\star})\) converges to zero almost surely._

Proof.: The first step is to show that the event \(\{\hat{L}_{n}\in E_{\alpha,\beta}\}\) holds almost surely for all sufficiently large \(n\). We adopt the proof from [1]. Let \(\delta=\min_{J\subseteq[N]}P_{L^{\star}}(J)\). For simplicity, we denote \(P_{L^{\star}}(J)\) by \(p_{J}^{\star}\). Since \(L^{\star}\) is positive definite, \(\delta>0\). Define the event \(\mathcal{A}\) by \[\mathcal{A}=\bigcap_{J\subset[N]}\Big{\{}p_{J}^{\star}\leq 2\hat{p}_{J}\leq 3p_{J}^{\star}\Big{\}}.\] Observe that \(\Phi(L^{\star})<0\), and we can find \(\alpha<\exp(3\Phi(L^{\star})/\delta)\) and \(\beta>1-\exp(3\Phi(L^{\star})/\delta)\) such that \(0<\alpha<\beta<1\). Then using [1, Theorem 14] we know that on the event \(\mathcal{A}\), \(\hat{L}\in E_{\alpha,\beta}\), that is, \[P(\hat{L}\in E_{\alpha,\beta})\geq P(\mathcal{A}).\] Because \[\hat{p}_{J}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(Z_{i}=J)\stackrel{{ a.s.}}{{\longrightarrow}}P_{L^{\star}}(Z=J)=p_{J}^{\star}\,,\] the event \(\mathcal{A}\) holds almost surely when \(n\) is large enough, and hence so does \(\{\hat{L}_{n}\in E_{\alpha,\beta}\}\).
Let \(\mathbb{I}_{E_{n}}\) denote the indicator function of the event \(\{\hat{L}_{n}\in E_{\alpha,\beta}\}\). Then \[\mathbb{P}\big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0\big{)} = \mathbb{P}\big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0,\lim_{n\to\infty}\mathbb{I}_{E_{n}}=1\big{)}\] \[+\mathbb{P}\Big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0,\lim_{n\to\infty}\mathbb{I}_{E_{n}}\neq 1\big{)}\] \[= \mathbb{P}\big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0,\lim_{n\to\infty}\mathbb{I}_{E_{n}}=1\big{)}\] \[= \mathbb{P}\Big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0\Big{|}\lim_{n\to\infty}\mathbb{I}_{E_{n}}=1\Big{)}\mathbb{P}\big{(}\lim_{n\to\infty}\mathbb{I}_{E_{n}}=1\big{)}\] \[= \mathbb{P}\Big{(}\lim_{n\to\infty}\ell(\hat{L}_{n},L^{\star})=0\Big{|}\lim_{n\to\infty}\mathbb{I}_{E_{n}}=1\Big{)}\] \[= 1.\] The last equality follows from the fact that \(\hat{L}_{n}\in E_{\alpha,\beta}\) almost surely and from Lemma 3.5.

### Berry-Esseen theorem

We observe that an \(N\) by \(N\) matrix \([A_{ij}]_{N\times N}\) can also be viewed as an \(N\times N\) dimensional column vector: \((A_{11},A_{12},...,A_{1N},A_{21},...,A_{N1},...A_{NN})^{T}\). Then the Frobenius norm of the matrix is just the \(\mathcal{L}^{2}\) norm of its corresponding column vector. In the following we shall regard the matrix as the corresponding column vector. Because of the non-identifiability of DPPs, maximum likelihood estimators are not unique. We choose the estimator \(\tilde{L}\) which is closest to the fixed true value \(L^{\star}\). In fact, let \(\hat{L}\) be one maximum likelihood estimator. Let \(\hat{D}\in\mathcal{D}\) be such that \[\|\hat{D}\hat{L}\hat{D}-L^{\star}\|_{F}=\min_{D\in\mathcal{D}}\|D\hat{L}D-L^{\star}\|_{F} \tag{3.13}\] and set \(\tilde{L}=\hat{D}\hat{L}\hat{D}\). Then the strong consistency of \(\tilde{L}\) immediately follows from Theorem 3.6. Assume that \(L^{\star}\) is irreducible; then according to [1, Theorem 8], \(\mathrm{d}^{2}\Phi(L^{\star})\) is negative definite and hence invertible. Let \(V(L^{\star})\) denote its inverse. Here, if we vectorize \(L\), then \(\mathrm{d}^{2}\Phi(L^{\star})\) is an \((N\times N)\times(N\times N)\) Hessian matrix. By [13, Theorem 5.41], \[\sqrt{n}(\tilde{L}-L^{\star}) = -(\mathrm{E}(\mathrm{d}^{2}\log P_{L^{\star}}(Z)))^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mathrm{d}(\log P_{L^{\star}}(Z_{i}))+o_{P}(1) \tag{3.14}\] \[= -V(L^{\star})\frac{1}{\sqrt{n}}\sum_{i=1}^{n}((L^{\star}_{Z_{i}})^{-1}-(I+L^{\star})^{-1})+o_{P}(1).\] In particular, [13, Theorem 5.41] states that the sequence \(\sqrt{n}(\tilde{L}-L^{\star})\) is asymptotically normal with mean \(\mathbf{0}\) and covariance matrix \(-V(L^{\star})\). Hence we get the following theorem from [1].

**Theorem 3.7**.: _Let \(L^{\star}\) be irreducible. Then, \(\tilde{L}\) is asymptotically normal:_ \[\sqrt{n}(\tilde{L}-L^{\star})\xrightarrow[n\to\infty]{}\mathcal{N}(\mathbf{0},-V(L^{\star})), \tag{3.15}\] _where the above convergence holds in distribution._

Next, let us go one step further. We want to find the rate of convergence in (3.15). Namely, we want to find an upper bound on the rate of convergence of the distribution of \((-V(L^{\star}))^{-\frac{1}{2}}\sqrt{n}(\tilde{L}-L^{\star})\) to the standard multidimensional normal distribution \(Z\sim\mathcal{N}(\mathbf{0},I)\). We argue that when \(\tilde{L}\in E_{\alpha,\beta}\), the maximal error is bounded by a quantity of order \(n^{-\frac{1}{4}}\). This condition is not very restrictive.
Indeed, since \(\alpha\) and \(\beta\) can be arbitrarily close to \(0\) and \(1\) respectively, \(E_{\alpha,\beta}\) converges to \(\mathcal{S}_{[N]}^{++}\). What's more, since from Theorem 3.6, \(\hat{L}\in E_{\alpha,\beta}\) almost surely, \(\hat{D}\hat{L}\hat{D}=\tilde{L}\in E_{\alpha,\beta}\) almost surely. **Theorem 3.8**.: _Let \(\tilde{L}\) be as defined as above and also belong to \(E_{\alpha,\beta}\) and \(Z\) be an \(N\times N\) standard Gaussian matrix. Then for every \(x\in\mathbb{R}^{N\times N}\),_ \[|\mathbb{P}((-V(L^{\star}))^{-\frac{1}{2}}\sqrt{n}(\tilde{L}-L^{\star})<x)- \mathbb{P}(Z<x)|\leq C\frac{1}{\sqrt[4]{n}},\] _where C is a sufficiently large constant, which is irrelevant to \(x\), subject to \(\alpha,\beta\) and proportional to \(N^{2}\)._ Proof.: We divide the proof into four steps. **Step 1**. According to (3.14), \((-V(L^{\star}))^{-\frac{1}{2}}\sqrt{n}(\tilde{L}-L^{\star})\) can be decomposed into a sum \[X_{n}=\sum_{i=1}^{n}\xi_{i}:=(-V(L^{\star}))^{\frac{1}{2}}\frac{1}{\sqrt{n}} \sum_{i=1}^{n}((L^{\star}_{Z_{i}})^{-1}-(I+L^{\star})^{-1}) \tag{3.16}\] and a term \(\rho_{n}=(-V(L^{\star}))^{-\frac{1}{2}}o_{P}(1)\) whose Frobenius norm converges to zero in probability. \[|\mathbb{P}(X_{n}+\rho_{n}<x)-\mathbb{P}(Z<x)|\] \[=|\mathbb{P}(X_{n}+\rho_{n}<x,\|\rho_{n}\|_{F}\geq k_{n})+\mathbb{ P}(X_{n}+\rho_{n}<x,\|\rho_{n}\|_{F}<k_{n})-\mathbb{P}(Z<x)|\] \[\leq\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})+|\mathbb{P}(X_{n}+\rho _{n}<x,\|\rho_{n}\|_{F}<k_{n})-\mathbb{P}(Z<x)|\] \[\leq\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})\] \[\quad+|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x,\|\rho_{n}\|_{F}<k_{n} )-\mathbb{P}(Z<x)|\] \[\quad+|\mathbb{P}(X_{n}-k_{n}\mathbb{1}<x,\|\rho_{n}\|_{F}<k_{n} )-\mathbb{P}(Z<x)|\] \[=:I1+I2+I3\,, \tag{3.17}\] where \(\{k_{n}\}\) is an arbitrary sequence of positive real number and \(\mathbb{1}\) is the \(N\times N\) matrix whose entries are all \(1\). **Step 2**. The Estimation of (I1). We claim \(\mathbb{P}(\|\rho_{n}\|\geq k_{n})\leq\frac{C_{4}}{\sqrt[4]{n}}\), where \(k_{n}=n^{-\frac{1}{4}}\) and \(C_{4}\) is a constant. In fact, from the proof of [23, Theorem 5.41], \(\rho_{n}\) has the following expression \[\rho_{n}=\sqrt{n}(-V(L^{\star}))^{\frac{1}{2}}\Bigg{(} \,{\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star})-{\rm E}({\bf d}^{2}\,\hat{ \Phi}_{n}(L^{\star}))\] \[+\frac{1}{2}(\tilde{L}-L^{\star})^{T}\,{\bf d}^{3}\,\hat{\Phi}_{n} (L_{n})\Bigg{)}(\tilde{L}-L^{\star}), \tag{3.18}\] where \(L_{n}\) is a point on the line segment between \(\tilde{L}\) and \(L^{\star}\). To simplify notation, let \(\theta\) denote \[\Bigg{(}\,{\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star})-{\rm E}({\bf d}^{2}\,\hat{\Phi }_{n}(L^{\star}))+\frac{1}{2}(\tilde{L}-L^{\star})^{T}\,{\bf d}^{3}\,\hat{\Phi }_{n}(L_{n})\Bigg{)}(\tilde{L}-L^{\star}).\] Then \[{\rm E}\|\rho_{n}\|_{F} = {\rm E}\|\sqrt{n}(-V(L^{\star}))^{\frac{1}{2}}\theta\|_{F} \tag{3.19}\] \[= \sqrt{n}{\rm E}\|(-V(L^{\star}))^{\frac{1}{2}}\theta\|_{F}\] \[\leq \sqrt{n}{\rm E}\|(-V(L^{\star}))^{\frac{1}{2}}\|_{op}\|\theta\|_ {2}\] \[= \sqrt{n\cdot\Lambda_{max}(-V)}\cdot{\rm E}\|\theta\|_{2}.\] \(\|\cdot\|_{op}\) denotes the operator norm induced by \({\cal L}^{2}\) norm and \(\Lambda_{max}\) denotes the largest eigenvalue. For the first inequality, we regard \(\theta\) as an \(N\times N\) column vector and \((-V(L^{\star}))^{\frac{1}{2}}\) is an \((N\times N)\times(N\times N)\) matrix. 
\[{\rm E}\|\phi\|_{2}= {\rm E}\Big{\|}\Big{(}\,{\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star})-{ \rm E}({\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star}))+\frac{1}{2}(\tilde{L}-L^{\star} )^{T}\,{\bf d}^{3}\,\hat{\Phi}_{n}(L_{n})\Big{)}(\tilde{L}-L^{\star})\Big{\|} _{2}\] \[\leq {\rm E}\Big{\|}\Big{(}\,{\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star})-{ \rm E}({\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star}))(\tilde{L}-L^{\star})\Big{\|}_{2}\] (I1-1) \[+ {\rm E}\|\frac{1}{2}(\tilde{L}-L^{\star})^{T}\,{\bf d}^{3}\,\hat{ \Phi}_{n}(L_{n})\Big{)}(\tilde{L}-L^{\star})\Big{\|}_{2}\,.\] (I1-2) Using Cauchy-Schwartz inequality to estimate (I1-1) we see \[{\rm I1-1} \leq {\rm E}^{\frac{1}{2}}\Big{\|}\Big{(}\,{\bf d}^{2}\,\hat{\Phi}_{n }(L^{\star})-{\rm E}({\bf d}^{2}\,\hat{\Phi}_{n}(L^{\star}))\Big{\|}_{op}^{2}{ \rm E}^{\frac{1}{2}}\|\tilde{L}-L^{\star}\|_{2}^{2} \tag{3.20}\] \[\leq \frac{N^{2}}{\sqrt{n}}\max_{i,j}(L^{\star-1})_{ij}^{2}{\rm E}^{ \frac{1}{2}}\|\tilde{L}-L^{\star}\|_{2}^{2}.\] Let \(h(x)\) be a multivariate function: \[h:\quad{\mathbb{R}}^{N\times N} \longrightarrow \,{\mathbb{R}}\] \[(x_{1},x_{2},...,x_{NN}) \longmapsto x_{1}^{2}+x_{2}^{2}+\cdots+x_{NN}^{2}\] Then \(h\) is a continuous function. What's more almost surely \(\tilde{L}\in E_{\alpha,\beta}\), which is a compact and convex set. Using Theorem 3.7 and portmanteau lemma we have \[{\rm E}\Big{(}h(\sqrt{n}(\tilde{L}-L^{\star}))\Big{)}=n{\rm E}\|\tilde{L}-L^{ \star}\|_{F}^{2}\longrightarrow{\rm E}\|\tilde{Z}\|_{F}^{2}, \tag{3.21}\] where \(\tilde{Z}\sim\mathcal{N}(\mathbf{0},-V(L^{\star}))\). \(\mathrm{E}\|\tilde{Z}\|_{F}^{2}\) is equal to \(\mathrm{E}(\tilde{Z}_{11}^{2}+\cdots+\tilde{Z}_{1n}^{2}+\tilde{Z}_{21}^{2}+ \cdots+\tilde{Z}_{nn}^{2})=\mathrm{Tr}(-V(L^{\star}))\). Then there exists a constant \(C_{1}\) subject to \(\alpha,\beta\) such that \[\mathrm{E}^{\frac{1}{2}}\|\tilde{L}-L^{\star}\|_{2}^{2}\leq C_{1}\frac{1}{ \sqrt{n}}. \tag{3.22}\] As a result, \[(\text{\ref{eq:1}-1})\leq C_{2}N^{2}\frac{1}{n}\,, \tag{3.23}\] where \(C_{2}\) is a suitable constant. Next, we estimate the second part, that is (I1-2): \[\mathrm{E}\|\frac{1}{2}(\tilde{L}-L^{\star})^{T}\,\mathbf{d}^{3}\,\hat{\Phi}_{ n}(L_{n})\big{)}(\tilde{L}-L^{\star})\|_{2}.\] Here \(\mathbf{d}^{3}\,\hat{\Phi}_{n}(L_{n})\) is an \(N\times N\) dimensional column vector whose entries are \(N\times N\) matrices. Since \(\hat{\Phi}(L)\) is infinitely many time differentiable, \(L_{n}\) is on the line segment between \(\tilde{L}\) and \(L^{\star}\), and \(E_{\alpha,\beta}\) is a convex and compact set, we conclude that every entry of \(\mathbf{d}^{3}\,\hat{\Phi}_{n}(L_{n})\) is bounded. Hence there exists a constant \(C_{3}\geq 0\) subject to \(\alpha\) and \(\beta\) such that \[\mathrm{E}\|\frac{1}{2}(\tilde{L}-L^{\star})^{T}\,\mathbf{d}^{3} \,\hat{\Phi}_{n}(L_{n})\big{)}(\tilde{L}-L^{\star})\big{\|}_{2}\leq C_{3}\mathrm{E}\|\tilde{L}-L^{\star}\|_{2}^{2}\] \[\leq \frac{C_{1}^{2}C_{3}}{n}. \tag{3.24}\] Now let \(k_{n}=n^{-\frac{1}{4}}\). Using Chebyshev's inequality we get: \[\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})\leq\frac{\mathrm{E}\|\rho_{n}\|_{F}}{k _{n}}=\frac{C_{4}}{\sqrt[4]{n}} \tag{3.25}\] for a suitable constant \(C_{4}\). **Step 3**. Our next goal is to estimate (I2) as follows. Let \(k_{n}\) be \(\frac{1}{\sqrt[4]{n}}\). Then \[I2\leq\frac{C_{7}}{\sqrt[4]{n}}\] for some constant \(C_{7}\). 
Because \[\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(Z<x)\] \[\geq\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x,\|\rho_{n}\|_{F}<k_{n})-\mathbb{P}(Z<x)\] \[=\Big{(}\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x,\|\rho_{n}\|_{F}\geq k_{n})\Big{)}-\mathbb{P}(Z<x)\] \[\geq\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(\|\rho_{n}\|_{F}>k_{n})-\mathbb{P}(Z<x),\] we have \[I2 \leq|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(Z<x)|\] \[\quad+|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})-\mathbb{P}(Z<x)|\] \[\leq 2|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(Z<x)|+\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})\] \[=2|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-\mathbb{P}(Z+k_{n}\mathbb{1}<x)\] \[\quad+\mathbb{P}(Z+k_{n}\mathbb{1}<x)-\mathbb{P}(Z<x)|+\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n})\] \[\leq 2|\mathbb{P}(X_{n}+k_{n}\mathbb{1}<x)-P(Z+k_{n}\mathbb{1}<x)|\] (I2-1) \[\quad+2|\mathbb{P}(Z+k_{n}\mathbb{1}<x)-P(Z<x)|\] (I2-2) \[\quad+\mathbb{P}(\|\rho_{n}\|_{F}\geq k_{n}).\] (I2-3) By the multidimensional Berry-Esseen theorem in [1], \[\text{(I2-1)}\leq C_{5}\cdot\sqrt{N}\cdot n\cdot\text{E}\|\xi_{1}\|_{2}^{3}, \tag{3.26}\] where \(C_{5}\) is a constant and \(\xi_{1}\) is defined in (3.16): \[\text{E}\|\xi_{1}\|^{3} = \text{E}\|\frac{1}{\sqrt{n}}(-V(L^{\star}))^{-\frac{1}{2}}\Big{(}(L^{\star}_{Z_{i}})^{-1}-(I+L^{\star})^{-1}\Big{)}\|_{2}^{3} \tag{3.27}\] \[\leq (\frac{1}{\sqrt{n}})^{3}\text{E}\|(-V(L^{\star}))^{-\frac{1}{2}}\Big{(}(L^{\star}_{Z_{i}})^{-1}-(I+L^{\star})^{-1}\Big{)}\|_{2}^{3}.\] Since \(\text{E}\|(-V(L^{\star}))^{-\frac{1}{2}}\Big{(}(L^{\star}_{Z_{i}})^{-1}-(I+L^{\star})^{-1}\Big{)}\|_{2}^{3}\) is a constant, we get \[\text{(I2-1)}\leq C_{6}\sqrt{\frac{N}{n}}. \tag{3.28}\] For (I2-2), since \(Z\) can be viewed as a standard Gaussian random vector, we have \[\text{(I2-2)} = 2|\mathbb{P}(x-k_{n}\mathbb{1}<Z<x)| \tag{3.29}\] \[\leq 2\sum_{i,j=1}^{N}\mathbb{P}(x_{ij}-k_{n}\leq Z_{ij}\leq x_{ij})\] \[\leq \frac{2N^{2}}{\sqrt{2\pi}}k_{n}.\] Combining (3.28) and (3.29) with the previous bound, where we take \(k_{n}=n^{-\frac{1}{4}}\), we conclude that \[I2\leq\frac{C_{7}}{\sqrt[4]{n}},\] where \(C_{7}\) is a constant. **Step 4**. As for I3, we can use the same argument as above and conclude that I3 is bounded by \(C_{8}\cdot n^{-\frac{1}{4}}\) for some constant \(C_{8}\). This completes the proof of the theorem.

## 4 Two-by-two block kernel

In this section we show that if the kernel of a determinantal point process is a two-by-two symmetric positive semi-definite matrix, the maximum likelihood estimator can be computed analytically. This result can also be immediately extended to block matrices whose blocks are two-by-two. However, this method, effective for two-by-two matrices, is difficult to apply to higher-dimensional kernels. Let \(Z\sim\text{DPP}(L^{\star})\), where \(L^{\star}=\begin{pmatrix}a^{*}&b^{*}\\ b^{*}&c^{*}\end{pmatrix}\), and let the ground set be \(\mathcal{Y}=[2]\). For our purpose, we assume \[a^{*},c^{*}>0\] and \[a^{*}c^{*}-b^{*2}\geq 0.\] We can always assume \(b\) is non-negative since, by the identifiability of DPPs, \(\begin{pmatrix}a&b\\ b&c\end{pmatrix}\) and \(\begin{pmatrix}a&-b\\ -b&c\end{pmatrix}\) give the same DPP. For ease of notation, let \(\hat{p}_{0},\hat{p}_{1},\hat{p}_{2},\hat{p}_{3}\) denote the empirical probabilities of the subsets \(\emptyset\), \(\{1\}\), \(\{2\}\), \(\{1,2\}\) respectively, and let \(p_{0},p_{1},p_{2},p_{3}\) denote the corresponding theoretical probabilities.
The relationship between \((a,b,c)\) and \((p_{0},p_{1},p_{2},p_{3})\) is given by \[(a,b,c)=\Big{(}\frac{p_{1}}{p_{0}},\frac{\sqrt{p_{1}p_{2}-p_{0}p_{3}}}{p_{0}},\frac{p_{2}}{p_{0}}\Big{)},\] and \[p_{0}= \frac{1}{(a+1)(c+1)-b^{2}}\,,\qquad p_{1}=\frac{a}{(a+1)(c+1)-b^{2}}\,,\] \[p_{2}= \frac{c}{(a+1)(c+1)-b^{2}}\,,\qquad p_{3}=\frac{ac-b^{2}}{(a+1)(c+1)-b^{2}}\,.\] The likelihood function defined in (3.1) now becomes \[\hat{\Phi}(L) =\sum_{J\subseteq[2]}\hat{p}_{J}\log\det(L_{J})-\log\det(L+I)\] \[=\hat{p}_{1}\log a+\hat{p}_{2}\log c+\hat{p}_{3}\log(ac-b^{2})-\log[(a+1)(c+1)-b^{2}] \tag{4.1}\] To find the critical points of (4.1), we first set the partial derivative of \(\hat{\Phi}(L)\) with respect to \(b\) equal to zero and get \[\frac{\partial\hat{\Phi}(L)}{\partial b}= -\frac{2\hat{p}_{3}b}{ac-b^{2}}+\frac{2b}{(a+1)(c+1)-b^{2}}=0. \tag{4.2}\] Then either \(b=0\) or \[b^{2}=\frac{ac-(a+1)(c+1)\hat{p}_{3}}{1-\hat{p}_{3}}. \tag{4.3}\] If \(b=0\), then by setting the partial derivatives with respect to \(a\) and \(c\) to zero and noticing that \(\hat{p}_{0}+\hat{p}_{1}+\hat{p}_{2}+\hat{p}_{3}=1\), we get the first critical point \[(\hat{a},\hat{b},\hat{c})=\Bigg{(}\frac{\hat{p}_{1}+\hat{p}_{3}}{\hat{p}_{0}+\hat{p}_{2}},0,\frac{\hat{p}_{2}+\hat{p}_{3}}{\hat{p}_{0}+\hat{p}_{1}}\Bigg{)}. \tag{4.4}\] This critical point exists only if \(\hat{p}_{0}+\hat{p}_{2}\) and \(\hat{p}_{0}+\hat{p}_{1}\) are nonzero. Since each empirical probability converges to its corresponding theoretical probability almost surely and \(p_{0}>0\), the strong law of large numbers implies that this critical point exists almost surely when \(n\) is sufficiently large. If \(b\neq 0\), then we can use (4.3) to estimate \(\hat{b}\) once \(\hat{a},\hat{c}\) are obtained: \[\hat{b}=\sqrt{\frac{\hat{a}\hat{c}-(\hat{a}+1)(\hat{c}+1)\hat{p}_{3}}{1-\hat{p}_{3}}}. \tag{4.5}\] To find the maximum likelihood estimators \(\hat{a}\) and \(\hat{c}\) of \(a\) and \(c\), we plug (4.3) into \(\hat{\Phi}(L)\) to obtain \[\hat{\Phi}(L)=\hat{p}_{1}\log a+\hat{p}_{2}\log c+(\hat{p}_{3}-1)\log(a+c+1)-(\hat{p}_{3}-1)\log\frac{\hat{p}_{3}}{1-\hat{p}_{3}}+\log\hat{p}_{3}. \tag{4.6}\] Setting \(\frac{\partial\hat{\Phi}(L)}{\partial a}\) and \(\frac{\partial\hat{\Phi}(L)}{\partial c}\) equal to zero yields \[\begin{cases}\frac{\partial\hat{\Phi}(L)}{\partial a}=\frac{\hat{p}_{1}}{a}+\frac{\hat{p}_{3}-1}{a+c+1}=0\\ \frac{\partial\hat{\Phi}(L)}{\partial c}=\frac{\hat{p}_{2}}{c}+\frac{\hat{p}_{3}-1}{a+c+1}=0.\end{cases} \tag{4.7}\] The above system of equations can be solved explicitly, and combining the solution with (4.5) yields \[(\hat{a},\hat{b},\hat{c})=\Big{(}\frac{\hat{p}_{1}}{\hat{p}_{0}},\frac{\sqrt{\hat{p}_{1}\hat{p}_{2}-\hat{p}_{0}\hat{p}_{3}}}{\hat{p}_{0}},\frac{\hat{p}_{2}}{\hat{p}_{0}}\Big{)}, \tag{4.8}\] from which we see that this critical point exists only if \(\hat{p}_{0}>0\) and \(\hat{p}_{1}\hat{p}_{2}-\hat{p}_{0}\hat{p}_{3}\geq 0\). Again by the strong law of large numbers, the second critical point also exists and converges to the true value almost surely. In fact, we have, almost surely, \[\frac{\hat{p}_{1}}{\hat{p}_{0}}\to\frac{p_{1}}{p_{0}}=a^{*},\quad\frac{\sqrt{\hat{p}_{1}\hat{p}_{2}-\hat{p}_{0}\hat{p}_{3}}}{\hat{p}_{0}}\to\frac{\sqrt{p_{1}p_{2}-p_{0}p_{3}}}{p_{0}}=b^{*},\quad\frac{\hat{p}_{2}}{\hat{p}_{0}}\to c^{*}.\] Furthermore, we can establish the central limit theorem for the estimator (4.8), which corresponds to the result in Theorem 3.7.
**Theorem 4.1**.: _Assume \(b>0\), then the estimator \((\hat{a},\hat{b},\hat{c})\) in (4.8) is asymptotically normal,_ \[\sqrt{n}((\hat{a},\hat{b},\hat{c})-(a^{*},b^{*},c^{*}))\xrightarrow[n\longrightarrow \infty]{}\mathcal{N}(\mathbf{0},-V(a^{*},b^{*},c^{*})), \tag{4.9}\] _where the convergence holds in distribution and \(V(a^{*},b^{*},c^{*})\) is the inverse of the Hessian matrix of the expected maximum likelihood function \(\Phi(a,b,c)=p_{1}\log a+p_{2}\log c+p_{3}\log(ac-b^{2})-\log[(a+1)(c+1)-b^{2}]\)._ Proof.: Let \(Z_{1},...,Z_{n}\) be n independent subsets of \(Z\sim\operatorname{DPP}(L^{*})\), where \(L^{*}=\begin{pmatrix}a^{*}&b^{*}\\ b^{*}&c^{*}\end{pmatrix}\). Let \(X_{i}\) be the random vector \((\mathbb{I}_{\{Z_{i}=\emptyset\}},\mathbb{I}_{\{Z_{i}=\{1\}\}},\mathbb{I}_{\{Z _{i}=\{2\}\}},\mathbb{I}_{\{Z_{i}=\{1,2\}\}})^{T}\), where \(\mathbb{I}_{\{\cdot\}}\) stands for the indicator random variable. Then \(X_{i}\) has mean \(\boldsymbol{\mu}=(p_{0},p_{1},p_{2},p_{3})^{T}\) and covariance matrix \[\boldsymbol{\Sigma}=\begin{pmatrix}p_{0}-p_{0}^{2}&-p_{0}p_{1}&-p_{0}p_{2}&-p_{ 0}p_{3}\\ -p_{0}p_{1}&p_{1}-p_{1}^{2}&-p_{1}p_{2}&-p_{1}p_{3}\\ -p_{0}p_{2}&-p_{1}p_{2}&p_{2}-p_{2}^{2}&-p_{2}p_{3}\\ -p_{0}p_{3}&-p_{1}p_{3}&-p_{2}p_{3}&p_{3}-p_{3}^{2}\end{pmatrix}\,.\] By central limit theorem, \(\sqrt{n}(\overline{X}_{n}-\boldsymbol{\mu})\) converges to a multivariate normal distribution with mean \(\boldsymbol{0}\) and covariance \(\boldsymbol{\Sigma}\). Let a function \(g:\mathbb{R}^{4}\to\mathbb{R}^{3}\) be defined by \[g(x_{1},x_{2},x_{3},x_{4})=(\frac{x_{2}}{x_{1}},\frac{\sqrt{x_{2}x_{3}-x_{1}x _{4}}}{x_{1}},\frac{x_{3}}{x_{1}}).\] Its Jacobi matrix \(\dot{g}(\boldsymbol{x})=\left[\frac{\partial g_{i}}{\partial x_{j}}\right]_{ 3\times 4}\) is given by \[\begin{pmatrix}-\frac{x_{2}}{x_{1}^{2}}&\frac{1}{x_{1}}&0&0\\ -\frac{x_{4}}{2x_{1}\sqrt{x_{2}x_{3}-x_{1}x_{4}}}-\frac{\sqrt{x_{2}x_{3}-x_{1 }x_{4}}}{x_{1}^{2}}&\frac{x_{3}}{2x_{1}\sqrt{x_{2}x_{3}-x_{1}x_{4}}}&\frac{x_{ 2}}{2x_{1}\sqrt{x_{2}x_{3}-x_{1}x_{4}}}&-\frac{1}{2\sqrt{x_{2}x_{3}-x_{1}x_{4 }}}\\ -\frac{x_{3}}{x_{1}^{2}}&0&\frac{1}{x_{1}}&0\end{pmatrix}.\] Now we are in the position to apply Delta method [13] to get \[\sqrt{n}\Big{(}(\hat{a},\hat{b},\hat{c})-(a^{*},b^{*},c^{*})\Big{)}=\sqrt{n} \Big{(}g(\overline{X}_{n})-g(\boldsymbol{\mu})\Big{)}\xrightarrow{d}\mathcal{ N}(\boldsymbol{0},\dot{g}(\boldsymbol{\mu})\boldsymbol{\Sigma}\dot{g}( \boldsymbol{\mu})^{\prime}).\] After tedious matrix computations, \(\dot{g}(\boldsymbol{\mu})\boldsymbol{\Sigma}\dot{g}(\boldsymbol{\mu})^{\prime}\) is found to be \[D\begin{pmatrix}(a^{*}+{a^{*}}^{2})&\sigma_{12}&\sigma_{13}\\ \sigma_{12}&\frac{\frac{a^{*}c^{*}}{b^{*2}}-1}{4}D+\frac{a^{*}+c^{*}+4a^{*}c^ {*}}{4}&\sigma_{23}\\ \sigma_{13}&\sigma_{23}&c^{*}+{c^{*}}^{2}\end{pmatrix},\] where \[\begin{cases}D=&(a^{*}+1)(c^{*}+1)-{b^{*}}^{2}\,;\\ \sigma_{12}=&(\frac{a^{*}c^{*}}{2b^{*}}+a^{*}b^{*}+\frac{a^{*}}{2b^{*}}(a^{*}c ^{*}-{b^{*}}^{2}))\,;\\ \sigma_{13}=&a^{*}c^{*}\,;\\ \sigma_{23}=&\frac{a^{*}c^{*}}{2b^{*}}+b^{*}c^{*}+\frac{c^{*}}{2b^{*}}(a^{*}c ^{*}-{b^{*}}^{2})\,.\end{cases}\] It is straightforward to verify the above matrix is the inverse of the Hessian matrix of the expected maximum likelihood function \(\Phi(L)\), that is, \(-V(a^{*},b^{*},c^{*})\), which in turn verifies Theorem 3.7 in this special case. However, in this two-by-two case, our maximum likelihood estimator is unique without the maneuver of the identifiability 3.13. 
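As a concrete illustration of the closed-form estimates (4.4) and (4.8), the following short Python sketch (not part of the original paper) computes them from empirical frequencies. The sample data and the simple selection rule between the two critical points are illustrative assumptions; a fully rigorous implementation would compare the likelihood (4.1) at the two candidates.

```python
import numpy as np

def mle_2x2(samples):
    """Closed-form estimate of a 2x2 L-ensemble from observed subsets of {1, 2}."""
    n = len(samples)
    sets = [frozenset(s) for s in samples]
    # Empirical probabilities of the four possible outcomes.
    p0 = sets.count(frozenset()) / n
    p1 = sets.count(frozenset({1})) / n
    p2 = sets.count(frozenset({2})) / n
    p3 = sets.count(frozenset({1, 2})) / n

    disc = p1 * p2 - p0 * p3
    if p0 > 0 and disc >= 0:
        # Interior critical point (4.8); the off-diagonal entry is recovered up to sign.
        return p1 / p0, np.sqrt(disc) / p0, p2 / p0
    # Otherwise fall back to the critical point (4.4) with b = 0.
    return (p1 + p3) / (p0 + p2), 0.0, (p2 + p3) / (p0 + p1)

# Hypothetical data: eight draws from an unknown 2x2 DPP.
draws = [set(), {1}, {2}, {1, 2}, {1}, set(), {2}, {1}]
a_hat, b_hat, c_hat = mle_2x2(draws)
```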
This idea can be extended to block ensembles whose blocks are two-by-two submatrices. Suppose \(L^{\star}\) is a matrix with \(k\) two-by-two blocks \(J_{1},...,J_{k}\), \[\begin{pmatrix}a_{1}&b_{1}&&&&\\ b_{1}&c_{1}&&&\\ &&a_{2}&b_{2}&&\\ &&b_{2}&c_{2}&&\\ &&&\ddots&\\ &&&a_{k}&b_{k}\\ &&&&b_{k}&c_{k}\end{pmatrix}, \tag{4.10}\] where for each \(1\leq i\leq k\), \(a_{i}\), \(b_{i}\), \(c_{i}>0\) and \(a_{i}c_{i}-b_{i}^{2}\geq 0\). Let the ground set \(\mathcal{Y}\) of this DPP be \(\{J_{1}^{1},J_{1}^{2},J_{2}^{1},J_{2}^{2},...,J_{k}^{1},J_{k}^{2}\}\) and for each \(1\leq i\leq k\), \[\hat{p}_{J_{i}}^{0}= \frac{1}{n}\sum_{m=1}^{n}\mathbb{I}\{J_{i}^{1}\notin Z_{m},J_{i}^{2}\notin Z_{m}\} \tag{4.11}\] \[\hat{p}_{J_{i}}^{1}= \frac{1}{n}\sum_{m=1}^{n}\mathbb{I}\{J_{i}^{1}\in Z_{m},J_{i}^{2}\notin Z_{m}\}\] (4.12) \[\hat{p}_{J_{i}}^{2}= \frac{1}{n}\sum_{m=1}^{n}\mathbb{I}\{J_{i}^{1}\notin Z_{m},J_{i}^{2}\in Z_{m}\}\] (4.13) \[\hat{p}_{J_{i}}^{3}= \frac{1}{n}\sum_{m=1}^{n}\mathbb{I}\{J_{i}^{1}\in Z_{m},J_{i}^{2}\in Z_{m}\}, \tag{4.14}\] where \(Z_{1},...,Z_{n}\) are \(n\) independent subsets drawn from DPP\((L^{\star})\). By Proposition 3.3, \(Z\cap J_{1},...,Z\cap J_{k}\) are mutually independent. Then the two-by-two result for the critical point can be applied block by block: \[(\hat{a}_{i},\hat{b}_{i},\hat{c}_{i})=\begin{pmatrix}\frac{\hat{p}_{J_{i}}^{1}}{\hat{p}_{J_{i}}^{0}},\frac{\sqrt{\hat{p}_{J_{i}}^{1}\hat{p}_{J_{i}}^{2}-\hat{p}_{J_{i}}^{0}\hat{p}_{J_{i}}^{3}}}{\hat{p}_{J_{i}}^{0}},\frac{\hat{p}_{J_{i}}^{2}}{\hat{p}_{J_{i}}^{0}}\end{pmatrix}, \tag{4.15}\] for every \(1\leq i\leq k\). However, the above method is fraught with difficulties when the kernel has dimension higher than 2. For example, if the kernel is a \(3\times 3\) matrix \[\begin{pmatrix}a&d&e\\ d&b&f\\ e&f&c\end{pmatrix},\] then setting the gradient of the likelihood function \(\hat{\Phi}(L)\) equal to zero yields \[\mathrm{d}\hat{\Phi}(L)=\sum_{J\subseteq[3]}\hat{p}_{J}L_{J}^{-1}-(L+I)^{-1}=0\,.\] Computing \(L_{J}^{-1}\) for all \(J\) and \((L+I)^{-1}\) could be troublesome. For example, \(L^{-1}\) is \[\frac{1}{a(bc-f^{2})-d(cd-ef)+e(df-be)}\begin{pmatrix}bc-f^{2}&-cd+ef&-be+df\\ -cd+ef&ac-e^{2}&de-af\\ -be+df&de-af&ab-d^{2}\end{pmatrix},\] which is difficult to use to obtain an explicit maximum likelihood estimator.

## 5 Conclusion

In this paper, we study maximum likelihood estimation of the ensemble matrix of a determinantal point process. Brunel et al show that the expected likelihood function \(\Phi(L)\) is locally strongly concave around the true value \(L^{\star}\) if and only if \(L^{\star}\) is irreducible, since the Hessian matrix of \(\Phi(L)\) at \(L^{\star}\) is then negative definite. They prove that the maximum likelihood estimator (MLE) is consistent in the sense of convergence in probability, and when \(L^{\star}\) is irreducible they also obtain a central limit theorem for the MLE. Motivated by their results, we show that the MLE is also strongly consistent in the sense of almost sure convergence. Moreover, we obtain a Berry-Esseen type result for the central limit theorem and find an \(n^{-\frac{1}{4}}\) rate of convergence of the MLE to normality. Last, we obtain the explicit form of the MLE when \(L^{\star}\) is a two-by-two matrix or a block matrix whose blocks are two-by-two matrices. The strong consistency and the central limit theorem follow from these explicit forms, which illustrates the general strong consistency and central limit theorem proved earlier. It would be interesting to find the explicit form for some particular higher-dimensional DPPs.
However, as maximum likelihood learning of DPPs is proven to be NP-hard, the explicit form for general ensembles, even if it were found, would be very difficult to compute. In addition to the maximum likelihood estimator there are also other approaches. Let us mention only one alternative. For all \(J\) such that \(|J|\leq 2\), we let \[\frac{\det(L_{J})}{\det(L+I)}=\hat{p}_{J}, \tag{5.1}\] where the left hand side is the theoretical probability of \(J\) and the right hand side is the empirical probability of \(J\). Taking \(J=\{i\}\) suggests the following estimator for \(L_{ii}\): \[\hat{L}_{ii}=\frac{\hat{p}_{i}}{\hat{p}_{0}}. \tag{5.2}\] Using equations (5.1) for \(|J|=2\), we are able to determine the off-diagonal elements up to sign: \[\hat{L}_{ij}^{2}=\frac{\hat{p}_{i}\hat{p}_{j}-\hat{p}_{0}\hat{p}_{\{i,j\}}}{\hat{p}_{0}^{2}}, \tag{5.3}\] where \(i\neq j\). Notice that this is the maximum likelihood estimator when \(L\) is two-dimensional. There remains the question of how to choose the signs of \(\hat{L}_{ij}\) in (5.3), which has been resolved by [14] using graph theory.
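As an illustration of this alternative, here is a minimal Python sketch (not from the original paper) that applies (5.2) and (5.3) to empirical frequencies. It recovers the diagonal of \(L\) and only the magnitudes of the off-diagonal entries, leaving the sign assignment, attributed above to [14], unresolved; it also assumes the empty set occurs in the sample so that \(\hat{p}_{0}>0\).

```python
from collections import Counter
import numpy as np

def moment_estimator(samples, N):
    """Entrywise estimate of |L| via (5.2)-(5.3) from subsets of {0, ..., N-1}."""
    n = len(samples)
    counts = Counter(frozenset(s) for s in samples)
    p0 = counts[frozenset()] / n          # assumes p0 > 0
    L_hat = np.zeros((N, N))
    for i in range(N):
        p_i = counts[frozenset({i})] / n
        L_hat[i, i] = p_i / p0            # equation (5.2)
        for j in range(i + 1, N):
            p_j = counts[frozenset({j})] / n
            p_ij = counts[frozenset({i, j})] / n
            val = (p_i * p_j - p0 * p_ij) / p0 ** 2   # equation (5.3)
            # off-diagonal signs are left unresolved; store the magnitude
            L_hat[i, j] = L_hat[j, i] = np.sqrt(max(val, 0.0))
    return L_hat
```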
2302.05095
A Review on Orbital Angular Momentum With the Approach of Using in Fifth-Generation Mobile Communications
In this paper, different generations of mobile communication have been concisely mentioned. The need for advanced antenna systems capable of sending and receiving massive data is felt in the fifth generation of mobile communication. The beamforming method and multi-input multi-output systems (MIMO) are the proposed solutions to increase the channel capacity of the communication network. Orbital angular momentum (OAM), an inherent feature of electromagnetic waves, is a suitable solution to increase channel capacity. This feature will increase the channel capacity by producing orthogonal modes. Using antenna arrays is an effective way to produce these modes. The results of FEKO simulations show the capability of this method.
Seyed Ali Khorasani
2023-02-10T07:37:13Z
http://arxiv.org/abs/2302.05095v1
A Review on Orbital Angular Momentum With the Approach of Using in Fifth-Generation Mobile Communications ###### Abstract In this paper, different generations of mobile communication have been concisely mentioned. The need for advanced antenna systems capable of sending and receiving massive data is felt in the fifth generation of mobile communication. The beamforming method and multi-input multi-output systems (MIMO) are the proposed solutions to increase the channel capacity of the communication network. Orbital angular momentum (OAM), an inherent feature of electromagnetic waves, is a suitable solution to increase channel capacity. This feature will increase the channel capacity by producing orthogonal modes. Using antenna arrays is an effective way to produce these modes. The results of FEKO simulations show the capability of this method. _Keywords-- fifth-generation of mobile communications, channel capacity of the communication network, array antennas, orbital angular momentum_ ## I Introduction The first generation of mobile communication networks was introduced in 1979. The defects of this technology did not cause it to be abandoned so that by 1990, more than 20 million users were using it. Due to challenges such as poor coverage, low voice quality, lack of inter-operator roaming services, and most importantly lack of encryption of conversations, the research into the second generation (2G) systems began. The second generation of communication was first accomplished in 1991 under the GSM standard. For the first time, conversations were encrypted and digital voice calls were much clear than before. That was the first time people sent short messages, video messages and multimedia messages to each other. Despite the slow speed of communication, this generation can be considered a revolution in the communication systems. The third generation of communication was used in 2001. The speed increase to more than 4 times of the second generation, and the possibility of making video calls and international roaming were among the features of this generation. The fourth generation had the capability of fast access to web pages, high-quality video calls and a speed of over 1 gigabit per second (1 Gbps) [1, 2]. The main point was the equipment required for this technology. This opportunity led to significant progress, especially in the design of mobile phones. ## II Fifth Generation Mobile Communications By comparing the fourth and fifth generations of mobile communication systems, it can be seen that the data transfer speed has improved 10 to 100 times, which means the data transfer speed will reach almost 10 gigabits per second (10 Gbps). 5G technology achieves a latency of less than 1 millisecond. To figure out this time, it is enough to know that it takes about 250 milliseconds for a human to react. It is possible to perform systems such as intelligent vehicles, remote surgeries, and smart power networks that require a platform with almost zero latency by this feature. Increasing the network capacity is one of the advances of the fifth generation compared to others, which enables another concept called the Internet of Things. Advanced antenna systems are suitable to use in current and future telecommunication networks [3]. Today is the best time to replace old systems with advanced antennas. 
The advantages of these antennas can be listed as follows: Proper performance in connection with satellites and ground users, economic justification, cost reduction of digital processing in beamforming technology, and in MIMO [4, 5, 6]. This is suitable for operators who want to improve their coverage and increase their capacity. The fifth-generation network standards should meet the requirements such as increasing capacity, being suitable for short-range communication with the least dissipations, controlling path dissipations limits, and simultaneous multi-user connection with the least latency. Millimeter waves are strongly influenced by atmospheric conditions and raindrops and may even be absorbed by oxygen molecules and water vapor. The solution to this problem is to reduce the range covered by each antenna; since the distance increases, the dissipations increase exponentially. Delay and throughput are two main factors in advanced communication systems. Antennas play an essential role in the fifth generation of mobile communication, so adaptable and dynamic antennas in different conditions are required. By designing a new configuration, it is possible to provide a mechanism for the antenna to select its frequency band. One way to increase the channel capacity is to use MIMO array antennas [7]. The above system can simultaneously support several communication channels, each of which works separately. The dimensions of the antennas are enlarged at frequencies less than 3 GHz and cause limitations for integrating a large number of antennas. The antenna's dimensions become smaller in the spectrum of millimeter waves, which helps multi-channel transmission. Integrating broadband antennas increases the total output power of the system. It is also necessary to use high-gain antennas to deal with dissipation caused by working at high frequencies. According to a general rule, increasing the dimensions of the antenna leads to an increase in the antenna's directivity. In fact, by arraying an antenna, the dimensions of the antenna increase. One of problems of array antennas is the presence of unwanted extra grating lobes in the horizontal and vertical planes of the radiation pattern, which correct placement will be a suitable solution to solve this. ## III Orbital angular momentum of electromagnetic waves In electrodynamics, fields can also carry momentum, unlike gravity which only bodies can. Electromagnetic fields can store energy which causes the conservation of momentum in electrodynamics. The electromagnetic force on a density of charge is: \[\vec{F}=\int_{\nu}(\vec{E}+\vec{v}\times\vec{B})\rho d\tau=\int_{\nu}(\rho\vec{ E}+\vec{j}\times\vec{B})d\tau \tag{1}\] We introduce Maxwell's stress tensor to describe the force per unit volume easier. \[T_{ij}=\epsilon_{0}\left(E_{i}E_{j}-\frac{1}{2}\delta_{ij}E^{2}\right)+\frac{1 }{\mu_{0}}(B_{i}B_{j}-\frac{1}{2}\delta_{ij}B^{2}) \tag{2}\] The above subscripts are related to the components of the Cartesian coordinates, so the tensor has 9 components. 
Using the above tensor, the force per unit volume is: \[\vec{f}=\vec{\nabla}\cdot\vec{T}-\epsilon_{0}\mu_{0}\frac{\partial\vec{S}}{\partial t} \tag{3}\] By integrating both sides over a volume and using the divergence theorem, the total force on the charge density is: \[\vec{F}=\oint_{S}\vec{T}\cdot\overrightarrow{da}-\epsilon_{0}\mu_{0}\frac{d}{dt}\int_{\nu}\vec{S}d\tau \tag{4}\] By comparing the above expression with Newton's second law, an expression similar to Poynting's theorem is obtained, in which the first integral represents the momentum stored in the electromagnetic fields and the second integral represents the momentum passing through the surface per unit of time. In general, the linear and angular momentum of electromagnetic fields are as follows: \[\vec{P}=\epsilon_{0}\big{(}\vec{E}\times\vec{B}\big{)} \tag{5}\] \[\vec{L}=\vec{r}\times\vec{P}=\epsilon_{0}[\vec{r}\times\big{(}\vec{E}\times\vec{B}\big{)}] \tag{6}\] The above expressions show that even static fields can carry linear and angular momentum, provided the product \(\vec{E}\times\vec{B}\) is non-zero [8]. As has been said before, electromagnetic waves carry energy and momentum. Momentum itself divides into linear momentum and angular momentum, and angular momentum consists of spin angular momentum (SAM), which is associated with the wave polarization, and orbital angular momentum (OAM). The connection between them can be seen in how electrons move around the nucleus of an atom: the momentum associated with the motion of the electrons around the nucleus corresponds to the orbital angular momentum, and the momentum caused by the rotation of the electrons about their own axes corresponds to the spin angular momentum. The idea of using orbital angular momentum was first proposed in connection with optical vortices. In an optical vortex, the constant-phase planes of the electric and magnetic fields move spirally in the direction of propagation. One of the characteristics of the vortex is the topological charge number, which indicates the number of twists of the light about the propagation axis within one wavelength. The larger this number is, the more twists there are. Theoretically, the angular momentum carried by an optical vortex has an infinite number of eigenstates defined in an infinite-dimensional Hilbert space [9]. That is why orbital angular momentum has numerous applications in telecommunications [10, 11, 12, 13]. If the orbital angular momentum of photons is fully used to carry and share information, the capacity of a photon increases dramatically. As a result, the transmission capacity of single-mode and single-wavelength fibers also increases. The capabilities of orbital angular momentum are not limited to the visible light spectrum. For example, this concept can also be implemented in wireless communication at radio frequencies below the visible spectrum. Applications of orbital angular momentum in wireless communications and in underwater acoustic communications (which are very difficult due to high dissipations, multiple propagation paths, small bandwidths, etc.) have created new research fields. The magnitude of the vortex wave field at the center of the propagation axis is zero; therefore, it is displayed in black. The spiral structure of these waves is described by the exponential function \(\exp(il\theta)\), where \(\theta\) is the transverse (azimuthal) angle and \(l\) is the topological charge (orbital angular momentum mode). One of the most significant features of orbital angular momentum beams is their orthogonality.
As a result, the inner product of two beams with different topological charge numbers will be: \[\int_{0}^{2\pi}e^{i\ell_{1}\theta}\left(e^{i\ell_{2}\theta}\right)^{*}d\theta=\begin{cases}0&\ell_{1}\neq\ell_{2}\\ 2\pi&\ell_{1}=\ell_{2}\end{cases} \tag{7}\] Due to this, waves with different topological charge numbers can be treated as separate information channels along the same transmission path. These extra dimensions lead to an increase in system capacity. One of the most fundamental ways to produce these waves is to use a Spiral Phase Plate (Fig. 1) [14, 15].

Fig. 1: Topological Charge Number (Increasing from Left to Right)

In summary, these plates create a spiral wavefront by imposing different phase delays on different parts of the wavefront. Using metamaterial reflectors is another way to produce these waves. Antenna arrays can be considered as yet another way of generating waves based on orbital angular momentum. Uniform circular array antennas are the most common choice for producing electromagnetic vortex waves [16, 17]. This array consists of \(N\) components that are uniformly placed on the circumference of a circle. Each component is fed by its corresponding signal, which is equal in magnitude to the others but has a phase difference of \(\Delta\varphi=\frac{2\pi l}{N}\) with its adjacent component. Here \(l\) is the orbital angular momentum mode (topological charge), which means that the total phase progression around the array is \(2\pi l\). These waves can be separated by using the orthogonality property explained earlier [18]. In the receiver, phase recovery is the main key to determining the different modes. One of the ways to recover the waves is to use reverse spiral plates.

## IV Simulation

In this section, the phase pattern of the electric field of the antenna arrays is extracted using the FEKO software. At first, current point sources at a frequency of 10 GHz are used as excitation sources. It can be seen that the phase pattern of the electric field caused by a single point current source corresponds to mode zero. The phase patterns of higher modes are obtained by increasing the number of sources. Fig. 2 shows the electric field phase pattern for modes 0, 1, 2, 3, 4, 5, and 6. By considering how the number of sources changes with the generated modes, a relationship can be reached for the minimum number of sources needed to produce waves based on orbital angular momentum: \[N\geq 2|l|+1 \tag{8}\] where \(l\) and \(N\) are the topological charge number and the minimum number of array elements, respectively. Table 1 shows the minimum number of sources required to achieve the desired mode. Now, the current point sources are replaced by dipole antennas. The length of this antenna is chosen so that its electric field differs, in the mean-square-error sense, by at most 10% from the field caused by the current point sources. The simulation has been done at two frequencies, 3 GHz and 86 GHz, using this antenna array. Fig. 3 shows the phase pattern of this antenna array at the frequency of 3 GHz, with the length of each antenna equal to \(10\lambda\). As can be seen, the phase pattern of the electric field in the desired modes was obtained by using this antenna array. Studying the electric field magnitude patterns in the E- and H-planes reveals a large difference between the two arrays. The reason is that the point sources radiate all of their power, while the dipole antennas reflect most of it. The S-parameter diagram of the array confirms this.
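To illustrate the principle behind these array simulations, the following stand-alone Python sketch (not the FEKO model used in the paper) superposes \(N\) isotropic point sources placed uniformly on a circle, fed with equal amplitudes and a progressive phase of \(2\pi l/N\), and evaluates the phase of the resulting field on a transverse observation plane. The array radius, observation distance, and window size are illustrative assumptions.

```python
import numpy as np

c = 3e8
f = 10e9                      # 10 GHz excitation, as in the point-source case
lam = c / f                   # wavelength
k = 2 * np.pi / lam           # free-space wavenumber

def circular_array_phase(l, N, radius=lam, z=5 * lam, extent=4 * lam, npts=201):
    """Phase map of an N-element uniform circular array fed for OAM mode l."""
    phi_n = 2 * np.pi * np.arange(N) / N            # angular positions of the elements
    xe, ye = radius * np.cos(phi_n), radius * np.sin(phi_n)
    x = np.linspace(-extent, extent, npts)
    X, Y = np.meshgrid(x, x)
    E = np.zeros_like(X, dtype=complex)
    for xn, yn, pn in zip(xe, ye, phi_n):
        R = np.sqrt((X - xn) ** 2 + (Y - yn) ** 2 + z ** 2)
        # spherical wave from each element, weighted by the mode-l feeding phase
        E += np.exp(-1j * k * R) / R * np.exp(-1j * l * pn)
    return np.angle(E)        # spiral phase pattern with l twists around the axis

# Relation (8) suggests at least N = 2|l| + 1 elements for mode l.
phase_map = circular_array_phase(l=2, N=5)
```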
The electric field of such antenna arrays is proportional to a spatial sum in which several factors appear: the angular momentum mode, the distance of each element from the observation point, and the spatial phase difference between the array elements [19]. \[E(r,\varphi,z)\propto\sum_{n=1}^{N}a_{n}\,\frac{e^{-i\alpha R_{n}}}{R_{n}}\exp(-il\phi_{n}) \tag{9}\] \[R_{n}=\sqrt{a^{2}+z^{2}+r^{2}-2ar\cos(\varphi-\varphi_{n})} \tag{10}\] From antenna theory, the electric field produced by an array is always proportional to an expression called the array coefficient, which has its own form and depends on what type of array is used, whether linear, planar, or circular. In the following, the coefficient of a circular array is compared with the above expression: \[E(r,\vartheta,\varphi)=\sum_{n=1}^{N}a_{n}\,\frac{e^{-i\alpha R_{n}}}{R_{n}} \tag{11}\] Here \(R_{n}\) is the distance of the \(n\)th element from the calculation point, and \(a_{n}\) is the excitation coefficient of the \(n\)th element. By comparing the two expressions for the electric field, it can be seen that an ordinary circular array is simply the special case \(l=0\) of the OAM-generating array. In fact, by adding the phase associated with the different modes to an ordinary array, we can obtain an array with the ability to generate those modes. According to the above expression, by increasing the number of array elements, the magnitude of the field also increases. Generally, increasing the dimensions of the antenna leads to an increase in the magnitude and directivity of the radiation pattern. Actually, by increasing the number of elements, the dimensions of the antenna increase.

\begin{table} \begin{tabular}{|c|c|} \hline **Desired Mode** & **Min No. of Point Sources** \\ \hline 1 & 3 \\ \hline 2 & 5 \\ \hline 3 & 7 \\ \hline 4 & 9 \\ \hline 5 & 11 \\ \hline \end{tabular} \end{table} Table 1: Minimum Number of Point Sources to Produce Desired Mode

Figure 2: Phase Pattern of Electric Field due to Point Source Array

Figure 3: Phase Pattern of Electric Field due to Array of Antennas

Until now, the array elements were regularly arranged on the circumference of a circle with a radius equal to the wavelength. Now these elements are placed in the environment of a smartphone, and the results are examined. The elements of the array are placed once at the centers of the sides of the largest surface of the phone (irregular) and once on the vertices of a square matching the width of the phone (regular) (Fig. 4). These simulations are performed at 3 GHz and 86 GHz. The phase distribution pattern of the electric field of the mobile phone antenna array in the regular and irregular configurations is shown in Fig. 5 and Fig. 6. By studying these two figures, it is clear that the phase pattern in the regular configuration is better. In the following, the S-parameter diagram is drawn for the regular array at 3 GHz (Fig. 7). From this diagram, it can be concluded that the coupling between different inputs is extremely low (off-diagonal entries of the scattering matrix), while the reflection between each input and itself (diagonal entries of the scattering matrix) is almost zero decibels. As a result, almost all of the energy reaching the input is reflected, and very little of it is radiated.
Because most of the input power is reflected rather than radiated, the dipole antenna arrays have lower electric field amplitude levels than arrays of (hypothetical) point sources. It is also observed that the regular arrangement generally produces a better electric field phase pattern than the irregular one. None of the arrays above is an ideal radiator, since most of the power delivered to the antennas is not radiated. The desired electric field phase pattern cannot be achieved at 86 GHz because the antenna length is very small compared to the spacing between the array elements. Simulations of circular arrays of point sources, by contrast, give a favorable pattern even at high frequencies, because in those simulations the element spacing scales with the frequency. In the S-parameter matrix of the arrays, the magnitude of the off-diagonal elements decreases dramatically with increasing frequency. In addition to raising the electric field amplitude, increasing the number of array elements improves the phase distribution pattern. ## Conclusion Fifth-generation mobile communication requires advanced antenna systems, and increasing the capacity of the communication channel is one of its main concerns. Exploiting orbital angular momentum, an inherent property of electromagnetic waves, can address this issue. Antenna arrays based on orbital angular momentum (OAM) can generate waves in the desired modes, which are orthogonal to one another. This feature gives fifth-generation mobile communication channels a greater capacity. ## Acknowledgment The author would like to express his great appreciation to Mr. Seyed Amir Hossein Khorasani for his valuable and constructive suggestions during the planning and development of this research work. His willingness to give his time so generously is greatly appreciated.
2305.15221
CHIANTI -- an atomic database for emission lines -- Paper XVII: Version 10.1, revised ionization and recombination rates and other updates
The CHIANTI atomic database provides sets of assessed data used for simulating spectral observations of astrophysical plasmas. This article describes updates that will be released as version~10.1 of the database. A key component of CHIANTI is the provision of ionization and recombination rates that are used to compute the ionization balance of a plasma over a range of temperatures. Parameters for calculating the ionization rates of all stages of ions from H through Zn were compiled and inserted into the CHIANTI database in 2009. These were based on all measurements that were available at the time and supplemented with distorted wave calculations. Since then, there have been a number of new laboratory measurements for ions that produce spectral lines that are commonly observed. Parameters have been fit to these new measurements to provide improved ability to reproduce the ionization cross sections and rate coefficients, and these are added to the database. CHIANTI 10.1 also includes new recombination rates for the phosphorus isoelectronic sequence, and the updated ionization and recombination rates have been used to calculate a new ionization equilibrium file. In addition, CHIANTI 10.1 has new electron collision and radiative datasets for eight ions in the nitrogen and oxygen isoelectronic sequences, and updated energy level and wavelength data for seven other ions.
Kenneth Dere, Giulio Del Zanna, Peter Young, Enrico Landi
2023-05-24T15:03:37Z
http://arxiv.org/abs/2305.15221v2
CHIANTI - an atomic database for emission lines - Paper XVII: Version 10.1, revised ionization and recombination rates and other updates ###### Abstract The CHIANTI atomic database provides sets of assessed data used for simulating spectral observations of astrophysical plasmas. This article describes updates that will be released as version 10.1 of the database. A key component of CHIANTI is the provision of ionization and recombination rates that are used to compute the ionization balance of a plasma over a range of temperatures. Parameters for calculating the ionization rates of all stages of ions from H through Zn were compiled and inserted into the CHIANTI database in 2009. These were based on all measurements that were available at the time and supplemented with distorted wave calculations. Since then, there have been a number of new laboratory measurements for ions that produce spectral lines that are commonly observed. Parameters have been fit to these new measurements to provide improved ability to reproduce the ionization cross sections and rate coefficients, and these are added to the database. CHIANTI 10.1 also includes new recombination rates for the phosphorus isoelectronic sequence, and the updated ionization and recombination rates have been used to calculate a new ionization equilibrium file. In addition, CHIANTI 10.1 has new electron collision and radiative datasets for eight ions in the nitrogen and oxygen isoelectronic sequences, and updated energy level and wavelength data for seven other ions. atomic data -- atomic processes -- Sun: UV radiation -- Sun: X-rays, gamma rays -- Ultraviolet: general -- X-rays: general ## 1 Introduction CHIANTI is an atomic database and software package used for modeling optically-thin emission from astrophysical plasmas. It has been an open-source project since the first release in 1996 (Dere et al., 1997), and the data and software are available at [https://chiantidatabase.org](https://chiantidatabase.org). The most recent previous release is CHIANTI 10 (Del Zanna et al., 2021), and the present article describes the updates and new features of the 10.1 release. Young et al. (2016) and Del Zanna and Young (2020) summarize the contents of the modern version of the database and provide applications. The spectral emission from ionized astrophysical plasmas is a key to our understanding of the processes that form and govern these regions. A key to the analysis of these spectra is a knowledge of the ionization state. For plasmas in a collisional ionization equilibrium, such as the solar corona, the equilibrium is largely controlled by electron collisional ionization and electron collisional recombination. Collisional ionization consists of both direct ionization (DI) and excitation followed by autoionization (excitation-autoionization or EA). Recombination also consists of two different processes, radiative recombination (RR) and dielectronic recombination (DR). Dere (2007) analyzed a large number of measured ionization cross sections in order to make ionization cross sections and their associated ionization rate coefficients readily available through the CHIANTI atomic database (Dere et al., 1997, 2009; Del Zanna et al., 2021). Since that analysis, a number of important ionization cross sections have been measured and published. Here, these new measurements are analyzed in order to improve the calculation of ionization cross sections and rate coefficients in CHIANTI. Recombination has been the focus of a project described by Badnell et al. 
(2003) to systematically generate new DR and RR data for entire isoelectronic sequences. Previous versions of CHIANTI have used these rates for sequences from hydrogen to silicon. New calculations for the phosphorus sequence were presented by Bleda et al. (2022) and these are discussed in Sect. 2.15. A new ionization balance is calculated from the updated ionization and recombination rates and is discussed in Sect. 2.16. The level balance within an ion is determined by electron collision strengths, radiative decay rates and energy levels, and updates to these datasets are described in Sect. 3. ## 2 Revised Ionization and Recombination Rate Coefficients ### Approach The approach used by Dere (2007) was to compile a set of parameters for each ion to be able to reproduce the available ionization cross sections measurements. The first priority was to use measured cross sections and when these were not available, cross sections were calculated with the Flexible Atomic Code (FAC: Gu, 2002). In this paper, new measurements for 13 ions are examined. In all cases, this leads to new fits to laboratory measurements or adjustments to existing theoretical cross sections. Dere (2007) developed a scaling for ionization cross sections similar to that of Burgess & Tully (1992) for collision strengths. We define a scaled energy \(U\), \[U=1-\frac{\ln f}{\ln(u-1+f)} \tag{1}\] and a scaled cross section \(\Sigma\), \[\Sigma=\frac{u\,\sigma\,I^{2}}{\ln u+1} \tag{2}\] where \(u=E/I\), \(E\) is the energy of the incident electron, \(I\) is the ionization potential, \(\sigma\) is the ionization cross-section, and \(f\) is a scaling parameter that is selected by the data assessor. The scaled energy \(U\) varies from 0 at \(I\) to unity for infinite energy. The scaling parameter adjusts the placement of the scaled energies between these two limits. The scaling is chosen to spread the region just above the ionization potential so that the variation of \(\Sigma\) becomes more apparent and the fit to the measured values easier to evaluate. We refer to this process as BTI (Burgess-Tully-Ionization) scaling. A spline fit to the \(\Sigma\) values is performed after \(f\) has been chosen, and the spline values are stored in the diparams CHIANTI file. Typically nine or more spline nodes are used. Since the low energy behavior of \(\Sigma\) is very apparent, it is possible to determine if additional spline nodes are needed. This is often the case at the energy region just above the ionization potential, that is important for determining the electron ionization rate coefficient. This is illustrated in Sect. 2.14. An example of fitting BTI scaled measured cross sections is shown in Fig. 1 for S xiii. Here the BTI scaled measurements of Hahn et al. (2012a) are shown together with the current fit to the BTI scaled measurements below 2360 eV (\(U=0.55\)). The fit to the measurements provides the parameters to reproduce the direct ionization cross sections of the ion in question. The high energy limit, \(U=1\), is given by the Bethe cross section as discussed by Dere (2007). The deviation of the measurements from the fit between \(U=0.55\) and \(U=0.60\) shown in Fig. 1 is due to the excitation-autoionization (EA) component. Dere (2007) computed the EA parameters using FAC and in the present work adjustments are made to the threshold and/or the amplitude to best match the new measurements. The procedure for each ion examined is discussed in more detail later in the following Sections. 
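As a concrete illustration of the scaling in Eqs. (1)-(2), the short Python sketch below (not part of CHIANTI or the original paper; the threshold and cross-section numbers are invented purely for illustration) maps incident energies and cross sections onto the scaled variables \(U\) and \(\Sigma\) for a chosen value of the parameter \(f\).

```python
import numpy as np

def bti_scale(energy_ev, sigma_cm2, ip_ev, f):
    """Burgess-Tully-type scaling of ionization cross sections, Eqs. (1)-(2):
    U = 1 - ln(f)/ln(u - 1 + f) and Sigma = u * sigma * I**2 / (ln(u) + 1),
    with u = E/I.  U runs from 0 at threshold to 1 at infinite energy."""
    u = np.asarray(energy_ev, dtype=float) / ip_ev
    U = 1.0 - np.log(f) / np.log(u - 1.0 + f)
    Sigma = u * np.asarray(sigma_cm2, dtype=float) * ip_ev**2 / (np.log(u) + 1.0)
    return U, Sigma

if __name__ == "__main__":
    # Illustrative numbers only (not measured data): a 652 eV threshold and a
    # made-up direct-ionization curve in cm^2.
    ip = 652.0
    E = np.array([660.0, 1000.0, 2000.0, 5000.0, 2.0e4])
    sigma = 1e-19 * np.array([0.3, 2.0, 3.5, 2.5, 1.0])
    U, S = bti_scale(E, sigma, ip, f=2.0)
    for e, u_, s_ in zip(E, U, S):
        print(f"E = {e:8.0f} eV   U = {u_:.3f}   Sigma = {s_:.3e} eV^2 cm^2")
```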
Finally, we note that Dufresne & Del Zanna (2019) and Dufresne et al. (2020) recently calculated ionization cross sections for all ions of carbon and oxygen using FAC and AUTOSTRUCTURE (Badnell, 2011), and provided comparisons to the available experimental data. The calculations were focused on ionization from metastable levels, and will be made available in a future CHIANTI release. Cross sections from the present work are retained for carbon and oxygen to produce a consistent set of ionization data for all elements. Cross sections for collisional ionization from ground states in those works were in most cases within 15% of Dere (2007). ### The beryllium isoelectronic sequence Crossed beams measurements of the ionization cross sections of C iii, N iv, and O v made at the Oak Ridge National Laboratory have been presented by Fogle et al. (2008). The measurements of C iii and N iv show considerable cross sections below the ionization potential, indicating a beam containing a significant population in the 2s2p \({}^{3}\)P\({}_{0}\) metastable level. These measurements will not be used in this analysis but the O v measurements do not indicate the presence of significant populations of excited levels. As in SS 2.2.2, the direct ionization cross sections are represented by a fit to the BTI scaled cross sections of Fogle et al. (2008) below about 520 eV where EA processes become significant. The EA cross sections of Dere (2007) have been increased by a factor of 2.5 to match the measurements but the two EA thresholds remain at 552 and 620 eV. The O v measurements of Fogle et al. (2008), Dere (2007) and the current analysis are presented in Fig. 2. The peak cross sections near about 310 eV show that the current values are reduced by a factor of about 0.84 from the FAC values of Dere (2007). We note that earlier experimental data had significant limitations. The Crandall et al. (1979) and Falk et al. (1983) experimental results were in agreement with each other, but they were not able to determine the metastable fraction present. Loch et al. (2003) produced similar experimental values, although significant readings were present below threshold, indicating a linear offset in readings or contamination in the beam. All of the above cross sections were higher than theory by 30-50%. The Fogle et al. (2008) experimental cross sections were in good agreement with their R-Matrix calculations at low and high energies, while at the peak they were about 25% below. The Dufresne et al. (2020) calculations are in good agreement with the Fogle et al. (2008) ones. #### 2.2.2 S Xiii Storage ring measurements of S xiii ionization cross sections have been presented by Hahn et al. (2012a). A major problem with experimental measurements of ions in the beryllium isoelectronic sequence is that the ion beams can contain a significant population of ions in the 2s2p \({}^{3}\)P\({}_{0}\) metastable level. This results in cross section measurements Figure 1: BTI scaled ionization cross sections of S xiii. The label _Hahn_ refers to Hahn et al. (2012a), and _present_ to the results reported here. that show an enhanced cross section below the ionization potential of the ground level. As the authors explain, they used a beam of the \({}^{33}\)S isotope that has a nuclear spin. "The resulting hyperfine interaction inducing a mixing of the \({}^{3}\)P\({}_{1}\) and \({}^{3}\)P\({}_{0}\) levels, decreasing the lifetime of the \({}^{3}\)P\({}_{0}\) level." 
The beam of S xiii ions were kept in the ion storage ring long enough that the metastable levels decay to the ground state. Fig. 3 shows the present and previous cross sections together with the values of Hahn et al. (2012). The original FAC calculations indicated excitation-autoionization contributions at energies of 2412 and 2786 eV for excitations to the 1s 2s\({}^{2}\) 2p and 1s 2s\({}^{2}\) 3\(l\) levels, respectively. The energies of the EA contributions are consistent with the measurements but the magnitudes of both have been decreased by a factor of 0.75 to be consistent with the measurements. The ratio of the maximum cross section at about 1600 eV is about 1.03 for the present results to the 2007 FAC values and the ratio at a lower energy of 1000 eV is about 1.10. ### The boron isoelectronic sequence: Mg VIII Hahn et al. (2010) reports measurements of the ionization cross sections of Mg viii performed at the heavy ion storage ring TSR at the Max-Planck-Institut fur Kernphysik. They compare their measurements of the FAC cross sections to Dere (2007) and find that, near the peak of the cross section, their measurements are below the FAC calculations but within the error bars. At energies just above threshold near 300 eV, their measurements are above the FAC values and at the very lowest energies the differences exceed their error bars. While the earlier FAC calculations were not too far from the new measurements, it is worthwhile to perform a fit to the Hahn et al. (2010) measurements. The FAC calculations of Dere (2007) only considered direct ionization from the 2s and 2p levels and no EA contributions. The BTI scaled experimental cross sections have been fit in the manner described in SS 2.2.2 with particular attention paid to the measurements just above threshold. The Mg viii measurements of Hahn et al. (2010), the FAC cross sections of Dere (2007) and the current fits to the experimental data are presented in Fig. 4. As the authors suggest, "the apparent structure in the experimental results near 600, 1400 and 1600 eV are due to run to run variations in the Figure 2: Ionization cross sections of O v. The label _Fogle_ refers to Fogle et al. (2008), _2007_ refers to Dere (2007), and _present_ to the results reported here. background levels." At peak cross sections near about 700 eV the current values are reduced by a factor of about 0.92 from the FAC values of Dere (2007). However, at an energy of about 300 eV the current values are about a factor of 1.3 above the FAC cross sections. ### The fluorine isoelectronic sequence: Fe XVIII The ion Fe xviii provides a number of strong lines between 14 and 18 A. These make important contributions to observations by Chandra and other missions that observe at X-ray wavelengths. Hahn et al. (2013) reported measurements of the cross sections of Fe xiv, Fe xvii and Fe xviii at the heavy ion storage ring TSR, and Fe xviii is discussed here. The BTI scaled experimental cross sections of Hahn et al. (2013) have been fit in the manner described in SS 2.1 with particular attention paid to the measurements just above threshold. A comparison of the measurements of Hahn et al. (2013), Dere (2007) and the present results are shown in Fig. 5. Hahn et al. (2013) suggest that an EA contribution to the cross section could arrive from excitation of an inner shell 2s electron but there does not seem to be any evidence of this in the measurements. 
Neither do the measurements of the ionization cross sections of Si vi, also in the fluorine sequence, by Thompson & Gregory (1994) indicate any significant EA contributions. The FAC calculations of Dere (2007) only considered direct ionization from the 2s and 2p levels and no EA contributions. At energies of about 2000 eV, somewhat above threshold, the present cross section is about a factor of 1.13 greater than the 2007 values, but the ratio becomes unity at 3200 eV and 0.94 at about 5000 eV. ### The neon isoelectronic sequence: Fe XVII Fe xvii is a strong contributor to the X-ray spectrum in the wavelength range 10 to 18 A and at extreme ultraviolet (EUV) wavelengths between 200 and 300 A. A number of the EUV lines have been observed in solar spectra between 180 and 400 A (Warren et al., 2008; Del Zanna & Ishikawa, 2009). Figure 3: Ionization cross sections of S xiii. The label _Hahn_ refers to Hahn et al. (2012a), _2007_ refers to Dere (2007), and _present_ to the results reported here. Measurements of the ionization cross section of Fe xvii are described by Hahn et al. (2013). A comparison of these measurements for Fe xvii, those of Dere (2007) and the present results are shown in Fig. 6. The FAC calculations of Dere (2007) only took into account direct ionization from the 2p shell. The new fits to the measured cross sections are able to reproduce the measurements quite well. ### The magnesium isoelectronic sequence: Fe xv Fe xv produces a strong line at 284 A in the EUV spectrum. In addition, it also provides density-sensitive line pair ratios such as \(\lambda\)234 to \(\lambda\)244 (Cowan & Widing, 1973). The ionization cross-section of Fe xv consists of both DI and EA components and the Dere (2007) cross sections are based on calculations with FAC. The EA components include transitions from the n=2 levels to the n=3 and n=4 levels and the various transitions are grouped into two EA components with excitation thresholds at 793 and 987 eV. These compare to the ionization potential of 457 eV. Laboratory measurements of the ionization cross sections of Fe xv are reported by Bernhardt et al. (2014) and show both DI and EA components although the measured EA components appear more complex than the two component description. The CHIANTI ionization cross sections have been revised by fitting the DI component to the Bernhardt et al. (2014) cross sections below 793 eV. The EA component to the n=3 levels has been reduced by a factor of 0.92. The measurements of Bernhardt et al. (2014) together with the Dere (2007) values and the present results are shown in Fig. 7. At energies just below the first EA threshold, the ratio of the present cross section to that of Dere (2007) is about 1.22. At energies where the EA component is present, the ratio is about 0.99 at 1000 eV and 0.95 at 2000 eV. Bernhardt et al. (2014) also provide parameters to reproduce their calculations of the ionization rate coefficient. The rates of Bernhardt et al. (2014) are very close to those of the revised parameters. The ratio of the present rates to those of Bernhardt et al. (2014) is 0.83 at 10\({}^{5}\) K, 0.91 at 2.2 \(\times\) 10\({}^{6}\) K (the temperature of maximum ionization of Fe xv) and 0.93 at 3 \(\times\) 10\({}^{6}\) K. Figure 4: Ionization cross sections of Mg viii. The label _Hahn_ refers to Hahn et al. (2010), _2007_ refers to Dere (2007), and _present_ to the results reported here. 
### The aluminum isoelectronic sequence: Fe XIV Fe xiv produces the strong "green" line at 5304 A that is often observed with solar coronagraphs. It is also responsible for a number of strong lines at EUV wavelengths. Intensity ratios of some pairs of line are also sensitive indicators of electron densities between 10\({}^{9}\) and 10\({}^{11}\) cm\({}^{-3}\)(Dere et al., 1979). As with Fe xv, the Fe xiv cross section includes both DI and EA components, and both are prominent in both the measurements of Hahn et al. (2013) and the FAC cross sections of Dere (2007). A comparison of the measurements of Hahn et al. (2013), Dere (2007) and the present results are shown in Fig. 8. The present values consist of a fit to the DI measured cross sections below 783 eV where the EA components become apparent. The FAC cross sections of Dere (2007) included excitations of a 2s electron to the n=3 and n=4 levels. The new EA cross sections consist of adjustments to the EA threshold for the n=3 excitations and the magnitudes for both components are multiplied by a factor of 0.8. ### The silicon isoelectronic sequence: Fe XIII Fe xiii produces a number of strong spectral lines at EUV wavelengths. Some of these line pairs are useful diagnostics of electron densities between 3 \(\times\) 10\({}^{8}\) and 3 \(\times\) 10\({}^{10}\) cm\({}^{-3}\)(Flower & Nussbaumer, 1974). A comparison of the measurements of Hahn et al. (2011a, 2012b), Dere (2007) and the present results are shown in Fig. 9. As with Fe xiv, the cross section consists of strong DI and EA components. The DI fit parameters are arrived at by using the measurements below the first EA threshold at 775 eV. The EA components of the FAC cross sections included excitations from the n=2 levels to the n = 3, 4, and 5 levels. A better fit was obtained for the present cross section if the excitation to the n=5 levels was removed, the excitation to the n=4 levels reduced by a factor of 0.8 and the transition to the n=3 levels unchanged. At 400 eV, just above the threshold, the ratio of the present cross section to that of the 2007 cross section is about 2.0, at 500 eV the ratio is about 1.17, at 600 eV the ratio is about 1.05, and at 700 eV the ratio is about 0.99. Above 780 eV where the EA cross sections occur, the ratio of the present cross Figure 5: Ionization cross sections of Fe xviii. The label _Hahn_ refers to Hahn et al. (2013), _2007_ refers to Dere (2007), and _present_ to the results reported here. sections to the 2007 values is about 0.8. These ratios are important for understanding the changes in the ionization rate coefficients discussed in SS 2.14. ### The phosphorus isoelectronic sequence: Fe XII Fe xii also produces a number of strong lines at EUV wavelengths. Some are strong enough that they comprise the primary component of narrow band EUV images such as the Extreme-ultraviolet Imaging Telescope (EIT) on SOHO and the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO). A number of Fe xii line pairs form ratios that are reliable diagnostics of electron densities between \(10^{8}\) and \(10^{11}\) cm\({}^{-3}\). They have been observed with the XUV Slitless Spectrograph on Skylab (Dere et al., 1979) and with the Extreme-ultraviolet Imaging Spectrometer (EIS) on the Hinode satellite (Dere et al., 2007; Warren & Brooks, 2009). A comparison of the measurements of Hahn et al. (2011b), Dere (2007) and the present results are shown in Fig. 10. 
The DI FAC cross sections of Dere (2007) took into account ionization from the n=2 levels and the 3s and 3p levels. These have been replaced with a fit to the Hahn et al. (2011b) measurements below the first EA threshold at 753 eV. The previous EA cross sections considered excitations from the n=2 levels to the n=3, 4, and 5 levels. The comparison in Fig. 10 indicates that measurements are best matched if the excitations to the n=4 and 5 levels are removed. Otherwise, it has not been necessary to change the energy of the n=3 threshold or magnitude. Just above threshold the ratio of the present cross section to that of Dere (2007) is about 2.5; at 400 eV, the ratio is about 1.4; and at 600 eV, it is about 1.08. Above 1000 eV where the EA cross sections occur, the ratio is about 0.8. Again, these ratios are important for understanding the changes in the ionization rate coefficients discussed in SS 2.14. ### The sulphur isoelectronic sequence: Fe XI The ion Fe xi contributes a number of strong lines to the EUV spectrum. It also provides density sensitive line ratios that are useful in the density range between \(10^{9}\) and \(10^{11}\) cm\({}^{-3}\)(Dere, 2020). A comparison of the measurements of Figure 6: Ionization cross sections of Fe xvii. The label _Hahn_ refers to Hahn et al. (2013), _2007_ refers to Dere (2007), and _present_ to the results reported here. Hahn et al. (2012c), Dere (2007) and the present results are shown in Fig. 11. The FAC EA cross sections included excitations from the n=2 to the n=4 and 5 levels. The summed rates have been collected to form three EA cross sections. The threshold energies have been adjusted to reproduce the measured cross sections. For the 2007 EA cross sections, the lowest energy threshold was 882 eV and this has been moved to 721 eV. The other two thresholds have been adjusted by a similar amount. The magnitudes of the three original EA cross sections have been scaled by factors of 1.3, 4.0 and 2.0. The ratio of the present cross sections to the 2007 values have been evaluated. At 300 eV, just above the threshold, the ratio is about 1.6, at 500 eV the ratio is 1.01 at 700 eV, just before the EA cross sections occur, the ratio is about 0.96. At energies above 1000 eV, the ratio is about 0.97. ### The chlorine isoelectronic sequence: Fe X Fe x produces a number of lines at EUV wavelengths as well as the coronal "red" line at optical wavelengths. Some EUV line pairs form ratios that are sensitive to electron densities between 3 \(\times 10^{8}\) and \(10^{11}\) cm\({}^{-3}\). The ionization cross sections of Fe x have been measured by Hahn et al. (2012c) and a comparison of these measurements, the FAC cross sections of Dere (2007) and the present results are shown in Fig. 12. From this figure, it is clear that both the DI and EA components of the original FAC calculations need to be modified to accurately reproduce the measured values. The original FAC calculations took into account DI ionization from the n=2 and n=3 levels. The EA components included excitation from the n=2 levels to the n=4 and n=5 levels and excitations from the n=3 to the n=4, 5, and 6 levels. The latter cross sections for excitations from the n=3 levels occur just above the ionization threshold but are not very apparent in the total cross section. For the DI component, a fit to the measurements has been obtained to the measurements at energies below 650 eV. 
For the EA component, only the cross sections from the n=2 levels are of importance and it has been necessary to rescale the magnitudes and shift the energy thresholds. The energy threshold for the lowest energy EA component has been moved from 862 eV to 720 eV and the second component by a similar amount. The magnitudes of the two EA components have been scaled by factors of 1.8 and Figure 7: Ionization cross sections of Fe xv. The label _Bernhardt_ refers to Bernhardt et al. (2014), _2007_ refers to Dere (2007), and _present_ to the results reported here. 2.4. The ratio of the present to the 2007 cross section is about 1.25 at an energy of 300 eV, just above the DI threshold. At 500 eV the ratio is 1.06 and at energies above 1000 eV, the ratio is about 1.02. ### The argon isoelectronic sequence: Fe IX Fe ix produces a very strong line at 171 A that is produced at about 8 \(\times\) 10\({}^{5}\) K and shows characteristics of the solar upper transition region and low corona. It comprises the primary component of narrow band EUV images such as the EIT on SOHO and the AIA on SDO. A comparison of the measurements of Hahn et al. (2016), Dere (2007) and the present results are shown in Fig. 13. The DI cross sections of Dere (2007) include ionization from the n=3 levels. These have been replaced by a fit to the measurements at energies below about 700 eV and the result is that the DI cross section is reduced by about 40 per cent. The EA components of the cross section in the original FAC calculations included excitations from the n=2 and n=3 levels and these cross sections were summed into three components with the lowest energy component at 842 eV. Hahn et al. (2016) found the dominant EA component to have a threshold at about 650 eV and attributed it to excitations from the n=2 level to the 3d level. This component was not included in the original FAC calculations. This component is now modeled by a fit to the measurements above 700 eV with a single EA component. At an energy of 250 eV, just above the ionization potential of 233 eV, the ratio of the present cross section to the previous FAC cross section is about 1.5. At 300 eV, the ratio is about 1.03, at 400 eV the ratio is about 1.04, and at 600 eV, near the peak of the DI component, the ratio is about 0.72. In the region above 700 eV, the ratio is about 0.85. ### The potassium isoelectronic sequence: Fe VIII Fe viii produces a number of lines at EUV wavelengths. These are generally not particularly strong but do provide information on plasmas around 5 \(\times\) 10\({}^{5}\) K. Intensity ratios of several lines provide density diagnostics between 10\({}^{6}\) and 10\({}^{8}\) cm\({}^{-3}\). A comparison of the measurements of Hahn et al. (2015), Dere (2007) and the present results are Figure 8: Ionization cross sections of Fe xiv. The label _Hahn_ refers to Hahn et al. (2013), _2007_ refers to Dere (2007), and _present_ to the results reported here. shown in Fig. 14. Procedures were followed during the experimental measurements to ensure that as few ions in metastable states were present in the beam. The authors state that ions in metastable states comprised only about 6% of the beam. The original FAC calculations included DI from the 3p and 3d shells and EA excitations from the 3p\({}^{6}\)3d to the 3p\({}^{5}\)3d nl levels with n=4,5,6. The ionization potential is about 156 eV and the excitations to the n=4,5 and 6 autoionizing levels occur just above the ionization potential. 
The various EA excitations were grouped into five EA cross sections with energies between 156 and 200 eV. These are evident as step-like structures in the calculated ionization cross section just above threshold. With the measurements of Hahn et al. (2015) now available, fits to the measurements were used to represent the cross sections. These cross sections are shown in Fig. 14. ### Ionization Rate Coefficients The ionization rate coefficient, \(R(T)\), is obtained by integrating the cross-section over a Maxwell-Boltzmann distribution, and is given by \[R(T)=\int_{v_{lP}}^{\infty}v\,\sigma(E)\,f(v,T)\,dv \tag{3}\] where \(v\) is the velocity of the electron, \(E\) (\(=\frac{1}{2}\,m\,v^{2}\)) is the energy of the electron, and \(f\) is the Maxwell velocity distribution at a temperature \(T\). The integration is carried out from the velocity of the electron with an energy equal to the ionization potential IP. The Fe xiv ionization rate coefficients for the present rates and those of Dere (2007) are shown in Fig. 15 together with the ratio of the rate coefficients. The displayed temperature range approximately corresponds to where the Fe xiv ionization fraction is within 0.01 of its peak value. It can be seen that the revised rate coefficients do not show a significant difference from the previous values even though there are considerable differences in the ionization cross sections (see Fig. 8). The largest difference is in the EA components that lie well above the ionization potential. In Figure 9: Ionization cross sections of Fe xiii. The label _Hahn_ refers to Hahn et al. (2011a, 2012b), _2007_ refers to Dere (2007), and _present_ to the results reported here. order to understand this lack of change, the Maxwell-Boltzmann distribution of electron velocities for a temperature \(1.91\times 10^{6}\) K has been plotted on Fig. 16. The ratios of the new rate coefficients to those previously in CHIANTI for the ions in the present work are shown in Table 1. In each case, the rate coefficients are calculated at the temperature of maximum ionization. \begin{table} \begin{tabular}{l l l} \hline Ion & \(T\) (\(10^{6}\) K) & Ratio \\ \hline S xiii & 2.7 & 1.11 \\ O v & 0.24 & 0.96 \\ Mg viii & 0.79 & 1.09 \\ Fe xviii & 7.9 & 1.09 \\ Fe xvii & 5.6 & 1.06 \\ Fe xv & 2.2 & 0.99 \\ Fe xiv & 1.9 & 0.97 \\ Fe xiii & 1.7 & 1.09 \\ Fe xii & 1.6 & 1.17 \\ Fe xi & 1.3 & 1.09 \\ Fe x & 1.1 & 1.14 \\ Fe ix & 0.79 & 0.72 \\ Fe viii & 0.56 & 1.00 \\ \hline \end{tabular} \end{table} Table 1: The ratio of the ionization rate coefficients reported here to those of Dere (2007) Figure 10: Ionization cross sections of Fe xii. The label _Hahn_ refers to Hahn et al. (2011b), _2007_ refers to Dere (2007), and _present_ to the results reported here. ### Recombination rate coefficients for the phosphorus isoelectronic sequence Recently Bleda et al. (2022) have reported radiative and dielectronic recombination coefficients for ions in the phosphorus sequence for ions with nuclear charge Z from 16 through 30 plus a few ions with higher nuclear charges. These rates refer to the recombination of S-like ions to the P-like ions. In the case of the recombination of Fe xii to Fe xi it was possible for the authors to compare their calculations with the measurements of Novotny et al. (2012). As the authors state, "Given the complexity of the problem,..., the overall agreement,..., is quite good." Bleda et al. 
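For readers who want to reproduce this kind of calculation, the following Python sketch (not CHIANTI software; the step-function cross section is a toy assumption) evaluates the Maxwellian integral of Eq. (3) numerically for a user-supplied cross section.

```python
import numpy as np

KB = 1.380649e-23      # Boltzmann constant, J/K
ME = 9.1093837e-31     # electron mass, kg
EV = 1.602176634e-19   # 1 eV in J

def rate_coefficient(T, ip_ev, sigma_of_E, n_points=5000):
    """Ionization rate coefficient of Eq. (3): integrate v * sigma(E) * f(v, T)
    from the threshold speed upward, with f the Maxwell speed distribution
    and E = m v^2 / 2.  sigma_of_E returns the cross section in cm^2 for E in eV.
    The result is in cm^3 s^-1."""
    v_ip = np.sqrt(2.0 * ip_ev * EV / ME)                 # speed at the ionization potential
    v_max = v_ip + 12.0 * np.sqrt(KB * T / ME)            # crude upper integration cutoff
    v = np.linspace(v_ip, v_max, n_points)
    E_ev = 0.5 * ME * v**2 / EV
    f_v = 4.0 * np.pi * v**2 * (ME / (2.0 * np.pi * KB * T))**1.5 \
          * np.exp(-ME * v**2 / (2.0 * KB * T))
    integrand = (v * 1.0e2) * sigma_of_E(E_ev) * f_v      # v converted to cm/s
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v)))

if __name__ == "__main__":
    # Toy cross section: zero below a 361 eV threshold, then a constant 1e-19 cm^2.
    ip = 361.0
    sigma = lambda E_ev: np.where(E_ev >= ip, 1.0e-19, 0.0)
    for T in (1.0e6, 2.0e6, 4.0e6):
        print(f"T = {T:.1e} K   R = {rate_coefficient(T, ip, sigma):.3e} cm^3 s^-1")
```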
(2022) provide fit parameters for their coefficients that are reproduced in the files released in the current CHIANTI release. In general, the changes from the rates in the version 10.0 CHIANTI database are often not that large. In the case of S ii, the version 10.0 coefficients were taken from the work of Mazzotta et al. (1998). The agreement between the two is very good at high temperature but below \(2\times 10^{4}\) K the rates of Bleda et al. (2022) include the significant lower temperature processes. However, the differences are mostly of importance for photoionized plasmas. For Ca vi the new DR rates of Bleda et al. (2022) are about 50% lower than the rates of Shull & van Steenberg (1982) that were included in CHIANTI version 10.0. For Ni xiv Bleda et al. (2022) show considerable difference between their DR rates and those of Mazzotta et al. (1998). Version 10.0 of CHIANTI still included the DR rates of Shull & van Steenberg (1982) and these are quite close to the new rates of Bleda et al. (2022). ### A revised ionization balance With the revised set of ionization and recombination rate coefficients, an updated ionization balance has been calculated and will be distributed as the default ionization equilibrium for CHIANTI 10.1. Figure 17 compares the new ionization fraction curves for the iron ions Fe x-xiv with those from CHIANTI 10. Comparing all of the ionization fraction curves between the new and previous tabulations, there are 11 ions with log \(T_{\rm max}\) values shifted by 0.05 dex Figure 11: Ionization cross sections of Fe xi. The label _Hahn_ refers to Hahn et al. (2012c), _2007_ refers to Dere (2007), and _2022_ to the results reported here. and seven ions with peak ionization fractions that are \(>10\%\) different compared to the previous ionization fraction curves. The latter include V viii,ix, Sc vi,vii, Ca vi and K v. The largest difference is a 30% decrease for S i. ## 3 New and updated atomic models ### Hydrogen isoelectronic sequence: C vi An error in the proton rate coefficients used for C vi was identified by C. M. Gunasekera and has been corrected. The rate coefficients for the transitions between the three levels of the \(n=2\) configuration were computed by Zygelman & Dalgarno (1987) and added to CHIANTI Version 6 (Dere et al., 2009). Zygelman & Dalgarno (1987) tabulated their rate coefficients with a scaling of the atomic number, \(Z\), to the power of three. In removing the scaling, the numbers were inadvertently multiplied by \(Z^{3}\) instead of divided, leading to rate coefficients too large by a factor \(4.7\times 10^{4}\). The proton rates for C vi provide a depopulation mechanism for the \(2s\ ^{2}S_{1/2}\) metastable level in addition to the two-photon decay mechanism. The erroneous rates resulted in proton excitation being the dominant depopulation mechanism. With the correct rates, the two-photon process becomes the dominant process. As a consequence, the two-photon continuum for C vi is now more than a factor 1000 stronger than previously for a temperature of 1 MK and an electron number density of \(10^{10}\) cm\({}^{-3}\). A second effect is that the erroneous proton rates resulted in the populations of the \(2p\ ^{2}P_{1/2,3/2}\) levels being enhanced by 44% and 10% respectively, for a temperature of 1 MK and an electron number density of \(10^{10}\) cm\({}^{-3}\). 
The Ly-\(\alpha\) line for C vi, which is a self-blend of the transitions from the \({}^{2}P\) levels to \(1s\ ^{2}S_{1/2}\), is consequently 18% weaker in the new C vi model compared to CHIANTI 10. ### Boron isoelectronic sequence: O iv Figure 12: Ionization cross sections of Fe x. The label _Hahn_ refers to Hahn et al. (2012c), _2007_ refers to Dere (2007), and _present_ to the results reported here. Errors were found in the energy levels for this ion such that experimental energies had been inadvertently assigned to the wrong levels. In particular, the energies for levels 27, 28 and 51 had been assigned to levels 29, 30 and 52, respectively, and vice versa. The energies have been updated and a new set of wavelengths have been derived from the updated energies. No other changes were made to the O iv model. ### Carbon isoelectronic sequence #### 3.3.1 Na vi The CHIANTI 10 energies were obtained from NIST which has the data compiled by Sansonetti (2008). As the \(2s2p^{3}\)\({}^{5}S_{2}\) energy was obtained by isoelectronic fitting performed by Edlen (1985), there is an uncertainty that propagates to all quintet levels in the ion, even though many give rise to experimental wavelengths. For the CHIANTI 10 model, none of the quintet levels were assigned an experimental energy for this reason. For the present release, the \(2s2p^{3}\)\({}^{5}S_{2}\) energy has been assigned, allowing a number of additional assignments for other quintet levels. Feldman et al. (1997) identified the \({}^{3}P_{2}\)-\({}^{5}S_{2}\) transition in solar spectra at 988.71 A, which implies a \({}^{5}S_{2}\) energy of \(102\,998\pm 3.0\) cm\({}^{-1}\). Sansonetti (2008) rejected this value because Feldman et al. (1997) did not also identify the \({}^{3}P_{1}\)-\({}^{5}S_{2}\) transition, which is the stronger of the two transitions. However, this line is predicted to be at 977.48 A (or 977.37 A if using the Edlen energy), placing it close to C iii 977.02 A. This is one of the strongest lines in the solar spectrum making it difficult to identify the Na vi line. Given that the \({}^{5}S_{2}\) energy derived from the Feldman et al. (1997) wavelength is only 12 cm\({}^{-1}\) from the Edlen estimated value we add it to CHIANTI since it accounts for a known solar emission line. As a consequence, all of the quintet level energies marked with "+x" in the NIST table were adjusted and added to CHIANTI. There are seven levels in all: \(3s\)\({}^{5}P_{J}\) (indices 35-37), \(3p\)\({}^{5}P_{J}\) (indices 59-61) and \(4s\)\({}^{5}P_{2}\) (index 189). The theoretical values of \(4s\)\({}^{5}P_{0,1}\) (indices 187 and 188) were adjusted based on the experimental energy of \({}^{5}P_{2}\). Figure 13: Ionization cross sections of Fe ix. The label _Hahn_ refers to Hahn et al. (2016), _2007_ refers to Dere (2007), and _present_ to the results reported here. Finally, the experimental energies for the \(2s^{2}2p^{2}\)\({}^{1}D_{2}\) and \({}^{1}S_{0}\) levels were updated to the values from Young et al. (2011). #### 3.3.2 Ca Xv Updated atomic models for the entire carbon isoelectronic sequence were added to CHIANTI 10. However, it was found that the Ca xv effective collision strengths were omitted from the update. This has been rectified for CHIANTI 10.1. ### Nitrogen isoelectronic sequence Mao et al. (2020) presented a large-scale scattering calculation using the \(R\)-matrix intermediate coupling frame transformation (ICFT) method, to calculate effective collision strengths of N-like ions from O ii to Zn xxiv. 
725 fine-structure levels were included in both the configuration interaction target and close-coupling collision expansion, for states up to \(n=5\). We are providing new models based on these calculations for the O ii, Si viii, Ar xii and Ca xiv ions. As shown in Mao et al. (2020), the new rates for these ions are significantly different from those previously available in CHIANTI, and they affect some line ratio diagnostics. For the experimental energies we have used a combination of NIST values and those that were assessed in the previous CHIANTI versions. Only bound states have been retained for O II and Si VIII, as the model of the autoionizing states requires significant additions and these ions do not produce strong satellite lines. For O ii, we have replaced the effective collision strengths for the first five states of the ground configuration with those from Tayal (2007), up to \(10^{5}\) K, as we consider them to be more accurate at low temperatures. We note that the values of the two calculations at \(10^{5}\) K differ by only a few percent, as shown in Mao et al. (2020). Figure 14: Ionization cross sections of Fe viii. The label _Hahn_ refers to Hahn et al. (2015), _2007_ refers to Dere (2007), and _present_ to the results reported here. The structure calculations carried out with autostructure (AS) for the scattering work are generally not as accurate as larger-scale ones, and generally the radiative data for the lower states can be improved, either by using experimental energies or by replacing them with more accurate values. We have compared the \(A\)-values with those from other calculations and found generally small variations of the order of 10-20% for the important transitions within the lowest states. For O ii, Si viii, and Ar xii, we have replaced the \(A\)-values of the transitions within the 2s\({}^{2}\) 2p\({}^{3}\), 2s 2p\({}^{4}\) configurations with the non-relativistic multiconfiguration Hartree-Fock (MCHF) calculations of Tachiev & Froese Fischer (2002). For Ca xiv, we have replaced the \(A\)-values of the transitions within the 2s\({}^{2}\) 2p\({}^{3}\), 2s 2p\({}^{4}\) configurations with the many-body perturbation theory (MBPT) calculations by Wang et al. (2016). The AS radiative data only differed by typically 10-20%, compared to the MCHF or MBPT values. ### Oxygen isoelectronic sequence Mao et al. (2021) performed a large-scale scattering calculation using the \(R\)-matrix ICFT method, to calculate effective collision strengths of O-like ions from Ne III to Zn XXIII over a wide range of temperatures. The targets included 630 fine-structure levels up to \(nl=5d\) for each ion. As discussed in Mao et al. (2021), significant differences with the data that are present in CHIANTI were found for a few ions important for astrophysical applications. They include Si vii, S ix, Ar xi, and Ca xiii, for which CHIANTI had limited models and effective collision strengths calculated with the distorted wave approximation. Lines in the EUV, UV and near infrared have significantly different predicted intensities with the new atomic data. For this reason, we have included new models for these ions. Work is in progress to update the models for the other ions, which will be made available in a future CHIANTI version.
For the experimental energies we have used a combination of NIST values and those that were assessed in the previous CHIANTI versions. Only bound states have been retained, as the model of the autoionizing states requires significant additions. For Si vii and S ix we have replaced the A-values of the transitions within the \(n=2\) complex (ground configuration 2s\({}^{2}\) 2p\({}^{4}\), 2s 2p\({}^{5}\) and 2p\({}^{6}\), ten lowest states) with those calculated with the Multiconfiguration Hartree-Fock (MCHF) Figure 16: Ionization cross sections of Fe xiv. The label _Hahn_ refers to Hahn et al. (2013), _2007_ refers to Dere (2007), _present_ to the results reported here. Also plotted is the Maxwell-Boltzmann (MB) velocity distribution for a temperature of 1.91 MK and \(\mathbf{m}<\mathbf{v}>^{2}/\mathbf{2}\) for the same temperature is displayed as a vertical line. codes by Tachiev & Froese Fischer (2002), noting that the Mao et al. (2021) A-values differed by typically 10-20% only. For Ar xi and Ca xiii we have replaced the AS values with those calculated by Song et al. (2021) with the Multiconfiguration Dirac-Hartree-Fock (MCDHF) GRASP2K codes, described in Jonsson et al. (2013). Differences with the AS values are about 10%. ### Silicon isoelectronic sequence: Fe XIII Zhang et al. (2021) carried out a large-scale MCDHF atomic structure calculation for Fe xiii, providing theoretical energies that are generally close to the experimental energies, typically within a few hundreds of cm\({}^{-1}\). Zhang et al. (2021) reviewed previous studies and confirm the new identifications suggested by Del Zanna (2012) (based on the Del Zanna (2011) calculations) for the \(n=4\to n=3\) transitions, with one exception. We adopt the Zhang et al. (2021) identification. The \(n=4\to n=3\) transitions fall in the soft X-ray region of 50 A - 150 A. A few weak EUV transitions from the 3s 3p\({}^{2}\) 3d levels were tentatively identified by Del Zanna (2011) and assigned experimental wavelengths, for their potential importance in analysing Hinode EIS observations. However, the analysis carried out by Zhang et al. (2021) provides evidence for alternative identifications. Some of them have been adopted for the present version, while other tentative identifications have not been included. However, we are providing as 'best' energies those from Zhang et al., hence the expected wavelengths should be close to the experimental ones. ### Argon isoelectronic sequence: Fe IX Ryabtsev et al. (2022) provided a new set of experimental energies for the \(3p^{4}3d^{2}\) and \(3p^{5}4f\) configurations and these have been added to CHIANTI with the modifications described below. The CHIANTI 10 model had experimental energies for all 12 of the \(3p^{5}4f\) fine-structure levels from O'Dwyer et al. (2012), and these have been updated with the values from Ryabtsev et al. (2022). Agreement is good for nine of the Figure 17: The revised ionization balance for Fe ions for stages X through XIV. The full line represents the present calculation and the dashed line the version 10.0 calculation levels, but larger differences of between 160 and 560 cm\({}^{-1}\) are found for the remaining levels due to differences in line identifications--see Ryabtsev et al. (2022) for more details. Comparisons of the \(A\)-values listed by Ryabtsev et al. (2022) with those in the CHIANTI model (from Del Zanna et al., 2014) showed that it was necessary to swap the Ryabtsev et al. 
(2022) labels for the \({}^{1}F_{3}\) and \({}^{3}F_{3}\) levels, and the \({}^{1}G_{4}\) and \({}^{3}F_{4}\) levels to get agreement. Both sets of authors found that these levels are strongly mixed, hence the \(LSJ\) labels are not accurate descriptors for them. The Del Zanna et al. (2014) and Ryabtsev et al. (2022)\(A\)-values are in very good agreement after this modification. The \(3p^{4}3d^{2}\) configuration has 111 fine-structure levels, but only five previously had experimental energies. Sixteen new experimental energies have been taken from Ryabtsev et al. (2022), and the five existing energies have been updated, with only small changes. Ryabtsev et al. (2022) provided experimental energies for nine additional levels, but three of these were not used as they were listed as 'questionable.' Six further energies were not used either because the identifications were not consistent with predictions from the CHIANTI atomic model, or because the identifications were uncertain due to line blending. Full details of how these decisions were made are given in Young (2023). Young & Landi (2009) performed a study of Fe ix lines observed by the Extreme Ultraviolet Imaging Spectrometer (EIS Culhane et al., 2007), and they identified seven lines that could be attributed to Fe ix but for which it was not possible to assign atomic transitions. Two of these lines were independently identified as Fe ix lines by Del Zanna (2009), but also without atomic transitions. Del Zanna et al. (2014) provided transition information for two lines, and Ryabtsev et al. (2022) identified six of them. The two works agree for the line at 192.63 A, but differ for the line at 194.80 A. The Ryabtsev et al. (2022) identification is supported by multiple lines from the same upper level that are measured in laboratory spectra. The CHIANTI wavelengths in the present version have been derived from the new energy levels. ### Calcium isoelectronic sequence: Fe VII Experimental energies have been updated using the compilation of Kramida et al. (2022) that provided energies for previously unassigned levels and updated energies of known levels. A number of levels are highly mixed, particularly for the \(3p^{5}3d^{3}\) configuration and it was not always possible to match the Kramida et al. (2022) level with a level in the existing CHIANTI model. In these cases the Kramida et al. (2022) level was not used. Details on which level energies were not chosen for the CHIANTI model and how the decisions were made are described in Young (2023). In summary, the new model has 17 levels with newly-assigned experimental energies, and 10 levels for which the updated energy is at least 100 cm\({}^{-1}\) different from the previous energy. The wavelengths in the present version have been updated with the new energies. However, as pointed out in the previous CHIANTI release paper, some inconsistencies remain between the predicted intensities of the strongest lines and those observed by Hinode EIS. Del Zanna (2009) used an alternative set of atomic data and identifications to find relatively good agreement with observations. As different calculations provide very different energies and rates (even branching ratios), the solution of this complex problem will require further calculations and further assessments. ## 4 Elemental Abundances The CHIANTI software uses solar photospheric elemental abundances as the default when computing synthetic spectra. 
For CHIANTI versions 9 and 10, the photospheric abundances were stored in the file sun_photospheric_2015_scott.abund. This contains the abundances of Asplund et al. (2009), supplemented with values from Scott et al. (2015, 20) and Grevesse et al. (2015) for some elements. For CHIANTI 10.1 the default abundance file--named sun_photospheric_2021_asplund.abund--contains data from the compilation of Asplund et al. (2021). These authors provide a comprehensive and detailed review of the photospheric abundance of all elements up to uranium, complementing recent re-evaluations and new measurements in the literature for the abundances of many elements with original results. In both cases, improved values were obtained mostly from the coupling of photospheric observed spectra with 3D, NLTE hydrodynamical simulations of the outer convective zone and atmospheric layers of the Sun, and improved atomic data and collisional rates for many species. Additional data were taken from laboratory analysis of meteorites, return samples from the Genesis mission, helioseismological data, and sunspot observations. In comparison to the previous default abundance set, the Asplund et al. (2021) abundances show differences of 10% or more for four elements: lithium (\(-19\%\)), neon (\(+35\%\)), chlorine (\(-19\%\)) and titanium (\(+10\%\)). The most significant for the spectral modeling of ionized plasmas is the neon abundance. The new dataset provides a Ne/O relative abundance ratio of 0.23, larger than the previous value of 0.17. The new value is in line with recent spectroscopic re-evaluations of the Ne/O abundance ratio in the transition region (Young, 2018) and in the solar corona during solar minimum (Landi & Testa, 2015). The Ne/O ratio, and the absolute abundances of both Ne and O are very important for solving the current discrepancy between helioseismology determinations of the structure of the solar interior with model predictions, that need accurate absolute abundance measurements to calculate solar interior opacities (Christensen-Dalsgaard, 2021). We finally note that the coronal abundance data sets provided in earlier CHIANTI versions (e.g., sun_coronal_1992_feldman.abund and sun_coronal_2012_schmelz.abund) were largely based on solar active region observations where the relative abundances of low vs. high first ionization potential (FIP) elements showed an increase (the so-called FIP bias), compared to their (relative) photospheric values (see, e.g. the reviews by Laming, 2015; Del Zanna & Mason, 2018). The FIP bias was found to have averaged values around 3-4. Those coronal abundance data sets were obtained by taking into account those averaged values and applying empirical corrections (the FIP bias) to the older photospheric abundance data sets. The previous coronal abundance datasets are therefore not consistent with the new photospheric abundance file discussed here. We have therefore introduced a new coronal abundance file sun_coronal_2021_chianti.abund that is derived from the new photospheric abundance file by multiplying abundances for low-FIP elements (FIP \(\leq\) 10 eV) by a factor \(10^{0.5}\). This file is provided as a representative coronal abundance file that may be useful for investigating how abundances affect synthetic spectra, for example. Users should be aware that the magnitude of the FIP bias has been found to vary amongst different solar structures, and also with temperature. 
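As a minimal sketch of the simple two-level FIP-bias prescription described above (this is not the CHIANTI software, and the abundance and FIP values below are rounded, illustrative numbers), a representative coronal abundance set can be derived from a photospheric one as follows.

```python
# Illustrative first ionization potentials (eV) and photospheric abundances
# (log10 scale with H = 12); the numbers are approximate and for demonstration only.
FIP_EV = {"O": 13.6, "Ne": 21.6, "Fe": 7.9, "Mg": 7.6, "Si": 8.2, "S": 10.4}
PHOT_LOG_ABUND = {"O": 8.69, "Ne": 8.06, "Fe": 7.46, "Mg": 7.55, "Si": 7.51, "S": 7.12}

def coronal_log_abundance(element, fip_threshold_ev=10.0, log_bias=0.5):
    """Apply the FIP bias: multiply the linear abundance of low-FIP elements
    (FIP <= 10 eV) by 10**0.5, i.e. add 0.5 to the logarithmic abundance."""
    log_a = PHOT_LOG_ABUND[element]
    if FIP_EV[element] <= fip_threshold_ev:
        log_a += log_bias
    return log_a

if __name__ == "__main__":
    for el in sorted(PHOT_LOG_ABUND):
        print(f"{el:2s}  photospheric {PHOT_LOG_ABUND[el]:.2f}  ->  coronal {coronal_log_abundance(el):.2f}")
```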
Abundance files (both photospheric and coronal) previously distributed with CHIANTI will continue to be available but are moved to the abundance/archive directory within the database. ## 5 Conclusions This article has described the most recent updates to the CHIANTI atomic database that will be distributed as version 10.1. Ionization rates for 13 ions have been updated, as well as the complete set of dielectronic recombination data for the phosphorus isoelectronic sequence. The core datasets of energy levels, radiative decay rates and electron excitation rates have been updated for 15 ions. Dere (2007) assembled the experimental measurements of ionization cross sections available at that time. These were supplemented with calculations of the ionization cross sections with the FAC package (Gu, 2002) where measurements did not exist or were unreliable. Together with recombination rates from the scientific literature, version 9.0 of the CHIANTI atomic database was assembled with a full set of rates covering all ions of all elements from H through Zn. Since that time, there have been a number of new experimental measurements of ionization cross sections, as well as new calculations of recombination rates. The new ionization measurements have been used to revise the CHIANTI parameterization of these cross sections, and the revised parameters have been inserted into the latest version of the CHIANTI atomic database (Version 10.1). In most cases, the differences between the revised rates and the previous CHIANTI rates are not large. The recombination parameters of Bleda et al. (2022) have also been inserted into the latest version of CHIANTI. In addition, a new ionization equilibrium has been calculated from these revised rates. A number of atomic models have been updated. For the nitrogen isoelectronic sequence, new models for the O ii, Si viii, Ar xii and Ca xiv ions are now based on the recent calculations of Mao et al. (2020). For the oxygen isoelectronic sequence, the calculations of Mao et al. (2021) have allowed us to improve the previously limited models for Si vii, S ix, Ar xi, and Ca xiii. All of these are now included in the latest release of the CHIANTI atomic database, version 10.1. We thank Dr. Stefan Schippers for supplying the measurements of Bleda et al. (2022) in a machine-readable format. The work of KPD and PRY has been supported by NASA grants 80NSSC21K0110 and 80NSSC21K1785. GDZ acknowledges support from STFC (UK) via the consolidated grants to the atomic astrophysics group (AAG) at DAMTP, University of Cambridge (ST/P000665/1 and ST/T000481/1). EL has been supported by NASA grants 80NSSC22K0750 and 80NSSC20K0185.
2306.03113
Generalizing the Concept of Bounded Variation
Let $[a,b]\subset\mathbb{R}$ be a non-empty, non-singleton closed interval and let $P=\{a=x_0<\cdots<x_n=b\}$ be a partition of it. Then $f:[a,b]\to\mathbb{R}$ is said to be a function of $r$-bounded variation if the expression $\overset{n}{\underset{i=1}{\sum}}|f(x_i)-f(x_{i-1})|^{r}$ is bounded for all possible partitions like $P$. One of the main results of the paper deals with the generalization of the classical Jordan decomposition theorem. We show that for $r\in]0,1]$, a function of $r$-bounded variation can be written as the difference of two monotone functions, while for $r>1$, under minimal assumptions, such functions can be treated as approximately monotone functions which can be closely approximated by a nondecreasing majorant. We also prove that for $0<r_1<r_2$, the class of functions of $r_1$-bounded variation is contained in the class of functions of $r_2$-bounded variation. We go through approximately monotone functions and present a possible decomposition for $f:I(\subseteq \mathbb{R_+})\to\mathbb{R}$ satisfying the functional inequality $$f(x)\leq f(y)+(y-x)^{p}\quad (x,y\in I\mbox{ with $x<y$ and $ p\in]0,1[ $}).$$ A generalized structural study has also been carried out in that specific section. On the other hand, for $\ell[a,b]\geq d$, a function satisfying the following monotonicity condition will be termed $d$-periodically increasing: $$f(x)\leq f(y)\quad \mbox{for all}\quad x,y\in I\quad\mbox{with}\quad y-x\geq d.$$ We establish that on a compact interval any bounded function can be decomposed as the difference of a monotone and a $d$-periodically increasing function.
Angshuman R. Goswami
2023-06-05T09:06:59Z
http://arxiv.org/abs/2306.03113v1
# Generalizing the concept of bounded variation ###### Abstract. Let \([a,b]\subseteq\mathbb{R}\) be a non-empty, non-singleton closed interval and let \(P=\{a=x_{0}<\cdots<x_{n}=b\}\) be a partition of it. Then \(f:[a,b]\to\mathbb{R}\) is said to be a function of \(r\)-bounded variation if the expression \(\sum\limits_{i=1}^{n}|f(x_{i})-f(x_{i-1})|^{r}\) is bounded for all possible partitions like \(P\). One of the main results of the paper deals with the generalization of the classical Jordan decomposition theorem. We show that for \(r\in]0,1]\), a function of \(r\)-bounded variation can be written as the difference of two monotone functions, while for \(r>1\), under minimal assumptions, such functions can be treated as approximately monotone functions which can be closely approximated by a nondecreasing majorant. We also prove that for \(0<r_{1}<r_{2}\), the class of functions of \(r_{1}\)-bounded variation is contained in the class of functions of \(r_{2}\)-bounded variation. We go through approximately monotone functions and present a possible decomposition for \(f:I(\subseteq\mathbb{R}_{+})\to\mathbb{R}\) satisfying the functional inequality \[f(x)\leq f(y)+(y-x)^{p}\quad(x,y\in I\text{ with }x<y\text{ and }p\in]0,1[).\] A generalized structural study has also been carried out in that specific section. On the other hand, for \(\ell[a,b]\geq d\), a function satisfying the following monotonicity condition will be termed \(d\)-periodically increasing: \[f(x)\leq f(y)\quad\text{for all}\quad x,y\in I\quad\text{with}\quad y-x\geq d.\] We establish that on a compact interval any bounded function can be decomposed as the difference of a monotone and a \(d\)-periodically increasing function. The core details related to past results, motivation, and the structure of each section are thoroughly discussed below. Key words and phrases:Approximate Monotonicity, Bounded Variation, Decomposition 2000 Mathematics Subject Classification: Primary 26A45; Secondary 26A48, 26B30 ## 1. Introduction The aim of this paper is to study the generalized notion of bounded variation. The paper can be subdivided into two parts: the first part focuses on power-oriented bounded variation, while the second half studies distance-dependent variation. At the beginning of each section, a brief discussion of the structural characterization and various inclusion properties is also provided. Let \(r\in]0,\infty[\) be fixed. A function \(f:[a,b]\to\mathbb{R}\) is said to satisfy \(r\)-bounded variation if for any partition \(P=\{a=x_{0},\cdots,x_{n}=b\}\) with \(x_{0}<\cdots<x_{n}\), the expression \(\sum\limits_{i=1}^{n}|f(x_{i})-f(x_{i-1})|^{r}\) is bounded. It can be easily observed that for the case \(r=1\), this notion reduces to the well-known definition of bounded variation. The notion of functions of bounded variation was first introduced by Jordan in his paper [3]. Later the concept was generalized by Wiener in the paper [4], where he investigated several topics related to Fourier Analysis. His generalized form is widely known as \(p\)-variation, and its definition is the same as the above-mentioned \(r\)-variation with \(r\geq 1\). Since then many more
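To make the definition concrete, here is a small numerical sketch (not from the paper) that evaluates the sum \(\sum_i |f(x_i)-f(x_{i-1})|^r\) for one sample function on successively finer uniform partitions; the \(r\)-variation of the paper is the supremum of this quantity over all partitions.

```python
# Small numerical sketch (not from the paper): evaluate the r-variation sum
# sum_i |f(x_i) - f(x_{i-1})|**r for one choice of f and uniform partitions of
# increasing fineness. r-bounded variation asks this quantity to stay bounded
# over *all* partitions.

import numpy as np

def r_variation_sum(f, partition, r):
    """Return the sum over consecutive partition points of |f(x_i)-f(x_{i-1})|**r."""
    x = np.asarray(partition, dtype=float)
    return float(np.sum(np.abs(np.diff(f(x))) ** r))

a, b = 0.0, 2 * np.pi
f = np.sin  # an example function of classical (r = 1) bounded variation on [a, b]

for r in (0.5, 1.0, 2.0):
    sums = [r_variation_sum(f, np.linspace(a, b, n + 1), r) for n in (4, 64, 1024)]
    print(f"r = {r}:", [round(s, 3) for s in sums])
# For this smooth example the sums stay bounded for r >= 1 but grow with
# refinement for r = 0.5, illustrating that smaller r is the more restrictive
# requirement.
```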
2307.13029
Ro-vibrational Spectroscopy of CI Tau -- Evidence of a Multi-Component Eccentric Disk Induced by a Planet
CI Tau is currently the only T Tauri star with an inner protoplanetary disk that hosts a planet, CI Tau b, that has been detected by a radial velocity survey. This provides the unique opportunity to study disk features that were imprinted by that planet. We present multi-epoch spectroscopic data, taken with NASA IRTF in 2022, of the ${}^{12}$CO and hydrogen Pf$\beta$ line emissions spanning 9 consecutive nights, which is the proposed orbital period of CI Tau b. We find that the star's accretion rate varied according to that 9 d period, indicative of companion-driven accretion. Analysis of the ${}^{12}$CO emission lines reveals that the disk can be described with an inner and outer component spanning orbital radii 0.05-0.13 au and 0.15-1.5 au, respectively. Both components have eccentricities of about 0.05 and arguments of periapses that are oppositely aligned. We present a proof-of-concept hydrodynamic simulation that shows a massive companion on a similarly eccentric orbit can recreate a similar disk structure. Our results allude to such a companion being located around an orbital distance of 0.14 au. However, this planet's orbital parameters may be inconsistent with those of CI Tau b whose high eccentricity is likely not compatible with the low disk eccentricities inferred by our model.
Janus Kozdon, Sean Brittain, Jeffrey Fung, Josh Kern, Stanley Jensen, John Carr, Joan Najita, Andrea Banzatti
2023-07-24T18:00:02Z
http://arxiv.org/abs/2307.13029v2
# Ro-vibrational Spectroscopy of CI Tau -- ###### Abstract CI Tau is currently the only T Tauri star with an inner protoplanetary disk that hosts a planet, CI Tau b, that has been detected by a radial velocity survey. This provides the unique opportunity to study disk features that were imprinted by that planet. We present multi-epoch spectroscopic data, taken with NASA IRTF in 2022, of the \({}^{12}\)CO and hydrogen Pf\(\beta\) line emissions spanning 9 consecutive nights, which is the proposed orbital period of CI Tau b. We find that the star's accretion rate varied according to that 9 d period, indicative of companion driven accretion. Analysis of the \({}^{12}\)CO emission lines reveals that the disk can be described with an inner and outer component spanning orbital radii 0.05-0.13 au and 0.15-1.5 au, respectively. Both components have eccentricities of about 0.05 and arguments of periapses that are oppositely aligned. We present a proof-of-concept hydrodynamic simulation that shows a massive companion on a similarly eccentric orbit can recreate a similar disk structure. Our results allude to such a companion being located around an orbital distance of 0.14 au. However, this planet's orbital parameters may be inconsistent with those of CI Tau b whose high eccentricity is likely not compatible with the low disk eccentricities inferred by our model. accretion, circumstellar matter - planetary systems: protoplanetary disks - stars: individual (CI Tau) ## 1 Introduction The protoplanetary disks of T Tauri stars often host substructures such as rings/gaps (e.g., Long et al., 2018), spiral arms (e.g., Phuong et al., 2020), and large-scale asymmetries (e.g., Dyda et al., 2015) (see Bae et al. (2022) for a review). These features may be produced by processes such as condensation fronts or snow lines (e.g., Pinilla et al., 2017), radiation pressure (e.g., Bi & Fung, 2022), gravitational instabilities (e.g., Dong et al., 2018), and dynamical interactions with planets (Paardekooper et al., 2022, and references therein). Amongst these possibilities, planet-disk interactions have generated the most interest because of the connections with exoplanet discoveries. Exoplanets are now known to be ubiquitous (e.g., Batalha, 2014, van der Marel & Mulders, 2021 & Christiansen, 2022). The majority of detected exoplanets were found by transit and radial velocity surveys that target mature stars well beyond the stage when gas-rich, planet-building disks are present. On the other hand, detections of planets accompanying T Tauri Stars (TTSs) and other Young Stellar Objects (YSOs) that still have their protoplanetary disks are quite rare. This disparity arises because YSOs exhibit substantial stellar activity that, alongside an optically-thick disk, obscures and dampens planetary signatures. Techniques that characterize planets through their imprints on the protoplanetary disks can supplement traditional planet detection techniques. PDS 70 b & c are two known protoplanets around a TTS (Keppler et al., 2018; Muller et al., 2018; Haffert et al., 2019), while AB Aur b (Currie et al., 2022) and CI Tau b (Johns-Krull et al., 2016; Clarke et al., 2018) are two of the next likely candidates. Amongst these systems, CI Tau b is unique because it is a potential hot Jupiter that has been detected by Doppler monitoring whereas PDS 70 b & c and AB Aur b were detected by direct imaging at large separations. 
This allows for CI Tau b to have a well-constrained mass and orbital parameters (Johns-Krull et al., 2016; Flagg et al., 2019). As such, one should be able to apply theories of planet-disk interactions to observations of the inner disk of CI Tau, and directly test the theoretical predictions. For example, simulations indicate that a massive companion should induce an eccentricity in the disk (Kley and Dirksen, 2006; Teyssandier and Ogilvie, 2017) and that the disk drives an eccentricity on the planet (Papaloizou et al., 2001; Rosotti et al., 2017; Duffell and Chiang, 2015; Ragusa et al., 2018; Muley et al., 2019). CI Tau is a \(\sim\)2 Myr old TTS of spectral type K5.5 located at a distance of \(d\) = 160\(\pm\)10 pc (Gaia Collaboration et al., 2022) with a mass of \(M_{*}\) = 1.02\(\pm\)0.001 \(M_{\odot}\)(Law et al., 2022). Its hot Jupiter companion, CI Tau b (Johns-Krull et al., 2016), has a mass of 11.6\(\pm\)2.8 \(M_{\rm J}\)(Flagg et al., 2019) and an eccentricity of \(e\) = 0.25\(\pm\)0.16 (Flagg et al., 2019). CI Tau b's orbital period of \(P_{\rm orb}\) = 9\(\pm\)0.5 d (Biddle et al., 2021) translates to a semi-major axis of \(a\) = 0.085 au. The system parameters utilized in our equations are presented in Table 1. It is expected for massive companions to carve out deep and wide gaps which are identifiable as a lack of NIR emission in the system's Spectral Energy Distribution (SED) (Gonzalez et al., 2012). This is not the case for CI Tau as it has a substantial NIR excess (McClure et al., 2013). However, Muley and Dong (2021) found that a gap carved by a companion like CI Tau b can potentially be indistinguishable from an undepleted disk -- at least at the NIR end of the SED. The planet's existence was directly questioned by Donati et al. (2020) who suggested that the signatures that have been attributed to the planet can, instead, be replicated by stellar activity. However, Biddle et al. (2018) found separate signatures for the planet's orbital period and the star's rotational period. They argued that both signatures cannot be fully attributed to the star alone. Massive companions may, to some degree, mediate stellar accretion (Fouchet et al., 2010). Teyssandier and Lai (2020) found that such modulation can even be driven to the point of matching the companion's orbital period (see also Artymowicz and Lubow, 1996; Munoz and Lai, 2016). Such periodic accretion has been observed in binary systems before, e.g. in photometric monitoring of the young stars DQ Tau (Tofflemire et al., 2017) and TWA 3A (Tofflemire et al., 2017). We seek to capture similar behavior in the Pf\(\beta\) line, which has been calibrated against accretion luminosity (Salyk et al., 2013), originating from CI Tau. We utilize high resolution spectroscopy for detailed analysis of the emission line profiles originating from CI Tau to characterize its inner protoplanetary disk. Specifically, we study the ro-vibrational NIR emission of the \({}^{12}\)CO; which CTTS systems like CI Tau commonly emits (Najita et al., 2003). The distribution of the emitting gas can be determined from the line profiles given a known stellar mass and disk inclination. Asymmetric features can elude to the presence of disk sub-structures such as disk winds (Pontoppidan et al., 2011), circumplanetary disks (CPDs; Brittain et al. (2019)) and disk eccentricities (Liskowsky et al., 2012). Also, the disk's atmospheric temperature can be estimated from the relative transition strengths. The following section (Sec. 
2) provides a brief overview of the data collection routine and reduction process. Sec. 3 describes the asymmetries in the emission line profiles and the variabilities are discussed in Sec. 3.1. CI Tau's average 2022 profile is analyzed and compared to simulated models in Sec. 3.2; first fitted to a circular disk model (Sec. 3.2.1) and then a two-component disk model (Sec. 3.2.2). Results from the profile fitting are compared to a hydrodynamic simulation in Sec. 3.3. Sec. 4 addresses lingering questions and how our results contribute to the debate around CI Tau's hot Jupiter companion. Lastly, Sec. 5 highlights our main results. ## 2 Observations and Reductions The spectra were collected with the iSHELL cross-dispersion echelle spectrograph at the NASA Infrared Telescope Facility (IRTF) (Rayner et al., 2022) for 11 of the 12 epochs in this study. Two of the epochs, from 03 Oct 2018 and 03 Jan 2019, were first included in Banzatti et al. (2022) and the rest, obtained during the period of 21-29 Jan 2022, are presented in this work for the first time. In addition, archival data was downloaded for one epoch, 10 Oct 2008, from an older survey with NIRSPEC (McLean et al., 1998) at the W. M. Keck Observatory. See Table 2 for an observation log. In 2022, we observed CI Tau for 9 consecutive nights to cover one full orbital period of the companion CI Tau b. The spectra were acquired with an ABBA nodding pattern that allowed for the images to be combined in an A-B-B+A pattern. This removes sky emission to first order. The position angle of the slit on the sky was along the semimajor axis of the system's outer disk. For CI Tau we took 5 second exposures with 12 coadds while the telluric standard was observed with 5 second exposures with 5 coadds. For the iSHELL observations, the slit width of 0.375\({}^{\prime\prime}\) provided a spectral resolution of \(\sim\)92,000 (Banzatti et al., 2022). The average during the 2022 observation run was 0.84\({}^{\prime\prime}\). The NIRSPEC observations were acquired with a 0.432\({}^{\prime\prime}\) slit width that provided a spectral resolution of \(\sim\)25,000. The iSHELL data were reduced using the SpeXtool5 pipeline (Cushing et al., 2004; Vacca et al., 2003) with which the calibration frames (sky frame and master flat) are prepared and the spectra orders are then straightened, extracted and wavelength calibrated using a sky model. The orders are then stitched together to construct the full spectrum. The telluric corrections are carried out by ratioing the wavelength calibrated science and standard stars. This cancels out the sky as well as stellar contributions. Utilizing a model generated by the Sky Synthesis Program (Kunde & Maguire, 1974), the same process is followed to reduce the NIRSPEC data with a custom routine described in Brittain et al. (2003). ## 3 Results The average \(M\)-band spectrum from the 2022 observation run is presented in Fig. 1. It covers the \({}^{12}\)CO v=1-0 transitions from R(15)-P(42) as well as the hydrogen Pf\(\beta\) transition. The low-J lines where J\(\leq\)20 (J being the rotational quantum number of the lower state) and \(\tilde{\nu}_{\rm J}\geq 2060\) cm\({}^{-1}\) (where \(\tilde{\nu}_{\rm J}\) is the central wavenumber of a transition) have a narrow absorption component superimposed on the broad emission features. This is an indication of foreground CO at a lower temperature than the emitting gas of the same species. 
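As a toy illustration of the A-B-B+A nod combination used in the reduction described above (a sketch, not the SpeXtool or NIRSPEC pipelines), the snippet below builds four synthetic frames that share one sky background and shows how the combination removes it while leaving a positive trace at nod A and a negative trace at nod B. All array shapes and flux values are made up for illustration.

```python
# Toy illustration of the A-B-B+A nod combination. The four synthetic frames
# share the same sky array, so the cancellation is exact here; with real data
# the sky varies between exposures and only cancels to first order.

import numpy as np

rng = np.random.default_rng(0)
shape = (64, 256)                                    # made-up detector format
sky = 100.0 + rng.normal(0.0, 1.0, shape)

source_A = np.zeros(shape); source_A[20, :] = 50.0   # trace at nod position A
source_B = np.zeros(shape); source_B[44, :] = 50.0   # trace at nod position B

A1, B1, B2, A2 = sky + source_A, sky + source_B, sky + source_B, sky + source_A

combined = A1 - B1 - B2 + A2                         # positive trace at A, negative at B
print("median residual background:", np.median(combined))
```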
The average emission line profiles of the \({}^{12}\)CO low-J (J\(\leq\)20) lines, high-J (J\(>\)20) lines and the hydrogen Pf\(\beta\) line are presented in Fig. 2. The epochs start at the bottom and progress upwards (the 2022 epochs begin with #4) along with the 2022 average (red) plotted over each for comparison. To construct the average profile, each individual profile is adjusted so that their centers are aligned with each other in Doppler shifted velocity space (typically less then 0.5 km s\({}^{-1}\)). This is done because the wavelength calibrations may not be the same between nights. The profiles are then shifted again to correct for their respective dates' Doppler motion to center the profiles at V = 0 km s\({}^{-1}\). Finally, the average profile is calculated while excluding the profiles that have significant corruption. The average stacked line profiles for the \({}^{12}\)CO low-J lines are presented in the left column of Fig. 2. They possess an absorption component and a telluric contribution that we remove during the reduction pipeline-this is seen as a gap in the profiles. Because of these external features the majority of the profile analysis is done on the high-J lines (center: Fig. 2). Emission from a CPD would appear in the profiles as an asymmetric feature that oscillates over an orbital period with Keplerian velocities. The high-J line profiles (center; Fig. 2) displayed no clear asymmetric feature that varied over the 9 days. The lack of a variable feature places a limit on the emission from a CPD associated with CI Tau b and can be used to constrain the CPD radius. To estimate the lower limit of our ability to detect CPDs, we estimate the minimum radius (R\({}^{\rm limit}_{\rm CPD}\)) a CPD must have to be detected in our data. A best-fit temperature profile of the inner disk (Sec. 3.2.1) indicates the temperature at CI Tau b's location is about 2300 K. This temperature is assumed to correspond to that of the CPD for simplicity. Using the M-band centered wavenumber (2127.7 cm\({}^{-1}\)), the CPD's flux density is \(B_{\rm J}(\tilde{\nu}_{\rm J},T)=4.1\times 10^{4}\) erg s\({}^{-1}\) cm\({}^{-2}\) ster\({}^{-1}\) cm, where \(B_{\rm J}(\tilde{\nu}_{\rm J},T)\) is the Planck function. The intensity of the CPD, \(I_{\rm CPD}\), and \(B_{\rm J}(\tilde{\nu}_{\rm J},T)\) are related by \(I_{\rm CPD}=\Omega B_{\rm J}(\tilde{\nu}_{\rm J},T)\), where \(\Omega\) is the solid angle \(\Omega=\pi(R_{\rm CPD}/d)^{2}\) and \(d\) is the distance to the system (see Table 1). We set \(I_{\rm CPD}\) to our detection limit of 5\(\sigma_{\rm F}\), where \(\sigma_{\rm F}=4.1\times 10^{-15}\) erg s\({}^{-1}\)cm\({}^{-2}\)cm is the standard deviation of the spectrum noise. Finally, \(R^{\rm limit}_{\rm CPD}\) can be estimated as \[R^{\rm limit}_{\rm CPD}\approx d\sqrt{\frac{5\sigma_{F}}{\pi\rm B_{\rm J}( \tilde{\nu}_{\rm J},\,T)}}\,, \tag{1}\] (see also Pineda et al. (2019)). We find \(R^{\rm limit}_{\rm CPD}=0.013\) au, or about 27 Jupiter radii. A CPD encompasses a fraction of its Hill radius (\(R_{\rm H}\)) and may be as large as 0.5\(R_{\rm H}\)(Machida et al., 2008). Using the system parameters from Table 1, and \(a=0.085\) au, we obtain \[R_{\rm CPD}\approx 0.5R_{\rm H}=0.5a(1-e)\left(\frac{M_{\rm P}}{3M_{*}} \right)^{\frac{1}{3}}\approx 0.005\ {\rm au}\,. \tag{2}\] Since this value falls below our detection limit \(R^{\rm limit}_{\rm CPD}\), we are not able to confirm whether the CPD around CI Tau b is present or not. 
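The two estimates above can be reproduced directly from the quantities quoted in the text; the short sketch below plugs them into Eqns. 1 and 2 (the unit constants are standard values, everything else is taken from the paper as quoted).

```python
# Plugging the quoted numbers into Eqns. 1 and 2.

import math

PC_CM, AU_CM, RJUP_CM = 3.086e18, 1.496e13, 7.149e9

d = 160 * PC_CM                  # distance to CI Tau
sigma_F = 4.1e-15                # spectrum noise, erg s^-1 cm^-2 cm
B_J = 4.1e4                      # Planck function at ~2300 K, erg s^-1 cm^-2 ster^-1 cm

# Eqn. 1: smallest CPD radius detectable at 5 sigma
R_limit = d * math.sqrt(5 * sigma_F / (math.pi * B_J))
print(f"R_CPD_limit = {R_limit / AU_CM:.3f} au = {R_limit / RJUP_CM:.1f} R_Jup")

# Eqn. 2: expected CPD size, 0.5 R_Hill for CI Tau b's quoted parameters
a_au, e = 0.085, 0.25
M_p_over_Mstar = 11.6 * 9.543e-4 / 1.02          # 11.6 M_J around a 1.02 M_sun star
R_CPD = 0.5 * a_au * (1 - e) * (M_p_over_Mstar / 3.0) ** (1.0 / 3.0)
print(f"R_CPD ~ {R_CPD:.4f} au")
```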
Although short term variability associated with a CPD was not observed in the 2022 epochs (#4-12), there is a clear overall difference when compared to the earlier epochs (#1-3). This is further discussed in Sec. 4. The hydrogen Pf\(\beta\) profiles are presented in the right column of Fig. 2 with the 2022 average overplotted. Since this is an individual line at 2148.7 cm\({}^{-1}\), no averaging was performed. Because hydrogen probes regions much closer to the star, their maximum velocities are much higher. These profiles also do not exhibit the double peaked structure that is often seen from disks. This transition suffers from a degree of blending because it occurs between the \({}^{12}\)CO R(0) and R(1) lines. Because of this we limit the hydrogen profiles to velocities from V = -250 km s\({}^{-1}\) to +200 km s\({}^{-1}\). Similar to the CO profiles, the earlier epochs of the hydrogen profiles differ from the 2022 ones. Unlike CO, though, they do display some variability through the consecutive 2022 epochs. Since hydrogen elucidates stellar behavior, this may be an indication of the companion modulating stellar accretion rates (see Sec. 3.1). Figure 1: 2022 average M-band spectrum of CI Tau (described in Sec. 3). Captured are the \({}^{12}\)CO emissions originating from the disk's surface. The low-J transitions (\(\bar{\nu}\geq 2058\) cm\({}^{-1}\)) have a superimposed narrow absorption feature indicative of lower temperature gas of the same species being present in the foreground. The most prominent feature is the hydrogen Pf\(\beta\) line at 2147.8 cm\({}^{-1}\) which is used to infer stellar accretion rates (see Sec. 3.1). ### Variability The equivalent widths of the average line profiles of the CO transitions are presented in Table 3 and plotted as red points in Fig. 3. The 2022 average value is 21.8 km s\({}^{-1}\), represented as a horizontal red dashed line. During the 2022 observation run, fluctuations about the average value are generally consistent with the measurement uncertainties of about 0.4 km s\({}^{-1}\); as a result, we have not detected any significant variability over the 9 d period. We do note that epoch #7 has a notably lower value of 19.46 km s\({}^{-1}\) while epoch #11 has a higher value of 24.30 km s\({}^{-1}\). We consider them outliers but they may be physically meaningful if future observations find similar patterns. If there is a pattern over the course of an orbit then it is not clear. Overall, the 2022 measurements (#4-12) are markedly lower than those seen in earlier epochs (#1-3). This indicates changes in the disk over timescales much longer than the 9 d period of the 2022 run. This is discussed further in Sec. 4. Also tabulated in Table 3 are the equivalent widths of the hydrogen Pf\(\beta\) line alongside derived accretion rates (described later in this Sec.). The equivalent widths are plotted in Fig. 3 as blue points with the same units as CO. The hydrogen equivalent widths have a 2022 average of 73.7 km s\({}^{-1}\), which is plotted as a horizontal dashed blue line. The equivalent widths do display a noticeable amount of variation over the 9 d period. Figure 2: The normalized line profiles of CI Tau (described in Sec. 3) with the 2022 average (red). The two left panels display the \({}^{12}\)CO low-J and high-J lines, where the 2022 epochs (#4-12) remained consistent but a clear variation is present when compared to the earlier epochs #1-3. The right panel displays the hydrogen Pf\(\beta\) line where variation was observed between every epoch. 
Starting with the higher value at epoch #4 with 99.2 km s\({}^{-1}\), the equivalent width then gradually dips to 64.3 km s\({}^{-1}\) by epochs #8/#9 and then rises again to 86.6 km s\({}^{-1}\) at epoch #12. The small uncertainties in our measurements lend confidence to this trend being physical. The 2022 measurements are higher than the earlier measurements in general. Because the 2022 measurements are themselves variable and the earlier epochs have large uncertainties, it is inconclusive whether the prior epochs truly differ from the 2022 values. This is discussed further in Sec. 4. Since we lack an absolute flux calibration, it remains uncertain during the 2022 observation run how much variability is attributable to the star itself. The stellar variability over the course of a day can be gauged by studying previous surveys. The AllWISE Multiepoch Photometry Database provides 4.6 \(\mu\)m photometry with 21 hrs of coverage on 24 Feb 2010 and 24 hrs on 4 Sept 2010. The variation during those nights averaged 7% and that is smaller than what is captured in the Pf\(\beta\) line. Thus, fluctuations caused by the star are likely insufficient to explain the variability we observe in the Pf\(\beta\) line. Nonetheless, a photometrically calibrated spectroscopic study would help clarify the nature of the observed variability. The luminosity of the Pf\(\beta\) line has been correlated to UV-excess-derived accretion luminosities and, thus, can be used as a reliable tracer of stellar accretion rates (Salyk et al., 2013). The hydrogen equivalent widths are converted to their corresponding line luminosities utilizing the available AllWISE photometric data and the Gaia DR3 distance. We do not include any extinction correction because it is expected to be negligible. While an M-band extinction coefficient, \(A_{\rm M}\), for CI Tau is not available, one may estimate it using the J-band extinction coefficient, \(A_{\rm J}\), of 0.51 reported by Kenyon and Hartmann (1990). Assuming ISM-like grain properties, we have \(A_{\rm M}/A_{\rm J}=0.095\) (Mathis, 1990), which translates to \(A_{\rm M}=0.048\), or a flux correction of only about 4%. The Pf\(\beta\) line luminosity (\(L_{\rm Pf\beta}\)) is converted to an accretion luminosity (\(L_{\rm acc}\)) using the relation given by Salyk et al. (2013): \[\log\left(\frac{L_{\rm acc}}{L_{\odot}}\right)=0.91\log\left(\frac{L_{\rm Pf\beta}}{L_{\odot}}\right)+3.29\,, \tag{3}\] where we have elected to drop the uncertainties on the numerical constants as they are largely systematic and only serve to shift all data points equally. It is more meaningful to look at the uncertainties due to the measured equivalent widths alone. The accretion luminosities are converted to stellar accretion rates (\(\dot{M}\)) by equating \(L_{\rm acc}\) to the release of gravitational energy from \(R_{\rm cor}\), the co-rotation radius in the disk, to \(R_{*}\), the host star's radius: \[\dot{M}=\frac{R_{*}L_{\rm acc}}{{\rm G}M_{*}}(1-R_{*}/R_{\rm cor})^{-1}\,. \tag{4}\] The factor \((1-R_{*}/R_{\rm cor})^{-1}\) reduces to 1.25 when \(R_{\rm cor}\) is assumed to be 5\(R_{*}\) (Hartmann et al., 1998). We find an average accretion rate of \(\dot{M}=\) 3.1\(\pm\)2.2 \(\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\), which agrees with the previously reported value of \(\dot{M}=\) 2.5\(\pm\)1.8 \(\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\) (Donati et al., 2020). The accretion rate varied by 35% over the 9 d orbital period. 
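The chain from line luminosity to accretion rate (Eqns. 3 and 4) can be sketched as below. The Pf\(\beta\) luminosity and the stellar radius used here are illustrative assumptions (neither is quoted in this excerpt), chosen so that the result lands near the 2022 average rate reported in the text.

```python
# Sketch of the Eqn. 3 -> Eqn. 4 chain: Pf-beta line luminosity to accretion
# luminosity to stellar accretion rate. L_Pfbeta and R_star below are assumed,
# illustrative inputs.

import math

L_SUN, M_SUN, R_SUN = 3.828e33, 1.989e33, 6.957e10   # cgs
G, YEAR = 6.674e-8, 3.156e7

M_star = 1.02 * M_SUN
R_star = 2.0 * R_SUN          # assumed stellar radius

def accretion_rate(L_pfbeta_over_Lsun, R_cor_over_Rstar=5.0):
    # Eqn. 3: log(L_acc/L_sun) = 0.91 * log(L_Pfbeta/L_sun) + 3.29
    L_acc = 10.0 ** (0.91 * math.log10(L_pfbeta_over_Lsun) + 3.29) * L_SUN
    # Eqn. 4: Mdot = R_* L_acc / (G M_*) * (1 - R_*/R_cor)^-1
    mdot = R_star * L_acc / (G * M_star) / (1.0 - 1.0 / R_cor_over_Rstar)
    return mdot * YEAR / M_SUN                        # M_sun per year

print(f"Mdot ~ {accretion_rate(9e-5):.2e} M_sun/yr")  # ~3e-8 for this assumed input
```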
Compared with the theoretical hydrodynamic models of Teyssandier and Lai (2020), this level of variability can be caused by a companion with a mass of 9.4 M\({}_{\rm J}\) and an orbital eccentricity of about 0.05. Although that eccentricity is lower than that reported for CI Tau b (Flagg et al., 2019), it is not beyond the realm of possibility given the large uncertainty in the reported eccentricity. Also, one should bear in mind that Teyssandier and Lai (2020) studied the disk accretion rate, which may have quantitative differences from the stellar accretion rate. While the companion may drive strong modulation in the flow of gas near its orbit, for that gas to travel further inward and ultimately accrete onto the star, it must still be subjected to some disk transport mechanisms. Transport by turbulence, for example, is diffusive and would likely smooth out the accretion flow. Detailed modeling (e.g., Fouchet et al., 2010) is needed to verify whether the modulation we have detected here is consistent with the measured parameters of CI Tau b. We estimate an upper limit to CI Tau b's accretion rate (\(\dot{M}_{\rm p}^{\rm limit}\)) by applying Eqn. 4 where the \((1-R_{*}/R_{\rm cor})^{-1}\) factor is now 1 by assuming, for simplicity, the gas falls onto the planet from a distance much larger than the planet's radius. We can write \[\dot{M}_{\rm p}^{\rm limit}=\frac{R_{\rm p}}{{\rm G}M_{\rm p}}\,L_{\rm acc}^{\rm limit}\,, \tag{5}\] where we choose \(R_{\rm p}\) to be 2 Jupiter radii. According to Spiegel and Burrows (2012, 2013), a 2 Myr giant companion with a mass of 10 M\({}_{\rm J}\) will have a radius of 1.94 R\({}_{\rm J}\) or 1.14 R\({}_{\rm J}\) depending on how it formed. Since CI Tau b has a slightly larger mass we used a value of 2 R\({}_{\rm J}\) for simplicity. L\({}_{\rm acc}^{\rm limit}\) in Eqn. 5 is related to our 5\(\sigma_{\rm F}\) detection limit (see the sixth paragraph in Sec. 3), or \[L_{\rm acc}^{\rm limit}\approx 5\sigma_{\rm F}4\pi d^{2}\Delta V\,, \tag{6}\] where \(\Delta V\) is the width of the planet's hydrogen line. Assuming the planet's magnetospheric accretion flow speed is of order the free fall speed \(\sqrt{GM_{\rm p}/R_{\rm p}}\approx 100\) km s\({}^{-1}\) and the line width is roughly two times that, we get \(\Delta V\approx 200\) km s\({}^{-1}\) or about 1.3 cm\({}^{-1}\) in wavenumber units. Finally, plugging these values into Eqns. 5 and 6, the upper limit for CI Tau b's accretion rate is about \(1\times 10^{-11}\)\(M_{\odot}\) yr\({}^{-1}\), or \(1\times 10^{-8}\)\(M_{\rm J}\) yr\({}^{-1}\). ### Emission Line Analysis #### 3.2.1 Circular Disk Model We begin our analysis of the emission lines by assuming a circular disk. The spectrum (Fig. 1) is fitted with a simple two-dimensional slab model that treats the emission as if it arises from a geometrically thin disk. A radial grid is defined from \(r\)=[\(r_{\rm in}\),\(r_{\rm out}\)] which translates to the Keplerian velocity \[V_{\rm K}(r)=\sqrt{GM_{*}\over r}\,. \tag{7}\] We choose to make the step sizes in the radial grid correspond with a step size of 1 km s\({}^{-1}\) in Keplerian velocities. This way the grid has a finer resolution closer to the star where the disk is more luminous. At each radius, the annulus is further divided by an angular array bounded from \(\theta\)=[0,2\(\pi\)]. \(\theta\) is defined as the angle from the line-of-sight that has been projected onto the plane of the disk. 
Similarly, we made the angular step sizes correspond to a change in the projected velocities of 1 km s\({}^{-1}\). The projected velocities are \[V_{\rm p}(r,\theta)=V_{\rm K}(r)\sin\theta\sin i\,, \tag{8}\] where \(i\) is the system's inclination. The projected velocities are then made the center of normalized Gaussian profiles which are represented as \[G(V-V_{\rm p})={1\over b\sqrt{\pi}}\exp\left({-(V-V_{\rm p})^{2}\over b^{2}}\right)\,. \tag{9}\] Here the Gaussian line width, \(b\), is given the value 8.8 km s\({}^{-1}\) to emulate a realistic level of blending between discrete projected velocities. Each annulus corresponds to a radially dependent temperature and surface number density. Assuming basic power laws, these are expressed as \[T(r)=T_{0}\left({r\over r_{\rm in}}\right)^{\alpha}\,, \tag{10}\] and \[N(r)=N_{0}\left({r\over r_{\rm in}}\right)^{\beta}\,. \tag{11}\] Assuming thermodynamic equilibrium, the populations of the ro-vibrational states are \[N_{\rm J}={Ng_{\rm J}\over Q(T)}\exp\left({-E_{\rm J}\over k_{\rm B}T}\right)\,, \tag{12}\] where \(E_{\rm J}\) and \(g_{\rm J}\) are the energy and degeneracy of state J, respectively. \(Q(T)\) is the partition function as a function of temperature. The optical depths, \(\tau_{\rm J}\), are \[\tau_{\rm J}={N_{\rm J}\over 8b\pi^{3\over 2}}{g_{\rm J}A_{\rm J}\over g_{\rm J}^{\prime}\tilde{\nu}_{\rm J}^{3}}\,, \tag{13}\] where \(A_{\rm J}\) and \(\tilde{\nu}_{\rm J}\) are the Einstein \(A\) coefficient and central wavenumber of the transition. \(g_{\rm J}^{\prime}\) is the degeneracy of the adjacent state. The flux densities are \[F_{\rm J}=(1-e^{-\tau_{\rm J}})\,\frac{\tilde{\nu}_{\rm J}}{\rm c}\,{\rm B}_{\rm J}(\tilde{\nu}_{\rm J},T)\,, \tag{14}\] where \(\rm c\) is the speed of light in a vacuum, and \({\rm B}_{\rm J}(\tilde{\nu}_{\rm J},T)\) is the Planck function expressed in units of wavenumbers (\(\rm cm^{-1}\)). The Planck function is evaluated at the central wavenumber of each transition, \(\tilde{\nu}_{\rm J}\), and at the local temperature, \(T\). The factor \(\tilde{\nu}_{\rm J}\)/c converts the units to Doppler-shifted velocities (\(\rm km~{}s^{-1}\)). Finally, the line profiles, \(I_{\rm J}(V)\), are obtained by convolving the emission from every grid point with \(G(V-V_{\rm p})\) and integrating over the full disk, or \[I_{\rm J}(V)=f_{\rm norm}\int_{0}^{2\pi}\int_{r_{\rm in}}^{r_{\rm out}}F_{\rm J}(r)\,{\rm G}(V-V_{\rm p}(r,\theta))\,r\,{\rm d}r\,{\rm d}\theta\,, \tag{15}\] where \(f_{\rm norm}\) normalizes the \(I_{\rm J}(V)\) to our observed line profiles. The profiles of the high-J lines are then averaged together and compared to CI Tau's profile (top left panel of Fig. 4). We perform a \(\chi^{2}\) optimization over six parameters: \(T_{0}\) and \(\alpha\) in the temperature profile (Eqn. 10); \(N_{0}\) and \(\beta\) in the surface density profile (Eqn. 11); \(r_{\rm in}\); and \(r_{\rm out}\). Our best-fit has a reduced \(\chi^{2}\) value of 3.3. The best-fit parameters are presented in Table 4. Figure 3: The equivalent widths (Sec. 3.1) of the average \({}^{12}\)CO lines and the hydrogen line (see Table 3) with the 2022 averages plotted as the horizontal dashed lines (\({}^{12}\)CO\({}_{\rm avg}\) = 21.8 km s\({}^{-1}\) & H\({}_{\rm avg}\) = 73.7 km s\({}^{-1}\)). In the 2022 epochs (#4-12) we see the hydrogen vary with a possible 9 d period whereas the \({}^{12}\)CO stayed consistent. The 2022 epochs are noticeably different from the earlier ones. 
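A minimal numerical sketch of the circular slab model described above is given below (it is not the authors' code). It implements the Keplerian velocity field of Eqn. 7, the projection of Eqn. 8, and the Gaussian broadening of Eqn. 9, then sums the grid as in Eqn. 15; the \(r^{-2}\) emissivity is only a placeholder for the temperature and optical-depth treatment of Eqns. 10-14.

```python
# Minimal sketch (not the authors' code) of the circular slab line-profile model.

import numpy as np

G, M_SUN, AU = 6.674e-8, 1.989e33, 1.496e13

def line_profile(r_in=0.05, r_out=0.9, M_star=1.02, incl_deg=71.0, b=8.8):
    v = np.linspace(-200.0, 200.0, 401)                 # km/s velocity grid
    r = np.linspace(r_in, r_out, 200) * AU              # radii in cm
    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    v_kep = np.sqrt(G * M_star * M_SUN / r) / 1e5       # Eqn. 7, km/s
    sin_i = np.sin(np.radians(incl_deg))
    profile = np.zeros_like(v)
    for ri, vk in zip(r, v_kep):
        v_proj = vk * np.sin(theta) * sin_i              # Eqn. 8
        weight = (ri / r[0]) ** -2.0 * ri                # placeholder emissivity * r dr
        gauss = np.exp(-((v[:, None] - v_proj[None, :]) ** 2) / b**2)  # Eqn. 9
        profile += weight * gauss.sum(axis=1)            # Eqn. 15 as a discrete sum
    return v, profile / profile.max()

v, p = line_profile()
half = len(v) // 2
print("peak velocities (blue/red half):",
      v[np.argmax(p[:half])], v[half + np.argmax(p[half:])], "km/s")
```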
We find that the temperature and surface density at the companion's location (\(r=0.085\) au) are 2297 K and 2.9\(\times 10^{19}\) cm\({}^{-2}\), respectively. The radial extent of the disk goes from \(r_{\rm in}=0.048\)\(\pm\)\(0.001\) au to \(r_{\rm out}=0.898\)\(\pm\)\(0.001\) au. In the top left panel of Fig. 4 we see that the circular disk model does broadly describe the observed line structures. However, when comparing just the red side of the profiles (\(V>0\) km s\({}^{-1}\)), we see velocities where the model overshoots the data (\(V\sim+50\) km s\({}^{-1}\)) but then undershoots it (\(V\sim+80\) km s\({}^{-1}\)). This is peculiar because it cannot be matched even if we assign an eccentricity to the disk. Applying an eccentricity shifts the profile towards one direction effectively raising/lowering the entirety of one side to create a typical asymmetric line profile, but it cannot simultaneously increase and decrease the fluxes at different velocities on the same side. This discrepancy motivates us to propose a more complex disk model in the following section. #### 3.2.2 Multi-component Disk Model CI Tau's profile is better modeled when its disk is instead composed of multiple components each having their own eccentricities and arguments of periapses. This is motivated by the fact that the line core appears blue-shifted while the wings are red-shifted (Fig. 4). If CI Tau's disk is divided into inner and outer components, the outer component can account for the blue-shifted core while the inner component accounts for the red-shifted wings of CI Tau's profile. In this two-component model, we introduce a break radius, \(r_{\rm b}\), that separates the inner and outer components. Within this radius the inner component will have an argument of periapse \(\omega_{\rm in}\) that is different from that of the outer component \(\omega_{\rm out}\). In our model, \(\omega_{\rm in}\) is defined to be the angle between the inner disk's semi-major axis and our line-of-sight projected onto the plane of the disk. The disks are required to be oppositely oriented to some degree in order to replicate CI Tau's profile (Fig.4) so, to reduce the number of free parameters, we chose to fix them to be anti-parallel (\(\omega_{out}=\omega_{in}+\pi\)). The validity of this assumption is discussed in Sec. 3.3. The two components are also given their own eccentricities, \(e_{\rm in}\) and \(e_{\rm out}\), which are treated as constants throughout their respective component. We assign a width parameter \(\Delta\) around the break radius where neither disk will be contributing flux. This may capture any potential disk gap that might be present. Unlike in Sec. 3.2.1 where we fitted the overall spectrum with a temperature and surface density profile (Eqns. 10 and 11) as a means to compute the disk's intensity (Eqn. 15), here we directly fit the intensity as a function of radius \(I(r)\). Since we are now fitting only to the high-J lines there is not enough information to be able to constrain the temperature and surface density. \(I(r)\) is fitted as a broken-power law that follows one exponent for the inner component \(\iota_{\rm in}\) and another one for the outer component \(\iota_{\rm out}\). The broken power law is defined as follows: \[I(r)=I_{0}\left(\frac{r}{r_{\rm b}}\right)^{-\iota_{\rm in}}\left(\frac{1}{2} \left(1+\left(\frac{r}{r_{\rm b}}\right)^{\frac{1}{2}}\right)\right)^{(\iota_ {\rm in}-\iota_{\rm out})\delta}\,, \tag{16}\] where \(I_{0}\) is a normalization constant. 
We fix \(\iota_{\rm in}=0\) because the inner disk is likely narrow and it helps simplify our fit. \(\delta\) determines the "smoothness" of transition between power-laws. We set it to be \(\delta=\log(1+\Delta/r_{\rm b})\) in order to emulate a smooth transition that occurs over the distance \(\Delta\). In the intensity profile, \(\iota_{\rm out}\) is the only parameter that we vary. Since our model now considers eccentricity, it is simpler to operate in semi-major axis space, where the distance to the star is now dependent on the semi-major axis, eccentricity and azimuthal angle: \[r=\frac{a(1-e^{2})}{1+e\cos(\theta-\omega)}\,, \tag{17}\] where the variables \(e\) and \(\omega\) are \(e_{in}/e_{out}\) or \(\omega_{in}/\omega_{out}\) for the inner/outer components, respectively. Like with the circular disk model, \(\theta\) is the angle between the grid element and the line-of-sight in the disk's plane. The projected velocities also change depending on the eccentricity and argument of periapse of the annulus. The new expression for the projected velocities is \[V_{\rm P}(a,\theta)=\sqrt{\frac{{\rm G}M_{*}}{a(1-e^{2})}}(\sin\theta+e\sin \omega)\sin i\,. \tag{18}\] An \(I(r,\theta)\) is calculated for every grid element and transformed into a line profile in velocity space by multiplying it with the Doppler-shifted normalized Gaussian profile described in Eqn. 9. This is done over the extent of the disk, summed together, and normalized to CI Tau's profile by adjusting \(I_{0}\) in Eqn. 16. Our \(\chi^{2}\) optimization finds a best-fit that has a reduced \(\chi^{2}\) of 0.984. This is a significant improvement over the circular disk model that has a reduced \(\chi^{2}\) of 3.3 (Sec. 3.2.1). The best-fit profile of the multi-component disk is displayed in the bottom left panel of Fig. 4 together with the contributions from the inner and outer disks. We confirm that the multi-component model can replicate the blue-shifted line core and the red-shifted wings seen in CI Tau's profile; which is not possible for a circular disk. The "bump" at V = 80 km s\({}^{-1}\) is an exaggerated by-product of our model because of the lack of flux from the region centered on the discontinuity. A more detailed analysis can be performed in the future where this is filled with some realistic emission that should aid in smoothing out the 'bump' and provide an even better fit. The best-fit parameters for the multi-component disk are presented in Table 4. The CO extends from \(r_{\rm in}\) = 0.052\(\pm\)0.002 au to \(r_{\rm out}\) = 1.53\(\pm\)0.16 au. Compared to the circular disk (Sec. 3.2.1), the inner radius marginally changed while the outer radius increased by 70%. The inner radius is also close to the truncation radius estimated by GRAVITY Collaboration et al. (2023) (\(0.034\pm 0.014\) au) and may extend to within the star's co-rotation radius \[R_{\rm rot}=\left(\frac{{\rm G}M_{*}P_{\rm rot}^{2}}{4\pi^{2}}\right)^{\frac{ 1}{3}}\,. \tag{19}\] Assuming Keplerian rotation, we calculate the co-rotation radius to be 0.069 au. Similar results have been found for other T Tauri stars (e.g., Carr, 2007). The same calculation for CI Tau's co-rotation radius was done by GRAVITY Collaboration et al. (2023) but there they assumed a different rotational period and stellar mass. The best-fit value for the eccentricities of the inner and outer components are \(e_{\rm in}\) = 0.056 \(\pm\) 0.015 and \(e_{\rm out}\) = 0.048 \(\pm\) 0.008. 
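For reference, the eccentric-ring geometry and kinematics used by the two-component model (Eqns. 17 and 18) can be written compactly as below. This is an illustrative sketch: the semi-major axes are arbitrary values, while the eccentricities and the 40 degree argument of periapse follow the best-fit values quoted in the text.

```python
# Sketch of Eqn. 17 (radius along an eccentric ring) and Eqn. 18 (projected
# velocity for that ring), using the quoted stellar mass and inclination.

import numpy as np

G, M_SUN, AU = 6.674e-8, 1.989e33, 1.496e13
M_STAR = 1.02 * M_SUN
SIN_I = np.sin(np.radians(71.0))

def radius(a_au, theta, e, omega):
    """Eqn. 17: distance from the star along a ring of semi-major axis a (au)."""
    return a_au * (1.0 - e**2) / (1.0 + e * np.cos(theta - omega))

def v_projected(a_au, theta, e, omega):
    """Eqn. 18: line-of-sight velocity in km/s for the same ring."""
    v_ref = np.sqrt(G * M_STAR / (a_au * AU * (1.0 - e**2))) / 1e5
    return v_ref * (np.sin(theta) + e * np.sin(omega)) * SIN_I

theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print(np.round(radius(0.10, theta, 0.056, np.radians(40.0)), 4))                 # inner ring radii
print(np.round(v_projected(0.10, theta, e=0.056, omega=np.radians(40.0)), 1))    # inner component
print(np.round(v_projected(0.20, theta, e=0.048, omega=np.radians(220.0)), 1))   # outer, anti-parallel
```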
These values can be seen as luminosity-weighted averages that are more representative of the inner edges of the components, which are more luminous. Overall, these eccentricities are much smaller than that of CI Tau b's orbit (\(e\) = 0.25; Flagg et al. (2019)). This may be an indication of the planet's orbit being less eccentric, since one might expect the planet's and disk's eccentricities to be similar. This is discussed further in Sec. 3.3. Our best-fit gives \(\Delta=0.009\pm 0.003\) au, which is the half-width of the disk break. As a reminder, it also represents a region in the disk (i.e., a "gap") where we assigned no emission. This is about 6% of \(r_{\rm b}\) = 0.14 au, the location of the break, which implies that the transition occurs sharply. Taking into account the eccentricities of the disks, the full width of the gap represented by \(\Delta\) varies between 0.0035 au at the apoapsis (periapsis) of the inner (outer) disk, to 0.032 au at the periapsis (apoapsis) of the inner (outer) disk. Considering the uncertainties in our fit, this gap is too narrow for us to definitively confirm its existence. It mainly helps to separate the two disks just enough to avoid overlapping (see the right panel of Fig. 4). We find the argument of periapsis of the inner disk (\(\omega_{\rm in}\)) to have an upper limit at 40\({}^{\circ}\). Again this describes the angle between the semi-major axis of the inner component and the line-of-sight projected onto the plane of the disk. This would correspond to the outer disk having a periapsis of \(\omega_{\rm out}=\omega_{\rm in}+\pi=220^{\circ}\). The outer component's intensity profile follows a \(\iota_{\rm out}\) = 3.11 dependence. These values are positive because a negative sign is incorporated in Eqn. 16. The large value of \(\iota_{\rm out}\) indicates a steep drop in flux--a sign that a large portion of the disk's emission originates from the inner component. We would like to remind the reader that our results are sensitive to the mass of the star and inclination of the disk. Here we assume an inclination of \(i\) = 71\({}^{\circ}\)(GRAVITY Collaboration et al., 2023) and a stellar mass of \(M_{*}\) = 1.02 \(M_{\odot}\)(Law et al., 2022) but these parameters have changed multiple times now. For example, the mass of the star was originally 0.8 \(M_{\odot}\)(Guilloteau et al., 2014) but then it was updated once to 0.9 \(M_{\odot}\)(Simon et al., 2019) and then again to the current value we used. It is possible that these values can be updated further and, if they do, then our best-fit radii would change by the following factor \[\frac{r_{\rm new}}{r_{71^{\circ}}}=\left(\frac{\sin(i_{\rm new})}{\sin(71^{ \circ})}\right)^{2}\left(\frac{M_{\rm new}}{1.02M_{\odot}}\right)\,, \tag{20}\] where \(r_{71^{\circ}}\) corresponds to the values assigned to the \(r_{\rm in}\), \(r_{\rm out}\) and \(r_{\rm b}\) parameters. Changing these quantities by a constant factor will have no qualitative effect on our overall disk model. ### Hydrodynamic Simulation In this section, we present a proof-of-concept simulation that qualitatively demonstrates how a planet might be able to generate the disk features inferred by our model. The simulation is performed using PEnGUIn (Fung, 2015) with a setup similar to the one used by Muley et al. (2019), but here the planet's mass (\(M_{\rm P}\)) and orbit (\(a\) and \(e\)) are fixed. 
We pick a representative planet mass \(M_{\rm P}=1\) M\({}_{\rm J}\), and assign it an orbital semi-major axis that equals \(r_{\rm b}\) in our multi-component model (Table 4), and an orbital eccentricity of 0.05, similar to the eccentricities of both the inner and outer disk. We choose not to use a planet mass as large as the proposed CI Tau b's mass (11.6 \(M_{\rm J}\)) because Kley & Dirksen (2006) had previously demonstrated that planets more massive than about 5 \(M_{\rm J}\) would excite disk eccentricities much larger than those inferred by our model. The simulation is locally isothermal and follows Eqn. 10 and Table 4 for the temperature profile. The surface density has an initial power-law profile the same as Eqn. 11 with \(\beta=-2\), but the normalization is set to 1 in code units. Because we do not consider the self-gravity of the disk, the normalization to the surface density profile plays no role in the dynamics simulated. In Fig. 5, we assign a physical value to the normalization based on the fact that we did not detect a planetary gap (Sec. 3.2.2). Also, we assume a constant CO/H\({}_{2}\) ratio. The Shakura-Sunyaev viscosity parameter is assumed to be 0.01. Our choice describes a turbulent disk, which is plausible at short distances from the star where the ionization fraction is expected to be high and the magnetorotational instability (MRI; Balbus & Hawley, 1991) is expected to be active; the choice of 0.01 is roughly similar to the most turbulent MRI-active disks (e.g., Simon & Hawley, 2009; Guan & Gammie, 2009). The final snapshot of the hydrodynamic simulation after 1000 planetary orbits is presented in Fig. 5. The top left and top right panels are in Cartesian and polar coordinate systems, respectively. The inner and outer disks have developed a subtle eccentricity that, generally, appears similar in magnitude to the planet's eccentricity, but only near the planet's orbit. A slightly higher planet mass and/or a higher planetary eccentricity might better match our observation. However, the disk would develop an eccentricity that is too high if the planet was given the same parameters as CI Tau b. Although the arguments of periapses of the inner and outer disks do not appear to be exactly anti-parallel as they were in our model (Fig. 4), they are far from aligned. The precise geometry of the inner and outer disks is complicated. While parts of the disk further away from the planet's orbit may be approximated by eccentric orbits (Kley & Dirksen, 2006), gas streamlines that co-orbit with the planet should follow horseshoe orbits that are deformed by the planet's eccentricity (Pan & Sari, 2004). Even though our slab model does not capture all the nuances, we are confident that it does capture the fact that the inner and outer disks have distinct geometries. Figure 4: The 2022 average stacked emission line profile of the high-J \({}^{12}\)CO lines is plotted with best-fit synthetic data alongside residuals (Sec. 3.2). The top-left panel is a circular disk (Sec. 3.2.1) fit that fails to replicate certain asymmetric features (see Table 4 for the best-fit parameters). The bottom-left panel is the multi-component disk fit (Sec. 3.2.2) which allowed for the inner+outer components of the disk to contribute independently and provide a much better fit (see Table 4 for the best-fit parameters). The right figure is a visual of the best-fit multi-component disk where the inner+outer components are eccentric and anti-parallel. 
Figure 5: The top row is the final snapshot of the hydrodynamic simulation (Sec. 3.3) with a hypothetical \(1M_{\rm J}\) planet that may reproduce the disk eccentricities we observed (0.05; Fig. 4). The color scale is logarithmic and in units of CO molecules cm\({}^{-2}\). The left and right panels are the same picture in Cartesian and polar coordinates, respectively. The white dashed line traces the orbit of the simulated planet. In this case, the outer disk develops an eccentricity similar to the planet’s, while the eccentricity of the inner disk is more subtle. The bottom row plots the azimuthally averaged CO column density profile. The orange solid curve is taken from the same simulation and time-averaged over the last orbit; the orange dashed curve represents the unperturbed disk; and the green solid curve is the emission we detected (Table 4; Sec. 3.2.1). A \(\tau=\)1 region was calculated for the P(20)-P(40) transitions and plotted as the red region (Sec. 3.3). The simulated profile (orange) is normalized so that, even at the bottom of the planetary gap, it lies above our detected CO column, illustrating that even though the gap is depleted by a factor of \(\sim\)100, its \({}^{12}\)CO emission can remain optically thick (Sec. 3.3). In this simulation, the planet carves out a gap that is depleted in gas by about two orders order of magnitude and about 0.05 au in width. If the disk is sufficiently dense, the CO emitting layer can remain undepleted, producing the optically thick emission we observed, even inside the gap; but what defines "sufficiently" dense is a complicated matter that involves not only the H/CO ratio, but also the expected abundances of CO at low column densities (e.g., Bruderer, 2013; Doppmann et al., 2017). To estimate how much CO is needed to produce optically thick emissions, we first set \(\tau_{\rm J}=1\) in Eqn. 13. From there, we obtain the ro-vibrational populations \(N_{\rm J}\), which is then converted to the CO surface density, \(N\), using Eqn. 12. In the bottom row of Fig. 5 we plot the \(\tau_{\rm J}=1\) region ("optically thick boundaries") for the high-J P(20)-P(40) lines. The azimuthally averaged column density profile of the simulation (orange: Fig. 5) is scaled so that it lies just above the \(\tau_{\rm J}=1\) regime and the detected CO surface density profile (green). This represents a possible version of the disk where a planet has carved out a gap that our observation is not sensitive to, though it is possible for the disk to have an even higher density. Taking this further, we can see what constraint this assumption implies for the disk mass. In the bottom of Fig. 5, we see that the unperturbed profile (orange dashed) of the simulated model needs to be about two orders of magnitude higher than Eqn. 11, the column density of the emitting layer. Or, more precisely, the full column density may be expressed as \(\Sigma=9.0\times 10^{21}(r/r_{\rm in})^{-2}\) cm\({}^{-2}\). The total number of CO molecules can then be calculated by integrating this over the extent of the disk. Or, \[N_{\rm CO}=2\pi\int_{r_{\rm in}}^{r_{\rm out}}9.0\times 10^{21}\ {\rm cm}^{-2} \left(\frac{r}{r_{\rm in}}\right)^{-2}r\,{\rm d}r\,. \tag{21}\] Using the parameters from Table 4, we get \(N_{\rm CO}\sim 10^{47}\) CO molecules. Assuming a CO/H\({}_{2}\) ratio of 1.6\(\times 10^{-4}\)(France et al., 2014), this translates to a mass of about \(10^{-6}\ M_{\odot}\) or \(10^{-3}\ M_{\rm J}\). 
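The disk-mass estimate of Eqn. 21 is straightforward to reproduce numerically; the sketch below integrates the quoted column-density profile between the Table 4 radii and converts the CO count to a gas mass using the CO/H\({}_{2}\) ratio of 1.6\(\times 10^{-4}\) used in the text (hydrogen is assumed to be fully molecular for the conversion).

```python
# Numerical check of Eqn. 21: integrate Sigma = 9.0e21 (r / r_in)^-2 cm^-2
# over the disk and convert the total CO count to a gas mass.

import numpy as np

AU_CM, M_SUN = 1.496e13, 1.989e33
M_H2 = 2.0 * 1.6726e-24          # mass of an H2 molecule in grams

r_in, r_out = 0.052 * AU_CM, 1.53 * AU_CM
sigma0 = 9.0e21                  # CO molecules per cm^2 at r_in

r = np.logspace(np.log10(r_in), np.log10(r_out), 20000)
integrand = 2.0 * np.pi * r * sigma0 * (r / r_in) ** -2.0
N_CO = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))   # trapezoid rule

mass = (N_CO / 1.6e-4) * M_H2    # CO count -> H2 count -> grams
print(f"N_CO ~ {N_CO:.1e} molecules, gas mass ~ {mass / M_SUN:.1e} M_sun")
```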
This low disk mass may help explain why the planet can remain on an eccentric orbit over many orbital periods--eccentricity damping due to planet-disk interaction scales linearly with disk gas density (Artymowicz, 1992). Future detailed modeling can better evaluate the strength of eccentricity damping and explore the implications on the planet's orbital evolution. Because this disk mass estimate uses a density profile that lies just above our detected CO column, it can be seen as a lower limit. ## 4 Discussion The hot Jupiter companion CI Tau b was discovered by Johns-Krull et al. (2016) via IR radial velocity monitoring and Flagg et al. (2019) detected CO directly from the planet's atmosphere. These studies cross-validated the planet's orbital period to be 9 d. The host star's rotational period was found by Biddle et al. (2018) to be 6.6 d through analysis of the K2 lightcurves. However, the origin of these signatures was questioned by Donati et al. (2020) who claimed that both signatures can be attributed to the star itself. A follow-up analysis of the K2 photometry by Biddle et al. (2021) affirms that the signature strengths cannot be replicated by stellar activity alone. Whether CI Tau b exists as a planet remains an ongoing investigation. One piece of our results that may validate CI Tau b is the apparent 9 d period captured in the stellar accretion rate, derived from the hydrogen equivalent widths (Sec. 3.1). We also find some indirect evidence for a planet's existence (but not necessarily CI Tau b) through planet-disk interaction, where we find an inner and outer disk that have separate eccentricities and arguments of periapses (Sec. 3.2.2). Further investigations are needed to explain some other features we observed. Below we list a few that we find the most puzzling. As illustrated in Fig. 3, we observed the hydrogen Pf\(\beta\) equivalent widths to vary by 35% during the 2022 epochs (#4-12; Sec. 3.1). This variation also encompasses the values observed in epoch #3 but not #1-2. Since it is ambiguous how representative our data is of the overall behavior, it remains unclear if epochs #1-2 are atypical or not. For instance, they are around 45% from the 2022 mean and have relatively large uncertainties; statistically, it is possible that they are consistent. However, if they are meaningfully different then that may indicate changes in the star's luminosity and accretion rates. CTTSs are known to be quite variable over a wide array of time periods so CI Tau could have simply changed its luminosity or accretion rate between observation runs. Another possible explanation is that if the accretion flow is not isotropic, its direction might be related to the disk's eccentricity. Since eccentric disks driven by planet-disk interaction are known to precess over hundreds of planetary orbits (e.g., Kley and Dirksen, 2006), the direction of the accretion flow may change with it and we would observe a variable accretion luminosity as the accretion flow goes in and out of alignment with our line-of-sight. Unlike hydrogen, the \({}^{12}\)CO equivalent widths (Fig. 3) appear less variable (Sec. 3.1). In 2022 they varied by 12% and that, along with having much smaller uncertainties, leads us to believe that the earlier epochs (#1-3), which are \(>\)20% from the 2022 mean, are different. Similar to the discussion in the previous paragraph, this may be due to the disk precessing and altering the amount of emission directed along our line-of-sight. 
Changes in the disk's eccentricity was observed over many planetary orbits (Sec. 3). However, there remains the possibility, again, that the star changed its luminosity and, as such, the temperature of the disk. If this were the case, then it means that from 2008 to 2022, the disk's luminosity decreased while the star's increased. CI Tau's average emission line profile of 2022 was fitted with a disk that is constructed from two components (Fig. 4) that have their own eccentricities (Sec 3.2.2). Our derived eccentricities, which are about 0.05 (Table 4), are lower than that of CI Tau b, which is \(e=0.25\pm 0.16\). We also find that the disk has a very narrow gap if it exists; in other words, we find no evidence of gas at any radius traveling with an eccentricity as large as CI Tau b's. It is not a stable configuration for a planet to have a highly eccentric orbit and be simultaneously embedded in a near-circular disk--either the disk will damp the planet's eccentricity (e.g., Duffell and Chiang, 2015) or the planet will force the disk to become more eccentric (e.g., Bitsch et al., 2013). Given the large uncertainty in CI Tau b's eccentricity, it may be possible that its true value is only about 0.05. According to the analysis by Donati et al. (2020), it is also possible that CI Tau b does not exist, and here we are instead observing the influence of another planet with a much lower eccentricity, such as the one simulated in Sec. 3.3. ## 5 Summary and Conclusions CI Tau was observed for 9 consecutive nights with the NASA IRTF in January of 2022 (Sec. 2). This data was reduced (Fig. 1) and paired with older data taken in 2008 using Keck and 2018/2019 also with IRTF, thus giving us a total of 12 epochs (Table 2). For each epoch, we constructed the emission line profiles of the \({}^{12}\)CO ro-vibrational transitions and the hydrogen Pf\(\beta\) line (Fig. 2). We fitted CI Tau's average high-J line profile from 2022 with flat, two-dimensional disk models, where we considered both a circular model (Sec. 3.2.1) and an eccentric model with two components (Sec. 3.2.2). From these fits, we constrained the disk properties (Table 4), such as the temperature (Eqn. 10) and surface density (Eqn. 11) profiles, and the eccentricities (\(e_{\rm in}\) and \(e_{\rm out}\)) of the different components. Our main conclusions are highlighted below: * The core of the average stacked \({}^{12}\)CO line profile is blue-shifted while the wings are red-shifted (Sec. 3.2.2). This is well modeled by introducing an inner and outer disk component that are both eccentric but oppositely oriented (Fig. 4). In our best-fit model, the components are separated near \(r_{\rm b}=0.14\) au (Table 4). Both components also have their own eccentricities of about 0.05. The disk's structure may be explained by an embedded giant companion around \(r_{\rm b}\) with a similar eccentricity (Sec. 3.3). * A 9 d variability was observed in the hydrogen Pf\(\beta\) line (Fig. 3), which indicates variations in the stellar accretion rate (Sec. 3.1). A 9 d periodicity has been reported before by multiple groups, who attributed it to the presence of a giant companion (Johns-Krull et al., 2016; Biddle et al., 2021) or stellar activity (Donati et al., 2020). * The \({}^{12}\)CO lines displayed varying asymmetries over hundreds of orbits (Fig. 2). This might indicate that the inner disk around CI Tau is eccentric and precessing (Sec. 3). 
* We did not detect \({}^{12}\)CO emission from the circumplanetary disk around the proposed planet CI Tau b (Johns-Krull et al., 2016; Flagg et al., 2019; Biddle et al., 2021), and showed that we were likely not sensitive to it (Sec. 3). We also did not detect hydrogen Pf\(\beta\) emission from the planet, which allowed us to place an upper limit on the planet's accretion rate of 1\(\times 10^{-8}\) Jupiter-mass per year (Sec. 3.1). * The inner radius of the protoplanetary disk was fitted to be \(0.052\pm 0.002\) au which is consistent with the truncation radius of \(0.034\pm 0.014\) au reported by GRAVITY Collaboration et al. (2023) and may extend within the star's corotation radius of 0.069 au. * The average accretion rate of CI Tau in 2022 is 3.1\(\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\). This aligns with what was previously reported by Donati et al. (2020) (2.5\(\times 10^{-8}\)). Being potentially the youngest host of a hot Jupiter companion, the CI Tau system can serve as the testing ground for theories pertaining to planet-disk interactions. Below, we chart a few directions to further investigate this system. The CO and hydrogen equivalent widths (Table 3; Fig. 3) were observed to be markedly different in prior years. Future observations of the CI Tau system separated by a similar time span may be able to confirm and characterize this long-term behavior. Explaining it, such as whether it is caused by disk precession, would require detailed modeling. Our analysis of the \({}^{12}\)CO emissions (Sec. 3.2) utilized a simple two-dimensional slab model that does not adequately account for three-dimensional effects such as disk flaring and vertical temperature variations. Future analysis may incorporate a more rigorous disk geometry that may capture more complexities. The hydrodynamic simulation in this study (Sec. 3.3) was a proof-of-concept for how an embedded giant planet may create the disk features observed. A more rigorous numerical study may better constrain the parameters of the planet-disk system. ## 6 Acknowledgements We thank Jean-Francois Donati for a helpful discussion and providing input on the CI Tau system. This work includes data gathered at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract 80HQTR19D0030 with the National Aeronautics and Space Administration. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
2306.05562
AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs
We present AircraftVerse, a publicly available aerial vehicle design dataset. Aircraft design encompasses different physics domains and, hence, multiple modalities of representation. The evaluation of these cyber-physical system (CPS) designs requires the use of scientific analytical and simulation models ranging from computer-aided design tools for structural and manufacturing analysis, computational fluid dynamics tools for drag and lift computation, battery models for energy estimation, and simulation models for flight control and dynamics. AircraftVerse contains 27,714 diverse air vehicle designs - the largest corpus of engineering designs with this level of complexity. Each design comprises the following artifacts: a symbolic design tree describing topology, propulsion subsystem, battery subsystem, and other design details; a STandard for the Exchange of Product (STEP) model data; a 3D CAD design using a stereolithography (STL) file format; a 3D point cloud for the shape of the design; and evaluation results from high fidelity state-of-the-art physics models that characterize performance metrics such as maximum flight distance and hover-time. We also present baseline surrogate models that use different modalities of design representation to predict design performance metrics, which we provide as part of our dataset release. Finally, we discuss the potential impact of this dataset on the use of learning in aircraft design and, more generally, in CPS. AircraftVerse is accompanied by a data card, and it is released under Creative Commons Attribution-ShareAlike (CC BY-SA) license. The dataset is hosted at https://zenodo.org/record/6525446, baseline models and code at https://github.com/SRI-CSL/AircraftVerse, and the dataset description at https://aircraftverse.onrender.com/.
Adam D. Cobb, Anirban Roy, Daniel Elenius, F. Michael Heim, Brian Swenson, Sydney Whittington, James D. Walker, Theodore Bapty, Joseph Hite, Karthik Ramani, Christopher McComb, Susmit Jha
2023-06-08T21:07:15Z
http://arxiv.org/abs/2306.05562v1
# AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs ###### Abstract We present AircraftVerse, a publicly available aerial vehicle design dataset. Aircraft design encompasses different physics domains and, hence, multiple modalities of representation. The evaluation of these cyber-physical system (CPS) designs requires the use of scientific analytical and simulation models ranging from computer-aided design tools for structural and manufacturing analysis, computational fluid dynamics tools for drag and lift computation, battery models for energy estimation, and simulation models for flight control and dynamics. AircraftVerse contains 27,714 diverse air vehicle designs - the largest corpus of engineering designs with this level of complexity. Each design comprises the following artifacts: a symbolic design tree describing topology, propulsion subsystem, battery subsystem, and other design details; a STandard for the Exchange of Product (STEP) model data; a 3D CAD design using a stereolithography (STL) file format; a 3D point cloud for the shape of the design; and evaluation results from high fidelity state-of-the-art physics models that characterize performance metrics such as maximum flight distance and hover-time. We also present baseline surrogate models that use different modalities of design representation to predict design performance metrics, which we provide as part of our dataset release. Finally, we discuss the potential impact of this dataset on the use of learning in aircraft design and, more generally, in CPS. AircraftVerse is accompanied by a data card, and it is released under Creative Commons Attribution-ShareAlike (CC BY-SA) license. The dataset is hosted at [https://zenodo.org/record/6525446](https://zenodo.org/record/6525446), baseline models and code at [https://github.com/SRI-CSL/AircraftVerse](https://github.com/SRI-CSL/AircraftVerse), and the dataset description at [https://aircraftverse.onrender.com/](https://aircraftverse.onrender.com/). ## 1 Introduction Datasets of complex cyber-physical systems (CPS) are difficult to build and large CPS datasets are not publicly available. Their availability is limited for multiple reasons, such as: proprietary restrictions; difficulty in assembling the right mix of experts; and the slow manual design process due to their complexity. However, a huge opportunity exists to apply data-driven approaches to CPS once such a dataset becomes widely available. Electric Vertical Take-Off and Landing (eVTOL) aircraft represent an emerging class of CPS. The use of electrical propulsion and the growing energy density of available batteries have fueled rapid growth in eVTOL aircraft designs from food delivery/logistics [11], to the detection of sharks along highly populated beaches [17], and to air taxis. We expect this variation to only increase as battery technology continues to improve [9]. The diversity within the electric aerial vehicle design space is large due to the many choices available for selecting structural, mechanical, and electrical components. The multiphysics nature of CPS designs necessitates heterogeneity in representation, such as the use of stereolithographic (STL) files for computational fluid dynamics (CFD) analysis and the use of symbolic descriptions of motors and speed controllers for electrical analysis. Data-driven learning methods for CPS must be able to handle this diversity and multimodality of CPS designs.
AircraftVerse provides such a dataset to enable further research into data-driven methods for characterizing or designing CPS. AircraftVerse (Figure 1) contains **27,714 diverse aircraft designs** where each design is represented in multiple modalities, including a computer aided design (CAD) model and symbolic description. Each design is also accompanied with detailed evaluation results. Thus, AircraftVerse is naturally amenable to emerging neurosymbolic and other deep learning methods for generative modeling, surrogate learning and sequence-to-sequence models. Overall, our paper is structured as follows. In Section 2 we provide a summary of the few existing related datasets. We then introduce AircraftVerse in Section 3 and describe the constituent parts that make up a design. We highlight the potential of AircraftVerse in Section 4 by providing an overview of the diversity of designs and by displaying experiments on surrogate modeling, while noting that this covers just a small part of what kind of experimentation is possible with this new dataset. We then conclude in Section 5. ## 2 Related work The use of machine learning in CAD has gained significant attention, and a few datasets have been proposed in recent literature to enable development and benchmarking of machine learning approaches. The SketchGraphs dataset [26] is a collection of sketches extracted from parametric CAD models which begin as two-dimensional (2D) sketches consisting of geometric primitives (e.g., line segments, arcs) and explicit constraints between them (e.g., coincidence, perpendicularity) that form the basis for three-dimensional (3D) construction operations. This dataset has been used for generative modeling of CAD sketches [32], and other applications of learning in physical design [27, 22]. Another example of a CAD dataset that is focused on physical structure is SimJEB [31], which is a dataset of crowdsourced mechanical brackets and accompanying structural simulations. DeepCAD [33] is a dataset of 3D shapes corresponding to objects such as flanges, pipes and screws, represented as a sequence of operations used in a CAD framework to generate these shapes. Another dataset for 3D engineering shapes is the ABC dataset [18], which comprises geometric models, each defined by parametric surfaces and associated with ground truth information on the decomposition into patches. These datasets are excellent resources for their target application domains such as extrapolating 2D sketches to CAD designs, and generating mechanical parts. Figure 1: Existing CAD datasets (SketchGraphs [26], DeepCAD [33], ABC [18]) are focused on CAD for mechanical parts. Aircraft designs in AircraftVerse include CAD models as one of the modalities. A CAD model (STEP or STL) is an assembly of several components such as propellers, wings, connectors, beams, motors, batteries, and hubs. The CAD designs in AircraftVerse are more complex compared to the existing datasets. The inset magnifications of a couple of parts of one of the designs in the figure above demonstrate this complexity. In addition to CAD models, each design also includes a symbolic design tree with additional details such as propulsion and battery subsystems that are needed for performance analysis (e.g. electrical and flight dynamics analysis). AircraftVerse also contains the result from the evaluation of each design using high-fidelity scientific and engineering tools. Thus, AircraftVerse is a CPS dataset and not just a CAD dataset.
In contrast, AircraftVerse is a dataset that covers the more complex design space of electric aircraft focused on system-level CPS design and thus, complements existing CAD datasets. In Figure 1, we illustrate the complexity of the aircraft designs in AircraftVerse compared to mechanical components from the existing CAD datasets. The CAD models of an aircraft design in AircraftVerse are an assembly of several mechanical components, such as propellers, motors, wings, hub and connectors. Further, the AircraftVerse dataset includes additional description of a design beyond its CAD model, such as its propulsion subsystem and its electrical subsystem. This is crucial for creating a description that can predict the performance (flight dynamics, electrical analysis) of a design as designs with similar CAD structure can have very different performances depending on the used components. We also provide the performance of each design using detailed high-fidelity physics and engineering simulation tools to enable the use of this dataset not just for generative modeling, but also for learning surrogates and design characterization and optimization. ## 3 AircraftVerse dataset The key characteristics of AircraftVerse are as follows: * The number of designs in AircraftVerse is 27,714 making it a uniquely **large-scale CPS dataset** with high design complexity. Design curation required finding valid aircraft configurations followed by detailed simulations to evaluate its performance objectives. We have attempted to create a balanced dataset that includes a sufficient number of aircraft designs across a range of different flight performance metrics, such as hover times and maximum flight distances. Our search for designs included over a hundred thousand candidates from which we selected these 27,714 designs to ensure diversity in design and performance. Our design process itself is diversity-preserving and ensures optimization does not lead to very similar designs. * AircraftVerse represents design using **multiple modalities** that include: a symbolic design tree describing the design topology, propulsion subsystem, battery subsystem, and other design details; a Standard for the Exchange of Product (STEP) model that is a decomposable CAD model showing each part separately; and a stereolithographic (STL) CAD model that is ideal for computational fluid dynamics analysis, and the corresponding 3D point cloud for the shape of the design. * AircraftVerse uses **results from high-fidelity physics models** and an evaluation pipeline developed as a part of DARPA's Symbiotic Design of CyberPhysical System's program1 to evaluate different aircraft designs. These physics models characterize performance metrics such as maximum flight distance and hover-time. The evaluation pipeline uses a mixture of custom flight dynamics simulators [29] and commercial tools such as Creo [23]. The evaluation of a design to determine performances such as its drag, lift, maximum flight time and hover time requires a significant compute infrastructure and requires subject-matter expertise. We include these evaluation results as part of the dataset. Footnote 1: [https://www.darpa.mil/program/symbiotic-design-for-cyber-physical-systems](https://www.darpa.mil/program/symbiotic-design-for-cyber-physical-systems) * The designs in AircraftVerse exhibit a very **high degree of diversity** in their topology, the choice of energy subsystem and the choice of propulsion. 
The designs are a mixture of rotorcraft, winged aircraft and hybrids with 28% capable of vertical takeoff and landing.2 To the best of our knowledge, there is no other available corpus of such diverse aircraft designs with the design details and the results from detailed scientific and engineering evaluation. Footnote 2: We note that some aerial vehicles, such as many fixed wing aircraft, are launched (or catapulted) and therefore are not required to hover. * We include multiple **baseline surrogate models** to predict some of the key design performance metrics. Our surrogate models include a transformer encoder model that takes the symbolic design configuration as its input, a graph convolution neural network that uses the point cloud modality and an LSTM model. These baseline models are used to predict the mass, the maximum flight distance, the maximum hover-time and the presence of any structural interferences that need to be avoided for fabrication and manufacturing. **Dataset Availability.** The dataset is publicly available for free under Creative Commons Attribution-ShareAlike (CC BY-SA) license, which will enable future extension and adaptation of the dataset by others. With respect to its maintainability and long-term availability, the dataset is hosted at [https://zenodo.org/record/6525446](https://zenodo.org/record/6525446) and will be maintained by SRI International3, a non-profit research institute with over 75 years of history of contributing to society and the research community, with a proven track record of building, maintaining and distributing several datasets such as BioCyc [15], Voices [25], and open-source tools4 - some of which have been maintained for multiple decades. Footnote 3: [https://en.wikipedia.org/wiki/SRI_International](https://en.wikipedia.org/wiki/SRI_International) Footnote 4: [https://github.com/SRI-CSL](https://github.com/SRI-CSL) **Dataset Curation.** The designs in AircraftVerse are battery-powered, where the propulsion comes from the electric motors that power the propellers. Our design corpus includes a range of propellers, motors, batteries, and fixed wings. The list of components is provided in Appendix F. We use the propulsion subsystem design as an example to describe the curation of the dataset. A combination of wings and propellers (in the right topology) provides the thrust and lift needed for efficient horizontal flight and vertical take-off and landing. Our components are diverse: the propellers have different numbers of rotor blades with different diameters and pitches, the component motors have different \(K_{v}\), \(K_{m}\), \(K_{t}\) ratings, and the batteries have different peak and continuous current ratings and different voltage ratings. Consequently, the design choices allow significant diversity in structure and composition for similar performance. For example, the thrust produced by a propeller-motor pair can be related approximately to the parameters by the following formula: \(Thrust\propto(K_{v}\times Voltage)/(Pitch\times Diameter)\), where \(K_{v}\) is the motor's velocity constant, which represents the number of revolutions per minute (RPM) that the motor will produce per volt, \(Voltage\) is the voltage being applied to the motor, \(Pitch\) is the distance that the propeller would move forward in one rotation, and \(Diameter\) is the distance across the propeller.
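As a rough numerical illustration of this proportionality (a sketch only: the component values below are invented for illustration and are not entries from the AircraftVerse component corpus, which instead relies on the detailed propulsion models cited below):

```python
# Illustrative sketch of the approximate proportionality
#   Thrust ~ (Kv * Voltage) / (Pitch * Diameter)
# stated above. The pairings are hypothetical examples, not dataset components.

def thrust_figure_of_merit(kv_rpm_per_volt, voltage, pitch, diameter):
    """Relative thrust indicator (arbitrary units) from the proportionality above."""
    return (kv_rpm_per_volt * voltage) / (pitch * diameter)

pairing_a = thrust_figure_of_merit(kv_rpm_per_volt=1000, voltage=11.1, pitch=4.7, diameter=10.0)
pairing_b = thrust_figure_of_merit(kv_rpm_per_volt=1400, voltage=14.8, pitch=7.0, diameter=12.5)

# Two quite different motor/propeller/voltage pairings land on nearly the same
# figure of merit, illustrating how distinct component choices can target
# similar thrust levels.
print(f"pairing A: {pairing_a:.1f}   pairing B: {pairing_b:.1f}")
```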
Our design used more detailed models [29, 23], but this dependency illustrates how the same thrust can be produced by different combinations of propellers and motors. High \(K_{v}\) rating motors that rotate much faster can produce the same thrust with smaller propellers when compared to lower \(K_{v}\) (slower) motors with larger diameter propellers. These combinations can have different continuous and peak current requirements that would influence the battery choice. In addition to these propulsion choices, connectors such as parametric joints, hubs, and arms can be put together in flexible ways to build interesting design geometry and topologies. These can have an impact on the experienced drag, which would then influence the requirements for propulsion. Thus, the design of aircraft requires making a number of tightly coupled design choices. The performance of each design can be assessed using a pipeline comprising commercial tools such as Creo [23] and custom flight dynamics model (FDM) [29, 2]. Each aircraft is also assessed on controllability (existence of trim states - where the aircraft remains stable in the absence of environment perturbations) at different speeds using the FDM. In particular, the FDM evaluates whether the aerial vehicle can fly at a specific velocity through optimization schemes that check if the translational and rotational accelerations can be driven to zero for a given design during vertical take-off and horizontal flight (more details in Appendix C). Manually finding a large number of feasible designs is very challenging and prohibitively time-consuming. Instead, we use a learning-based design approach [5]. We create a number of designs using a procedural aircraft generator that uses heuristics provided by domain experts, such as limiting the complexity of produced designs to have at most 16 propellers and at most 12 wings, using design motifs and symmetries that help with controllability of the aircraft. A transformer-based model is trained on designs with good flight characteristics to act as a filter in the future generation process (Appendix E, [5]). Each of the final 27,714 designs included in the dataset is run through the detailed scientific and engineering models [29, 2, 23] to generate the metadata describing the design performance and characteristics. **Design Structure and Representation.** Each design in the dataset consists of the following:5 * design_tree.json: The design tree describes the design topology, choice of propulsion and energy subsystems. The tree also contains continuous parameters such as wing span, wing chord, and the lengths of propeller arms and connectors. In our dataset, we also include a preorder traversal of the design tree and store this as design_seq.json to facilitate the use of sequence-friendly model architectures. In addition, we include design_low_level.json, which is a more fine-grained engineering representation of the design. This fine-grained representation includes significant repetition that is avoided in the abstract tree representation through the use of symmetry (such as specifying that the same arm structure is repeated six times around a hub). See the pictorial representation of the design tree in Figure 2 that demonstrates a simple example of such a symmetry. * Geom.stp, cadfile.stl, and pointCloud.npy: CAD design for the aerial vehicle in compositional STEP format (ISO 10303 standard), its stereolithographic STL file, and a generated pointcloud of 10,000 points from the CAD representation. 
* output.json and trims.npy: Summary files containing the vehicle's performance metrics such as maximum flight distance, maximum hover time, flight distance at maximum speed, maximum current draw, and mass. The trims.npy contains the [_Distance, Flight Time, Pitch, Control Input, Thrust, Lift, Drag, Current, Power_] at each evaluated trim state (velocity) of the aircraft. Designs with more trim states, and with contiguous trim states, are preferable as these designs would be relatively easier to control even when the environment is noisy. Thus, both the output.json and trims.npy contain the evaluation of the performance of each design. See Figure 2 for an example of these evaluation indicators. **Design Example.** The design tree used for aircraft designs reflects the hierarchical nature of CPS designs. Figure 2 shows an example design. It has a _4-way central hub_ component. Each of the four arms of the hub can have a propulsion subsystem attached to it. We identify this subsystem as a _MainSegment_ which can be implemented via different choices (subtrees expanding _MainSegment_) such as _BendSegment_, _PropArm_, _ExtendedPropArm_, _WingArm_, and so on. Each of these choices describes the nature of the propulsion subsystem, such as whether we are using propeller or wing, and whether the arm making the connection is a simple connector or a composition of connectors. Further, some of these subsystems can be recursive, allowing fractal growth of the design. For example, an arm itself might contain another segment with a hub having a number of arms. Our symbolic representation of design also enables exploitation of symmetry to compress the design description, and inclusion of expert knowledge in the design. For example, if we want all four arms of a hub to contain the same subsystem, then we only need to specify it once (as a single child in the design tree) and use a symmetry tag over the hub, for example, _ConnectedHub4_Sym_ denotes a hub with four connections - each having the same subsystem. In the case of Figure 2, a propulsion subsystem comprises a propeller, motor, and a flange. Figure 2: An illustration of all the key file components that make up an aircraft design. The design_tree.json captures all aspects of the design (e.g. 4-way hub) that appears in both the cadfile.stl and the Geom.stp. Additional files of design_seq.json and design_low_level.json are directly derived from the design tree. The performance of the aerial vehicle is displayed in both the performance file labeled output.json and in the trims.npy. Further, the hub also connects to a fuselage that contains a battery subsystem, which can consist of a single or dual battery configuration, along with additional electronics and their position within the fuselage. An explicit example of this is provided in Appendix G. Additionally, the full corpus of options as included in Appendix F is also provided as a Python dictionary as part of AircraftVerse. **Design evaluation.** For each design, we use state-of-the-art tools [23; 2] to compute the physical characteristics, such as the mass and the component interferences, as well as to render the 3D CAD models (STEP and STL files that are included in AircraftVerse). These details are then passed to the FDM [29] for evaluating flight dynamics (more details in Appendix C).
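As a minimal sketch of how one design's artifacts and evaluation outputs might be read, assuming a per-design directory containing the files named above (the directory path and the exact JSON key names are assumptions for illustration; the released data card is authoritative):

```python
import json
from pathlib import Path

import numpy as np

# Minimal sketch of reading one design's artifacts; the directory name below is
# hypothetical, and key names inside output.json should be checked against the
# released data card.
design_dir = Path("AircraftVerse/designs/design_0001")

with open(design_dir / "design_tree.json") as f:
    design_tree = json.load(f)      # symbolic design tree: topology, components

with open(design_dir / "output.json") as f:
    metrics = json.load(f)          # performance summary from the evaluation pipeline

trims = np.load(design_dir / "trims.npy")        # per-trim-state quantities listed above
points = np.load(design_dir / "pointCloud.npy")  # point cloud sampled from the CAD model

print("hover time:", metrics.get("Hover_Time"))
print("max flight distance:", metrics.get("Max_Distance"))
print("trim table shape:", trims.shape, "point cloud shape:", points.shape)
```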
The dynamics model considers the aircraft velocity and its roll, pitch and yaw rotations, along with the electrical state such as the power drawn from the battery and motor current, which in turn determine the torque on the propeller and its generated thrust. The dynamics model summarizes the impact of different forces such as gravity, drag, propeller lift and wing lift, and includes scientific models for conversion of electrical energy into mechanical energy in the propulsion subsystem. From these evaluations, we include Batt_amps_ratio_MFD, Batt_amps_ratio_MxSpd that correspond to the ratio of maximum current drawn by the design compared to the maximum current rating of the battery at maximum flight distance and maximum speed, respectively. These indicate the robustness of the battery subsystem of the design. We also include Mot_amps_ratio_MFD, Mot_amps_ratio_MxSpd, Mot_power_ratio_MFD, Mot_power_ratio_MxSpd that represent the ratio of the current/power in the motor divided by the maximum allowed current/power corresponding to the maximum flight distance or maximum speed of the vehicle. These indicate the robustness of the propulsion subsystem of the design, its endurance, and its agility. Finally, we include design metrics such as the flight distance at maximum speed (Distance_MxSpd), maximum flight distance (Max_Distance), and the maximum hover time (Hover_Time). ## 4 Experiments: diversity, multimodality and baseline models **Diversity.** AircraftVerse includes a diverse set of aircraft designs (samples shown in Figure 3), and the multimodal representation makes this dataset relevant to a variety of challenge problems in machine learning. The components of a design in AircraftVerse are selected from a large component corpus with hundreds of choices of propellers and motors (Appendix F). In addition, each discrete choice of component comes with its own set of attributes that fall in their own ranges (e.g. motors have a large set of parameters such as motor armature-winding resistance and inductance, mass, counter-electromotive force constant, and motor torque constant). Figure 3: Diverse vehicle designs in AircraftVerse with different structures and with different performances. Some designs can hover in place. Hover time is in seconds and flight distance is in meters. Adding to the overall complexity, there are also the continuous parameters. For example, structural components such as connectors have their radius and length as continuous parameters. In Appendix F, we identify the attributes of the key components in an aerial vehicle design and whether their values are fixed or variable within a range. Some characteristics of the design diversity are captured in Figure 4. The figure highlights some of our curation choices. First, the majority of designs have an even number of propellers due to the symmetry built into our tree structure definition. Second, our dataset precludes designs without propellers and favors designs with fewer wings. Finally, while a significant number of designs can take off and achieve horizontal flight (with a large variation in ability), one of the challenges in building a diverse dataset comes at the expense of many designs failing to have the hovering capability or other eVTOL flight performance metrics. However, these designs are useful for learning potential rules as to why these designs have poor characteristics compared to others, so we also include these in the dataset. While structural differences lead to visually striking diversity, we also highlight that design diversity has other aspects, such as the use of different components within the same design topology. Vehicle designs with the same topology can perform very differently. Figure 5 contains four symmetric quadcopters with the same topology (same design tree structure). However, their performance metrics differ significantly, with the maximum flight distance differing by a factor of over 2x. Additionally, the electronics of the rightmost quadcopter varies from the other three in that it has two batteries rather than one. Thus, the performance of designs with similar topologies cannot be predicted just from the CAD design, but requires the symbolic design description in design_tree.json. Figure 4: The designs in AircraftVerse exhibit diverse performance with respect to characteristics such as maximum flight distance and hover-time. They also have very different physical characteristics such as mass, number of propellers, and number of wings. Figure 5: Diverse aerial vehicle designs in AircraftVerse with same topology structure but different motor/propeller/battery choices leading to diverse design performance. **Multimodality.** AircraftVerse uses multiple modalities to represent an aircraft design, as illustrated earlier in Figure 2. We would not expect the performance of electric aerial vehicle designs to be accurately predicted on the basis of only one of the modalities (e.g. the 3D CAD model). Recent work on vision-language models [7; 14; 1; 19; 6] would be especially relevant to large CPS datasets such as AircraftVerse. However, unlike in the vision-language scenario, AircraftVerse has stricter metadata that enforces physical and electrical constraints. The multi-modality of AircraftVerse makes it applicable to a wide class of machine learning models. AircraftVerse includes the following modalities: * Structured sequential data: The success of transformer-based sequence models used in natural language [28; 8; 24; 3; 20] has driven their adoption and extension to structured data such as computer programs. The sequential data from the design trees in AircraftVerse is an interesting use case for transformer models. Further, the designs in AircraftVerse are more structured than natural language text as the design components need to satisfy physical and electrical constraints6. Footnote 6: See Appendix F for a definition of the structure of our design trees and sequences. * 3D CAD models: There is a growing interest in using machine learning for CAD design [26; 32; 22; 31; 33; 18]. The 3D CAD models (STL and STEP files) in AircraftVerse provide a large dataset with complex 3D shapes. The metadata associated with the designs can be used to train and evaluate methods that predict the physics metrics from the CAD models. * 3D Pointcloud: The use of machine learning for learning generative models or manipulating or segmenting 3D point clouds has also received significant attention. AircraftVerse includes 3D point clouds extracted from the STL and STEP files. Thus, this dataset with the pointclouds can be used as a benchmark for 3D point cloud classification (e.g., presence or absence of interference) [34; 4; 12] and regression (e.g., predicting flight characteristics). **Baseline Models.** We describe baseline models applied to multiple modalities of AircraftVerse. We use a transformer encoder (T. Enc.)
[28] model and an LSTM [13] model to predict a set of design performance characteristics of an aircraft from its symbolic design tree description. Note that even these symbolic descriptions of a design contain information from multiple modalities, as the sequences include electrical, mechanical, and topological information. We also test out the 3D modality of the design representation by applying a graph convolutional neural network (GCNN) [16; 30] to the point clouds. For the sequences, we use the design representation in design_seq.json and use our custom-built tokenizer to convert each token of a design sequence from the symbolic representation (e.g. {'node_type': 'ConnectedHub3\(2\)1'}) into a tensor. For our embedding, we assign one-hot encoding for the keys and an additional one-hot encoding for the values. For keys that require floats as values (e.g. 'armLength') we ensure that there is a class that corresponds to the float in the value embedding, as well as appending the value of the float to the tensor. Each token of the sequence is a 749-element vector corresponding to the concatenation of 43 classes of keys, 673 classes of values, 32 possible attributes of electrical/mechanical components, and a final dimension for float values. We provide our tokenization code as part of the dataset release, as well as the baseline models. Further architectural details of the models are presented in Appendix D. **Results.** The results of predicting the performance statistics from the design sequences are presented in Table 1. This table shows how the sequential design description, containing specific component information such as mass and electrical information, is a useful modality for certain metrics such as the ability of a design to fly. However, the baseline GCNN model is better at inferring structural interferences between components as it directly incorporates 3D information. Future approaches that combine the 3D structural information and symbolic design descriptions may be necessary, and we have demonstrated here that the AircraftVerse dataset includes all of these informatic modalities. Figure 6 shows both the receiver operating characteristic (ROC) curve and precision-recall curve for the three models. Again, this highlights the superior performance of the sequential information in estimating high-level CPS performance metrics such as the ability to hover. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Approach & Data & Hover & Hover & Mass & Flight & Interference \\ & & & Time & & Distance & \\ & & (Acc. \(\uparrow\)) & (R\({}^{2}\uparrow\)) & (R\({}^{2}\uparrow\)) & (R\({}^{2}\uparrow\)) & (F1 Score \(\uparrow\)) \\ \hline T. Enc. & Sequence & \(\mathbf{0.9004}\) & \(\mathbf{0.6778}\) & \(0.9964\) & \(\mathbf{0.6900}\) & \(0.8270\) \\ LSTM & Sequence & \(0.8838\) & \(0.6189\) & \(\mathbf{0.9974}\) & \(0.3467\) & \(0.8184\) \\ GCNN & Point Cloud & \(0.8380\) & \(0.3936\) & \(0.8888\) & \(0.4486\) & \(\mathbf{0.9758}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Specification prediction using baseline models. ## 5 Conclusion We have introduced a new multi-modal CPS dataset that will enable researchers to explore a new interdisciplinary area of application for machine learning. We emphasize the richness of AircraftVerse, where we provide 27,714 aircraft designs, each with associated metadata summarizing the performance of the design.
We highlight that our experiments and baseline models have only touched the surface of the different approaches and problems that can be explored with this dataset due to the availability of all the metadata associated with each design. In addition to the material presented here, we also include an extensive supplementary materials section as well as a website: [https://aircraftverse.onrender.com/](https://aircraftverse.onrender.com/), a GitHub repo: [https://github.com/SRI-CSL/AircraftVerse/](https://github.com/SRI-CSL/AircraftVerse/), and the dataset release at [https://zenodo.org/record/6525446](https://zenodo.org/record/6525446). ## Acknowledgments and Disclosure of Funding This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-20-C-0002. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.
2307.12337
Minimal chaotic models from the Volterra gyrostat
Low-order models obtained through Galerkin projection of several physically important systems (e.g., Rayleigh-B\'enard convection, mid-latitude quasi-geostrophic dynamics, and vorticity dynamics) appear in the form of coupled gyrostats. Forced dissipative chaos is an important phenomenon in these models, and this paper introduces and identifies 'minimal chaotic models' (MCMs), in the sense of having the fewest external forcing and linear dissipation terms, for the class of models arising from an underlying gyrostat core. The identification of MCMs reveals common conditions for chaos across a wide variety of physical systems. It is shown here that a critical distinction is whether the gyrostat core (without forcing or dissipation) conserves energy, depending on whether the sum of the quadratic coefficients is zero. The paper demonstrates that, for the energy-conserving condition of the gyrostat core, the requirement of a characteristic pair of fixed points that repel the chaotic flow dictates placement of forcing and dissipation in the minimal chaotic models. In contrast if the core does not conserve energy, the forcing can be arranged in additional ways for chaos to appear in the subclasses where linear feedbacks render fewer invariants in the gyrostat core. In all cases, the linear mode must experience dissipation for chaos to arise. The Volterra gyrostat presents a clear example where the arrangement of fixed points circumscribes more complex dynamics.
Ashwin K Seshadri, S Lakshmivarahan
2023-07-23T14:12:42Z
http://arxiv.org/abs/2307.12337v2
# Minimal chaotic models from the Volterra gyrostat
###### Abstract Low-order models obtained through Galerkin projection of several physically important systems (e.g., Rayleigh-B\(\acute{e}\)nard convection, mid-latitude quasi-geostrophic dynamics, and vorticity dynamics) appear in the form of coupled gyrostats. Forced dissipative chaos is an important phenomenon in these models, and this paper considers the minimal chaotic models, in the sense of having the fewest external forcing and linear dissipation terms, arising from an underlying gyrostat core. It is shown here that a critical distinction is whether the gyrostat core (without forcing or dissipation) conserves energy, depending on whether the sum of the quadratic coefficients is zero. The paper demonstrates that, for the energy-conserving case of the gyrostat core, the requirement of a characteristic pair of fixed points that repel the chaotic flow dictates placement of forcing and dissipation in the minimal chaotic models. In contrast, if the core does not conserve energy, the forcing can be arranged in additional ways for chaos to appear, especially for the cases where linear feedbacks render fewer invariants in the gyrostat core. In all cases, the linear mode must experience dissipation for chaos to arise. Thus, the Volterra gyrostat presents a clear example where the arrangement of fixed points circumscribes more complex dynamics. ## 1 Introduction Forced dissipative chaos appears in many climatic and geophysical flows (_Howard and Krishnamurti_ (1986); _Swart_ (1988); _Tong_ (2009)), with many well-known dynamical systems combining effects of forcing as well as dissipation (_Lorenz_ (1960); _Hide_ (1994)). A prominent example involves the special projection of Rayleigh-B\(\acute{e}\)nard convection onto 3 modes, with one momentum and two thermal components (_Lorenz_ (1963)). One of the earliest examples of chaos in 3-dimensional flows, the forcing in this model comes from an external thermal gradient driving dynamics away from equilibrium, while dissipation appears in both momentum and temperature dynamics. This model simplified an earlier low-order model derived by Saltzman (_Saltzman_ (1962)), and is derived from governing equations of convection in a fluid of uniform depth forced by an external thermal gradient. Since the model approximates incompressible flow in two dimensions, the equations describe streamfunction evolution (in a single mode), in addition to two temperature modes evolving nonlinearly (_Lorenz_ (1963)). The route to chaos in this forced-dissipative model has been studied extensively (_Sparrow_ (1982)) and involves a sequence of bifurcations that are initiated by destabilization of a pair of fixed points, eventually giving rise to a strange attractor.
Many other examples of nonlinear flows in geophysics, such as wave-mean flow interactions in mid-latitudes in the context of quasi-geostrophic dynamics (_Swart_ (1988)), vorticity dynamics (_Lorenz_ (1960); _Charney and DeVore_ (1979)), convection in shear flows (_Howard and Krishnamurti_ (1986); _Thiffeault and Horton_ (1996); _Gluhovsky and Tong_ (1999)), as well as flows in electrically conducting fluids (_Kennett_ (1976); _Hide_ (1994)), have yielded low-order models admitting complex evolution. The governing equations in such systems have generally been discretized using the Galerkin projection method (_Holmes et al._ (2012)). A general difficulty with such model reductions has been that the resulting equations do not necessarily retain the invariants of the governing equations in the limit without any external forcing or dissipation (_Gluhovsky et al._ (2002); _Gluhovsky_ (2006); _Thiffeault and Horton_ (1996)). Several authors have considered the difficulties that can arise if the invariants are not held by truncated equations, and maintaining such invariants is important to avoid nonphysical numerical dissipation, preserve analogous energy flows in the truncated equations, and avoid spurious divergent solutions (_Thiffeault and Horton_ (1996)). It has been shown that the Volterra gyrostat can naturally form the building block of Galerkin projections of the governing partial-differential equations, while maintaining energy conservation in the unforced and dissipationless limit (_Oboukhov and Dolzhansky_ (1975); _Thiffeault and Horton_ (1996); _Gluhovsky et al._ (2002)). Motivated by early studies pointing to the importance of modular approaches to constructing low order models (LOMs) from the governing equations (_Oboukhov and Dolzhansky_ (1975)), as well as the role of systematic approaches for ensuring the maintenance of invariants within the conservative core (_Thiffeault and Horton_ (1996); _Gluhovsky and Agee_ (1997); _Gluhovsky and Tong_ (1999)), more recent studies have not only expanded on the earlier approaches but also exemplified the ideas (_Gluhovsky_ (2006); _Lakshmivarahan et al._ (2006); _Lakshmivarahan and Wang_ (2008a); _Tong and Gluhovsky_ (2008)). Many examples from these domains have been identified that can be described in terms of systems of coupled gyrostats (_Gluhovsky and Agee_ (1997); _Gluhovsky and Tong_ (1999); _Gluhovsky et al._ (2002); _Gluhovsky_ (2006); _Tong and Gluhovsky_ (2008)). Furthermore, where there exist quadratic invariants such as kinetic energy or the squared angular momentum, these are maintained in the resulting truncated equations as well. Owing to such properties, it is of widespread importance to study dynamics of models arising from the Volterra gyrostat. An important generalization of the Volterra gyrostat involves the inclusion of nonlinear feedback between modes (_Lakshmivarahan and Wang_ (2008b)). This paper considers chaos in models with forcing and dissipation added to the equations of the Volterra gyrostat having linear feedback terms. The Volterra gyrostat is a three-dimensional volume-conserving flow with a skew-symmetric structure of linear feedbacks and nonlinear interactions between modes. To form the building blocks of LOMs, it has been convenient to transform the original equations written by Volterra through a smooth change of variables (_Gluhovsky and Tong_ (1999)). 
As a result, in general, the building blocks of these LOMs have two invariants, analogous to the conservation of kinetic energy and angular momentum in the physical gyrostat. Each invariant confines the dynamics to a two-dimensional surface, and their intersection gives rise to oscillatory dynamics for the gyrostat core of these models. With inclusion of forcing and dissipation (F&D), there are no longer any quadratic invariants, and thus F&D can generate higher dimensional dynamics, including chaos, when two invariants exist in the gyrostatic core. In the original gyrostat equations, the sum of the quadratic coefficients is zero, a property rooted in the kinetic energy conservation of the physical system (_Gluhovsky and Tong_ (1999)). It is remarkable that this constraint leads to not one, but two, quadratic invariants in the gyrostat core (_Seshadri and Lakshmivarahan_ (2023)). As a result, three-dimensional systems where this constraint holds require forcing and dissipation to be present for chaos to appear. More generally, a simple modification to the gyrostat's quadratic coefficients (even without F&D being present) can reduce the number of quadratic invariants (_Seshadri and Lakshmivarahan_ (2023)). In particular, it was previously shown that if the sum of quadratic coefficients is nonzero, the gyrostat does not conserve energy and then the number of invariants depends on the number of linear feedbacks (_Seshadri and Lakshmivarahan_ (2023)). For example, if there are three distinct linear feedbacks, then there are no quadratic invariants in the gyrostat without a zero-sum of quadratic coefficients, and such models can admit chaotic dynamics even without F&D (_Seshadri and Lakshmivarahan_ (2023)). While maintaining other features of the gyrostatic models such as conservation of volumes in phase space, such cores can also naturally appear in Galerkin projections. Depending on the number of quadratic and linear terms present, _Gluhovsky and Tong_ (1999) have shown that the Volterra gyrostat can be specialized into nine different subclasses by setting various combinations of parameters to zero. Different subclasses constitute different linear and nonlinear interactions between modes and together describe the different types of gyrostatic cores. The number of quadratic invariants in the gyrostatic core can be expected to influence the ways in which chaos can be produced due to F&D effects. Considering the combined effects of forcing and dissipation, we identify minimal chaotic models derived from the Volterra gyrostat. Prior studies of chaos with F&D do not take energy conservation into account, but we show that this property makes an important difference. We first consider the possibility of chaos due to F&D without the energy conservation constraint, where we must distinguish different cases by merely identifying those equations where nonzero F&D must arise. Given the three components of the vector field, we obtain a possible \(2^{6}=64\) cases for placement of F&D, out of which \(2\times 2^{3}=16\) cases have either no forcing or no dissipation (or neither) in any of the equations. We need only consider the remaining 48 cases for the presence of chaos, and simulate ensembles to sample the parameter space for each of these cases. Upon listing all the chaotic cases, we note that there often exist cases that are proper subsets, containing some (but not all) of the forcing and dissipation terms. These proper subsets are defined as "minimal chaotic models (MCMs)". 
We identify all the MCMs for the gyrostat having two nonlinear terms. There could be more than one MCM, corresponding to distinct proper subsets. The significance of these MCMs lies not only in their specific arrangements of forcing and dissipation, but also in common features across different subclasses of the gyrostat. Following this, we consider the effects of whether energy is conserved in the gyrostat's core, which influences where forcing must be placed for chaos to appear in the equations. ## 2 Models and methods Volterra's equations for the gyrostat \[\mathcal{K}_{1}^{2}\dot{y}_{1} =\left(\mathcal{K}_{2}^{2}-\mathcal{K}_{3}^{2}\right)y_{2}y_{3}+h_{2}y_{3}-h_{3}y_{2}\] \[\mathcal{K}_{2}^{2}\dot{y}_{2} =\left(\mathcal{K}_{3}^{2}-\mathcal{K}_{1}^{2}\right)y_{3}y_{1}+h_{3}y_{1}-h_{1}y_{3}\] \[\mathcal{K}_{3}^{2}\dot{y}_{3} =\left(\mathcal{K}_{1}^{2}-\mathcal{K}_{2}^{2}\right)y_{1}y_{2}+h_{1}y_{2}-h_{2}y_{1} \tag{1}\] with \(y_{i}\), \(i=1,2,3\), being the angular velocity of the carrier body, \(\mathcal{K}_{i}^{2}=I_{i}\) the principal moments of inertia of the gyrostat, and \(h_{i}\) the fixed angular momenta of the rotor relative to the carrier, are transformed smoothly (_Gluhovsky and Tong_ (1999); _Seshadri and Lakshmivarahan_ (2023)) using \(\mathcal{K}_{i}y_{i}=x_{i}\) and upon defining new parameters \[p =\mathcal{K}_{2}^{2}-\mathcal{K}_{3}^{2},q=\mathcal{K}_{3}^{2}-\mathcal{K}_{1}^{2},\text{and }r=\mathcal{K}_{1}^{2}-\mathcal{K}_{2}^{2}\] \[a =\mathcal{K}_{1}h_{1},b=\mathcal{K}_{2}h_{2},\text{and }c=\mathcal{K}_{3}h_{3} \tag{2}\] into the system \[x_{1}^{\prime} = px_{2}x_{3}+bx_{3}-cx_{2}\] \[x_{2}^{\prime} = qx_{3}x_{1}+cx_{1}-ax_{3}\] \[x_{3}^{\prime} = rx_{1}x_{2}+ax_{2}-bx_{1}, \tag{3}\] that naturally appears in modular form in many LOMs. Here \({}^{\prime}\) denotes \(d/ds\), while \({}^{\cdot}\) is \(d/dt\), where \(t=K_{1}K_{2}K_{3}s\). Henceforth we shall work exclusively with the model in Eq. (3) and denote time as appearing there as \(t\). We shall refer to Eq. (3) as the gyrostat core. The resulting flow preserves volumes in phase space, as the trace of the Jacobian is zero. Broadly we must distinguish two types of conditions for the gyrostat core: * With \(p+q+r=0\) following Eq. (2), the model in Eq. (3) conserves kinetic energy as well as squared angular momentum and the solutions are oscillatory for all initial conditions (_Seshadri and Lakshmivarahan_ (2023)). * In contrast for \(p+q+r\neq 0\), where the model does not have a direct analogue to the physical gyrostat, the number of invariants depends on the number of nonzero linear coefficients \((a,b,c)\), with zero, one, and two invariants for three, two, or fewer nonzero coefficients respectively. Only with all of \(a,b,c\neq 0\) in the absence of energy conservation does the model in Eq. (3) admit chaos as a result of no invariants being present (_Seshadri and Lakshmivarahan_ (2023)). Despite the necessity of non-conservation of energy from \(p+q+r\neq 0\) for chaos in the gyrostat core, such a distinction is not usually made for chaos in the presence of F&D. As is well known, counterparts of Eq. (3) with F&D can present chaos even when energy conservation is present and the gyrostat core has periodic dynamics (e.g., _Lorenz_ (1963)).
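As a small numerical illustration of this distinction (a sketch, not code from the paper; the parameter values are arbitrary choices), one can integrate the gyrostat core in Eq. (3) and monitor \(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\), which stays constant when \(p+q+r=0\) and generally does not otherwise:

```python
import numpy as np
from scipy.integrate import solve_ivp

def gyrostat_core(t, x, p, q, r, a, b, c):
    """Right-hand side of the gyrostat core, Eq. (3)."""
    x1, x2, x3 = x
    return [p * x2 * x3 + b * x3 - c * x2,
            q * x3 * x1 + c * x1 - a * x3,
            r * x1 * x2 + a * x2 - b * x1]

def energy_drift(p, q, r, a=1.0, b=0.5, c=0.3, x0=(0.4, -0.2, 0.7), t_end=20.0):
    """Maximum excursion of x1^2 + x2^2 + x3^2 along one orbit."""
    sol = solve_ivp(gyrostat_core, (0.0, t_end), x0,
                    args=(p, q, r, a, b, c), rtol=1e-10, atol=1e-12)
    energy = np.sum(sol.y ** 2, axis=0)
    return np.max(np.abs(energy - energy[0]))

# Energy-conserving core (p + q + r = 0): drift stays at solver-tolerance level.
print("p+q+r = 0 :", energy_drift(p=1.5, q=-2.0, r=0.5))
# Non-conserving core (p + q + r != 0): the same quantity is no longer an invariant.
print("p+q+r != 0:", energy_drift(p=1.5, q=-2.0, r=1.0))
```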
In this paper we consider models with F&D \[x_{1}^{\prime} = px_{2}x_{3}+bx_{3}-cx_{2}-\epsilon_{1}x_{1}+F_{1}\] \[x_{2}^{\prime} = qx_{3}x_{1}+cx_{1}-ax_{3}-\epsilon_{2}x_{2}+F_{2}\] \[x_{3}^{\prime} = rx_{1}x_{2}+ax_{2}-bx_{1}-\epsilon_{3}x_{3}+F_{3} \tag{4}\] with \(\epsilon_{i}\geq 0\), \(i=1,2,3\) and \(F_{j}\in\mathbb{R}\), \(j=1,2,3\). First, we shall sample from the parameter space without imposing the constraint \(p+q+r=0\), wherein our goal will be to identify chaotic dynamics with sparse inclusion of forcing and dissipation, where as few of the \(\epsilon_{1},\epsilon_{2},\epsilon_{3},F_{1},F_{2},F_{3}\) are nonzero as possible. _Gluhovsky and Tong_ (1999) have identified special cases ("subclasses") of Eq. (3), by specializing the quadratic and linear coefficients. With the energy conservation constraint that is present throughout their analysis, there must be at least two nonzero \(p,q,r\), otherwise the model is linear, so without loss of generality they assume that \(p,q\neq 0\). Further restricting various combinations of linear coefficients to be zero gives nine subclasses in addition to the general case with nonzero parameters. We examine these subclasses for the role of F&D in producing chaos, considering only those subclasses with two (but not three) quadratic terms (subclasses \(1-4\) of _Gluhovsky and Tong_ (1999), with \(r=0\) in Eq. (4)).1 This restriction gives a cubic equation for the steady states, as shown below, making these subclasses amenable to a common approach. Footnote 1: We have also omitted the degenerate cases in subclasses \(8-9\), as defined by _Gluhovsky and Tong_ (1999), for which the dynamics is two-dimensional. For each of the subclasses \(1-4\) in _Gluhovsky and Tong_ (1999), we distinguish different cases according to where forcing and dissipation arise. Thus for each subclass, we can have either \(\epsilon_{i}=0\) or \(\epsilon_{i}>0\) and likewise \(F_{j}=0\) or \(F_{j}\neq 0\), for \(i=1,2,3\) and \(j=1,2,3\), for a total of \(4^{3}=64\) cases that differ in whether each equation has dissipation, forcing, both or neither. Eight of these cases have no dissipation in the model and eight have no forcing, and we are left with \(48\) cases having nonzero forcing as well as dissipation, which must be evaluated for the possibility of chaos. Additionally, each case has parameters \((p,q,a,b,c)\) as well as values of \(\left(\epsilon_{i},F_{j}\right)\) that must be varied \((r=0)\). For each of these \(48\) cases of each of the \(4\) subclasses with two nonlinear terms, we generate a large (as much as \(10\)-dimensional) Latin hypercube sample to vary model parameters \((p,q,a,b,c)\), forcing and dissipation \(\left(\epsilon_{i},F_{j}\right)\), and initial conditions \((x_{10},x_{20},x_{30})\). Where dissipation appears in more than one mode, i.e. \(\epsilon_{i},\epsilon_{j}>0\) for \(i\neq j\) it is assigned the same value \(\epsilon_{i}=\epsilon_{j}=\epsilon\) in the ensemble, and likewise with the value of the forcing terms, since the main goal is to identify MCMs. For each member of the sample, we integrate from the corresponding initial conditions \((x_{10},x_{20},x_{30})\) a \(12\)-dimensional system describing the state-variables in Eq. (3) as well as the evolution of the \(3\times 3\) initial condition sensitivity matrix \(D_{x_{0}}\left(t\right)\) that allows us to evaluate the \(3\) Lyapunov exponents of the model.
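A sketch of this augmented 12-dimensional integration (state plus sensitivity matrix) is given below; it is not the authors' code, and the parameter and F&D values are placeholders chosen only to show the structure of the calculation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs_eq4(x, p, q, r, a, b, c, eps, F):
    """Right-hand side of the forced-dissipative model, Eq. (4)."""
    x1, x2, x3 = x
    return np.array([p * x2 * x3 + b * x3 - c * x2 - eps[0] * x1 + F[0],
                     q * x3 * x1 + c * x1 - a * x3 - eps[1] * x2 + F[1],
                     r * x1 * x2 + a * x2 - b * x1 - eps[2] * x3 + F[2]])

def jac_eq4(x, p, q, r, a, b, c, eps):
    """Jacobian of Eq. (4), used for the variational (sensitivity) equations."""
    x1, x2, x3 = x
    return np.array([[-eps[0],      p * x3 - c,  p * x2 + b],
                     [q * x3 + c,  -eps[1],      q * x1 - a],
                     [r * x2 - b,   r * x1 + a, -eps[2]]])

def augmented_rhs(t, z, p, q, r, a, b, c, eps, F):
    """12 ODEs: the state x(t) and the 3x3 sensitivity matrix D(t) = dx(t)/dx0."""
    x, D = z[:3], z[3:].reshape(3, 3)
    dD = jac_eq4(x, p, q, r, a, b, c, eps) @ D
    return np.concatenate([rhs_eq4(x, p, q, r, a, b, c, eps, F), dD.ravel()])

# Placeholder parameters for one sample of one case (subclass-like setting r = 0);
# these are NOT values from the paper's ensembles, and some samples are unbounded.
pars = (1.0, -0.6, 0.0, 0.2, 0.1, 0.3, (0.5, 0.5, 1.0), (5.0, 0.0, 0.0))
z0 = np.concatenate([[0.1, 0.0, 0.0], np.eye(3).ravel()])
sol = solve_ivp(augmented_rhs, (0.0, 50.0), z0, args=pars, rtol=1e-8, atol=1e-10)

T, D_T = sol.t[-1], sol.y[3:, -1].reshape(3, 3)
if np.all(np.isfinite(D_T)):
    # Exponent estimates from the singular values of D(T), equivalent to half the
    # log of the eigenvalues of M(T) = D^T D described next. For long horizons,
    # periodic QR re-orthonormalization of D is advisable to avoid overflow.
    sigma = np.linalg.svd(D_T, compute_uv=False)
    print("Lyapunov exponent estimates:", np.log(sigma) / T)
else:
    print("orbit or sensitivities diverged for this parameter sample")
```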
Lyapunov exponents are computed using a standard approach, based on singular value decomposition (SVD) of the matrix \(M\left(t\right)\) defined as \(M\left(t\right)=D_{x_{0}}\left(t\right)^{T}D_{x_{0}}\left(t\right)\). The matrix \(D_{x_{0}}\left(t\right)\in\mathbb{R}^{3\times 3}\) consists of elements \(D_{x_{0}}^{i,j}\left(t\right)=\partial x_{i}\left(t\right)/\partial x_{j0}\) describing forward sensitivities to perturbation in the initial condition (_Pikovsky and Politi_ (2016)). This matrix is initialized to the \(3\times 3\) identity matrix at \(t=0\). We integrate in time for the state \((x_{1}\left(t\right),x_{2}\left(t\right),x_{3}\left(t\right))\) as well as the nine elements of \(D_{x_{0}}\left(t\right)\), from \(t=0\) to \(t=5000\). For a given case, once the \(3\) Lyapunov exponents are estimated from the SVD of matrix \(M\left(t=5000\right)\), and the largest Lyapunov exponent (LLE) is found for each of the samples, we identify the most unstable sample among those whose LLE is positive. In doing so, we consider only those samples whose evolution is bounded and for which the LLE calculation does not diverge. Many combinations of forcing and dissipation can give rise to unbounded dynamics even for these volume-contracting flows with dissipation. In case there exists a most unstable sample with bounded evolution and positive but finite LLE, that case of the subclass is examined further for the presence of chaos. Specifically, we plot the \(3\)-dimensional orbits, time-series of \(x_{1}\), Poincare sections, power spectra of \(x_{1}\), and the evolution of LLE with time \(\lambda\left(t\right)\). Together these multiple lines of evidence allow us to distinguish chaotic orbits from non-chaotic ones among those where LLE is estimated to be positive. The detailed results are plotted in the Supplementary Information (SI). This procedure is repeated for each of the \(48\) cases for all \(4\) subclasses. A case is chaotic if it presents a sample (i.e., choice of parameters and initial conditions) with bounded evolution and concurrent lines of evidence for chaos: broadband power spectrum, Poincare sections with fractal structure, and evolution of LLE to a positive value with time, in addition to highly irregular time-series of \(x_{1}\). For a given subclass, after chaotic cases (among the possible \(48\)) have been identified, minimal chaotic cases are found by inspection. A minimal chaotic case has a proper subset of all the forcing and dissipation terms. The above calculations are repeated for each of the 4 subclasses with the energy conservation constraint for the gyrostatic core being present, i.e. \(p+q+r=0\), by setting \(q=-p\), since \(r=0\) in each of these subclasses. ## 3 Identification of minimal chaotic models Chaotic cases are identified from among each of the 48 possible arrangements of nonzero forcing and dissipation ("cases"), with each case consisting of 2000 samples across model parameters, initial conditions, and forcing and dissipation coefficients, arranged in a Latin hypercube. While such an empirical approach cannot ensure that all cases admitting chaos are identified, each minimal chaotic case appears to have been identified explicitly, as shown below. Multiple lines of evidence have been used to identify chaotic cases. 
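A minimal sketch of this estimate is given below: the state of Eq. (4) is integrated jointly with the \(3\times 3\) sensitivity matrix, and the exponents are read off from the singular values of \(D_{x_{0}}(T)\). The parameter values and the shorter horizon are assumptions for illustration; over very long horizons (such as \(t=5000\)) the sensitivity matrix can overflow in double precision, and periodic re-orthonormalization is then commonly used in place of a single SVD. Samples with unbounded orbits would be discarded, as described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fnd_rhs(x, p, q, r, a, b, c, eps, F):
    """Gyrostat with forcing and dissipation, Eq. (4)."""
    x1, x2, x3 = x
    return np.array([
        p * x2 * x3 + b * x3 - c * x2 - eps[0] * x1 + F[0],
        q * x3 * x1 + c * x1 - a * x3 - eps[1] * x2 + F[1],
        r * x1 * x2 + a * x2 - b * x1 - eps[2] * x3 + F[2],
    ])

def jacobian(x, p, q, r, a, b, c, eps):
    """Jacobian of Eq. (4) at the point x."""
    x1, x2, x3 = x
    return np.array([
        [-eps[0],      p * x3 - c,   p * x2 + b],
        [ q * x3 + c, -eps[1],       q * x1 - a],
        [ r * x2 - b,  r * x1 + a,  -eps[2]    ],
    ])

def augmented(t, z, *pars):
    p, q, r, a, b, c, eps, F = pars
    x, D = z[:3], z[3:].reshape(3, 3)
    dx = fnd_rhs(x, p, q, r, a, b, c, eps, F)
    dD = jacobian(x, p, q, r, a, b, c, eps) @ D   # variational equation D' = J(x) D
    return np.concatenate([dx, dD.ravel()])

def lyapunov_exponents(x0, pars, T=200.0):
    z0 = np.concatenate([x0, np.eye(3).ravel()])   # D(0) = identity
    sol = solve_ivp(augmented, (0.0, T), z0, args=pars, rtol=1e-9, atol=1e-9)
    D = sol.y[3:, -1].reshape(3, 3)
    s = np.linalg.svd(D, compute_uv=False)          # singular values of D(T)
    return np.log(s) / T                            # exponents, largest first

# Illustrative subclass-1 style setting (r = b = c = 0); F1 and eps3 are assumed.
pars = (0.19, -0.39, 0.0, -0.94, 0.0, 0.0, (0.0, 0.0, 0.3), (0.5, 0.0, 0.0))
print(lyapunov_exponents(np.array([1.0, 0.2, 0.2]), pars))
```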
As described in the previous section, we identify simulations having a positive value of the largest Lyapunov exponent (LLE) to shortlist potential cases admitting chaos, and deploy further lines of evidence to confirm the appearance of chaos: inspection of the orbits and time-series of \(x_{1}\), examining the Poincare sections, considering whether the simulated time-series (once transients have diminished) have a broadband power spectrum, and the evolution of the LLE towards positive values. We recall that our present analysis takes \(p+q+r\neq 0\). For subclass 1 there are many cases with positive LLE (17 have been found explicitly; see Supplementary Information (SI) Figures 1-5), but only some of these are chaotic by the above measures. The chaotic cases have nonzero \((F_{1},\epsilon_{3})\), \((F_{1},\epsilon_{1},\epsilon_{3})\), \((F_{1},\epsilon_{2},\epsilon_{3})\), \((F_{1},F_{2},\epsilon_{3})\), \((F_{1},F_{3},\epsilon_{3})\), \((F_{1},F_{2},F_{3},\epsilon_{3})\) and \((F_{1},F_{2},F_{3},\epsilon_{2},\epsilon_{3})\). Despite the diversity of chaotic orbits (Figure 1), inspection identifies a unique MCM with nonzero \((F_{1},\epsilon_{3})\), since each of these cases has nonzero \(F_{1}\) and \(\epsilon_{3}\). In each case the most chaotic version of the 2000-member ensemble, having maximum positive value of LLE, has been plotted to illustrate the appearance of chaos. Similarly, for subclass 2, of the 23 cases with positive LLE (SI Figures 6-10), there are 12 cases that are chaotic by the above measures. These cases (orbits in Figure 2) include nonzero \((F_{1},\epsilon_{3})\), \((F_{1},\epsilon_{1},\epsilon_{3})\), \((F_{1},\epsilon_{2},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), \((F_{2},\epsilon_{1},\epsilon_{3})\), \((F_{1},F_{2},\epsilon_{3})\), \((F_{3},\epsilon_{2},\epsilon_{3})\), \((F_{1},F_{3},\epsilon_{3})\), \((F_{2},F_{3},\epsilon_{3})\) and \((F_{1},F_{2},F_{3},\epsilon_{3})\). The MCMs as defined above include all proper subsets, and not only those that have the fewest number of terms. The simplest such models, with one forcing and one dissipation term, involve nonzero \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\). In addition, there is the case with nonzero \((F_{3},\epsilon_{2},\epsilon_{3})\), which is irreducible to the other two cases owing to different placement of forcing. Thus there are 3 MCMs for subclass 2: \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), and \((F_{3},\epsilon_{2},\epsilon_{3})\), with \(p+q+r\neq 0\). Similar analysis for subclasses 3 and 4 identifies 11 and 8 chaotic cases respectively, whose orbits are shown in Figures 3-4. For both of these subclasses, there are 3 MCMs, with nonzero \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), and \((F_{3},\epsilon_{3})\). The corresponding time-series of \(x_{1}\), Poincare sections, power spectra of the stationary orbits, and evolution of LLE, are shown in SI Figures 11-14, and Figures 15-18, respectively for the two subclasses. Table 1 below summarizes the chaotic cases and MCMs, for each of the subclasses. While siting of forcing can vary, dissipation is circumscribed and all chaotic cases must involve dissipation in the third (i.e., linear) equation through nonzero \(\epsilon_{3}\). The table also lists the MCMs when the energy conservation constraint is present, based on simulations with \(q=-p\), since \(r=0\).
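As a concrete illustration of the evidence pipeline just described, the sketch below integrates the subclass-1 arrangement with nonzero \((F_{1},\epsilon_{3})\) and computes the power spectrum of \(x_{1}\) after discarding transients. The quadratic and linear coefficients are those quoted later for the MCM of Figure 1; the forcing and dissipation values are assumed, and a broadband spectrum would still need to be corroborated by the remaining diagnostics before the sample is declared chaotic.

```python
import numpy as np
from scipy.integrate import solve_ivp

def subclass1_mcm(t, x, p, q, a, F1, eps3):
    """Subclass 1 with forcing on x1 and dissipation on x3: the (F1, eps3) case."""
    x1, x2, x3 = x
    return [p * x2 * x3 + F1,
            q * x3 * x1 - a * x3,
            a * x2 - eps3 * x3]

# p, q, a as quoted for Figure 1; F1 and eps3 are assumed for illustration.
p, q, a, F1, eps3 = 0.19, -0.39, -0.94, 0.5, 0.3

dt = 0.05
t_eval = np.arange(0.0, 2000.0, dt)
sol = solve_ivp(subclass1_mcm, (t_eval[0], t_eval[-1]), [1.0, 0.2, 0.2],
                args=(p, q, a, F1, eps3), t_eval=t_eval, rtol=1e-9, atol=1e-9)

x1 = sol.y[0, len(t_eval) // 2:]                    # discard transients
spectrum = np.abs(np.fft.rfft(x1 - x1.mean()))**2
freqs = np.fft.rfftfreq(x1.size, d=dt)
# A broadband (rather than line) spectrum is one of the lines of evidence used above.
```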
The cases with positive LLEs are listed in the SI (Figures 22-24, 25-27, 28-30, and 31-33 for subclasses \(1-4\) respectively), leading to fewer MCMs when energy is conserved. For subclass 1, there is no effect of the energy conservation constraint, with MCM \((F_{1},\epsilon_{3})\) in either case. In subclass 2, only \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\) are MCMs. For subclass 3, \((F_{1},\epsilon_{3})\) is an MCM while \((F_{2},\epsilon_{3})\) and \((F_{3},\epsilon_{3})\) are not. Similarly, in subclass 4, \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\) are MCMs, while \((F_{3},\epsilon_{3})\) is not. If the gyrostatic core conserves energy, forcing can be placed in fewer ways for the model to admit chaos. These effects of the energy conservation constraint are accounted for in the following section.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Subclass & Chaotic cases (\(p+q+r\neq 0\)) & MCM (\(p+q+r\neq 0\)) & MCM (\(p+q+r=0\)) & Coefficients of cubic equation for \(x_{3}^{*}\) \\ \hline 1 & \((F_{1},\epsilon_{3})\), \((F_{1},\epsilon_{1},\epsilon_{3})\), \((F_{1},\epsilon_{2},\epsilon_{3})\), \((F_{1},F_{2},\epsilon_{3})\), \((F_{1},F_{3},\epsilon_{3})\), \((F_{1},F_{2},F_{3},\epsilon_{3})\), \((F_{1},F_{2},F_{3},\epsilon_{2},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\) & Eq. (14) \\ \hline 2 & 12 cases, including \((F_{1},\epsilon_{3})\), \((F_{1},\epsilon_{1},\epsilon_{3})\), \((F_{1},\epsilon_{2},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), \((F_{2},\epsilon_{1},\epsilon_{3})\), \((F_{1},F_{2},\epsilon_{3})\), \((F_{3},\epsilon_{2},\epsilon_{3})\), \((F_{1},F_{3},\epsilon_{3})\), \((F_{2},F_{3},\epsilon_{3})\), \((F_{1},F_{2},F_{3},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), \((F_{3},\epsilon_{2},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\) & Eq. (22) \\ \hline 3 & 11 cases (orbits in Figure 3) & \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), \((F_{3},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\) & Eq. (24) \\ \hline 4 & 8 cases (orbits in Figure 4) & \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\), \((F_{3},\epsilon_{3})\) & \((F_{1},\epsilon_{3})\), \((F_{2},\epsilon_{3})\) & Eq. (26) \\ \hline \end{tabular} \end{table} Table 1: Chaotic cases and MCMs for the subclasses with two nonlinear terms (\(r=0\)), with and without the energy conservation constraint \(p+q+r=0\). The last column points to the equations giving the coefficients of the cubic for \(x_{3}^{*}\) in each subclass.
## 4 Accounting for minimal chaotic models

### Subclass 1 with \(r=0,b=c=0\)

This subclass of the Volterra gyrostat has evolution \[x_{1}^{\prime} =px_{2}x_{3}\] \[x_{2}^{\prime} =qx_{3}x_{1}-ax_{3}\] \[x_{3}^{\prime} =ax_{2} \tag{5}\] and serves as the conservative core of many important LOMs (_Gluhovsky and Tong_ (1999)), including that of _Lorenz_ (1963). It has two constants of motion irrespective of whether \(p+q=0\) (_Seshadri and Lakshmivarahan_ (2023)), and rotational symmetry \(R_{x_{1}}\left(\pi\right)\) about the axis of \(x_{1}\), i.e. the equations are preserved under the transformation \(\left(x_{1},x_{2},x_{3}\right)\rightarrow\left(x_{1},-x_{2},-x_{3}\right)\). Since all chaotic cases necessarily have nonzero \(\left(F_{1},\epsilon_{3}\right)\), the MCM \[x_{1}^{\prime} =px_{2}x_{3}+F_{1}\] \[x_{2}^{\prime} =qx_{3}x_{1}-ax_{3}\] \[x_{3}^{\prime} =ax_{2}-\epsilon_{3}x_{3} \tag{6}\] also maintains \(R_{x_{1}}\left(\pi\right)\). For this last system there are two distinct fixed points given by \(\left(a/q,x_{2}^{*},x_{3}^{*}\right)\) and \(\left(a/q,-x_{2}^{*},-x_{3}^{*}\right)\) with \(x_{2}^{*2}=-\frac{\epsilon_{3}F_{1}}{ap}\) and \(x_{3}^{*2}=-\frac{aF_{1}}{p\epsilon_{3}}\), which are real if \(F_{1}\) has opposite sign from \(ap\) (\(\epsilon_{3}>0\)). The Jacobian evaluated at \(\left(a/q,x_{2}^{*},x_{3}^{*}\right)\) becomes \[\mathrm{D}_{\mathrm{f}}=\left[\begin{array}{ccc}0&px_{3}^{*}&px_{2}^{*}\\ qx_{3}^{*}&0&0\\ 0&a&-\epsilon_{3}\end{array}\right] \tag{7}\] having characteristic polynomial \[\lambda^{3}+\epsilon_{3}\lambda^{2}-pqx_{3}^{*2}\lambda-2\epsilon_{3}pqx_{3}^{*2}=0, \tag{8}\] where we have used, from the last component of Eq. (6), \(x_{2}^{*}=\left(\epsilon_{3}/a\right)x_{3}^{*}\). From the symmetry of the equations, the Jacobian evaluated at \(\left(a/q,-x_{2}^{*},-x_{3}^{*}\right)\) also has the same characteristic polynomial, and thus the above fixed points make a pair with identical stability. Moreover the discriminant of the characteristic polynomial in Eq.
(8) is2 Footnote 2: For a general cubic given by \(\beta_{3}x^{3}+\beta_{2}x^{2}+\beta_{1}x+\beta_{0}=0\), the discriminant is defined as \(\Delta=18\beta_{3}\beta_{2}\beta_{1}\beta_{0}-4\beta_{2}^{3}\beta_{0}+\beta_{2}^{2}\beta_{1}^{2}-4\beta_{3}\beta_{1}^{3}-27\beta_{3}^{2}\beta_{0}^{2}\) and its sign determines the number of real and complex roots. \[\Delta=pqx_{3}^{*2}\left(8\epsilon_{3}^{4}-71\epsilon_{3}^{2}pqx_{3}^{*2}+4p^{2}q^{2}x_{3}^{*4}\right)<0 \tag{9}\] in case \(pq<0\), since \(x_{3}^{*2}>0\). The MCM plotted in Figure 1 has parameters \(a=-0.94\), \(p=0.19\), and \(q=-0.39\). Therefore, \(pq\) is indeed negative, and there is one real eigenvalue and two complex eigenvalues for the underlying fixed point. A Hopf bifurcation is expected to occur where the real part of the complex roots crosses the imaginary axis. With product of eigenvalues \(2\epsilon_{3}pqx_{3}^{*2}<0\) there is one stable direction, but when \(F_{1}\) is sufficiently large the real part of the complex conjugate pair becomes positive and both these fixed points repel the nearby flows. These properties are not maintained in case a single forcing and dissipation term occur elsewhere. There are 8 alternate ways in which a single F&D term can be placed and these give different arrangements of fixed points from the above (Table 2): ranging from no fixed points to an infinite number. Where a pair of fixed points exists, they do not possess the above symmetry. The MCM \((F_{1},\epsilon_{3})\) is not alone in maintaining the symmetry of the gyrostatic core, and other choices in Table 2, such as \((F_{1},\epsilon_{1})\), \((F_{1},\epsilon_{2})\) maintain it as well. The model \((F_{1},\epsilon_{3})\) has the unique attribute of maintaining symmetry of the equations while giving rise to a pair of distinct and symmetrically placed fixed points. Other chaotic cases listed in Table 1 do not always maintain this symmetry. As we shall see below, what matters in general for the appearance of chaos is a pair of fixed points arranged on opposite sides of the \(x_{1}-x_{2}\) plane. The MCM \((F_{1},\epsilon_{3})\) also appears when energy is conserved in the gyrostat core (SI Figs. 22-24; Table 1). \begin{table} \begin{tabular}{|c|c|c|c|} \hline S.
No & Model & Equations & Fixed Points \\ \hline 1 & \(F_{1},\epsilon_{1}\) & \(x_{1}^{\prime}=px_{2}x_{3}+F_{1}-\epsilon_{1}x_{1}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}\), \(x_{3}^{\prime}=\alpha x_{2}\) & \(\left(\frac{F_{1}}{\epsilon_{1}},0,0\right)\) \\ \hline 2 & \(F_{1},\epsilon_{2}\) & \(x_{1}^{\prime}=px_{2}x_{3}+F_{1}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}-\epsilon_{2}x_{2}\), \(x_{3}^{\prime}=\alpha x_{2}\) & None \\ \hline 3 & \(F_{2},\epsilon_{1}\) & \(x_{1}^{\prime}=px_{2}x_{3}-\epsilon_{1}x_{1}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}+F_{2}\), \(x_{3}^{\prime}=\alpha x_{2}\) & \(\left(0,0,\frac{F_{2}}{a}\right)\) \\ \hline 4 & \(F_{2},\epsilon_{2}\) & \(x_{1}^{\prime}=px_{2}x_{3}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}+F_{2}-\epsilon_{2}x_{2}\), \(x_{3}^{\prime}=\alpha x_{2}\) & \(\left(x_{1}^{*},0,-\frac{F_{2}}{qx_{1}^{*}-a}\right)\) \\ \hline 5 & \(F_{2},\epsilon_{3}\) & \(x_{1}^{\prime}=px_{2}x_{3}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}+F_{2}\), \(x_{3}^{\prime}=\alpha x_{2}-\epsilon_{3}x_{3}\) & None \\ \hline 6 & \(F_{3},\epsilon_{1}\) & \(x_{1}^{\prime}=px_{2}x_{3}-\epsilon_{1}x_{1}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}\), \(x_{3}^{\prime}=\alpha x_{2}+F_{3}\) & \(\left(0,-\frac{F_{3}}{a},0\right)\) and \(\left(\frac{a}{q},-\frac{F_{3}}{a},-\frac{\epsilon_{1}a^{2}}{pqF_{3}}\right)\) \\ \hline 7 & \(F_{3},\epsilon_{2}\) & \(x_{1}^{\prime}=px_{2}x_{3}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}-\epsilon_{2}x_{2}\), \(x_{3}^{\prime}=\alpha x_{2}+F_{3}\) & None \\ \hline 8 & \(F_{3},\epsilon_{3}\) & \(x_{1}^{\prime}=px_{2}x_{3}\), \(x_{2}^{\prime}=qx_{3}x_{1}-\alpha x_{3}\), \(x_{3}^{\prime}=\alpha x_{2}+F_{3}-\epsilon_{3}x_{3}\) & \(\left(\frac{a}{q},0,\frac{F_{3}}{\epsilon_{3}}\right)\) and \(\left(x_{1}^{*},-\frac{F_{3}}{a},0\right)\) \\ \hline \end{tabular} \end{table} Table 2: Fixed points for various other placements of one F&D term (besides the MCM) in Subclass 1. More generally, this subclass with F&D \[x^{\prime}_{1} =\rho x_{2}x_{3}-\epsilon_{1}x_{1}+F_{1}\] \[x^{\prime}_{2} =qx_{3}x_{1}-\alpha x_{3}-\epsilon_{2}x_{2}+F_{2}\] \[x^{\prime}_{3} =\alpha x_{2}-\epsilon_{3}x_{3}+F_{3} \tag{10}\] has fixed points \(\left(x^{*}_{1},x^{*}_{2},x^{*}_{3}\right)\) as follows: from the third equation, \(x^{*}_{2}=-\left(F_{3}-\epsilon_{3}x^{*}_{3}\right)/a\) and from the second equation \(qx^{*}_{1}x^{*}_{3}=ax^{*}_{3}+\epsilon_{2}x^{*}_{2}-F_{2}\). Together these equations must obey consistency \[\left(qx^{*}_{1}-\frac{\epsilon_{2}\epsilon_{3}}{a}-a\right)x^{*}_{3}=-\frac{ F_{3}\epsilon_{2}}{a}-F_{2} \tag{11}\] so that when \(x^{*}_{3}=0\) the right hand side of Eq. (11) must also vanish. A second consistency condition is furnished by the first equation \[\rho x^{*}_{2}x^{*}_{3}=-F_{1}+\epsilon_{1}x^{*}_{1} \tag{12}\] so that zero \(x^{*}_{3}\) in conjunction with nonzero \(F_{1}\) also entails nonzero \(\epsilon_{1}\). These consistency conditions are met by the cases illustrated in Table 1. Using these relations to eliminate \(x^{*}_{1},x^{*}_{2}\) from the first equation we obtain a cubic in \(x^{*}_{3}\) \[\nu_{3}x^{*}_{3}{}^{3}+\nu_{2}x^{*}_{3}{}^{2}+\nu_{1}x^{*}_{3}+\nu_{0}=0 \tag{13}\] where \[\nu_{3} =\epsilon_{3}\frac{\rho}{a},\] \[\nu_{2} =-F_{3}\frac{\rho}{a},\] \[\nu_{1} =F_{1}-\epsilon_{1}\frac{a}{a}-\epsilon_{1}\epsilon_{2}\epsilon_{ 3}\frac{1}{aq},\] \[\nu_{0} =\frac{F_{2}\epsilon_{1}}{q}+\frac{F_{3}\epsilon_{1}\epsilon_{2}} {a\,q}. 
\tag{14}\] The number and arrangement of fixed points depends on the number of real roots of Eq. (13), subject to the two consistency conditions. Recall that the MCM has nonzero \((F_{1},\epsilon_{3})\). The resulting expression becomes \[\epsilon_{3}\frac{\rho}{a}x^{*3}_{3}+F_{1}x^{*}_{3}=0 \tag{15}\] yielding two roots satisfying \(x^{*2}_{3}=-\frac{\rho F_{1}}{\rho\epsilon_{3}}\) as found above, since the third root \(x^{*}_{3}=0\) does not meet consistency condition in Eq. (12). This is a simple model with F&D yielding two fixed points with positive/negative \(x^{*}_{3}\). Another model with two fixed points has nonzero \((F_{3},\epsilon_{3})\): we obtain \(\epsilon_{3}\frac{\rho}{a}x^{*3}_{3}-F_{3}\frac{\rho}{a}x^{*2}_{3}=0\), giving a fixed point \(x^{*}_{3}=0\) and \(x^{*}_{3}=F_{3}/\epsilon_{3}\). What precludes it becoming chaotic? Owing to the same placement of dissipation the model has the same underlying Jacobian and resulting characteristic polynomial as Eq. (8), and consequently the same discriminant as Eq. (9). At \(x^{*}_{3}=0\) clearly \(\Delta=0\) and the equation has a double eigenvalue, precluding complex roots. Thus a Hopf bifurcation cannot arise in this case. In summary, the MCM (\(F_{1},\epsilon_{3}\)) provides the simplest model with a pair of fixed points, with nonzero \(\chi_{3}^{*}\), which can undergo a Hopf bifurcation. Such fixed points play an important role in the Lorenz model (_Sparrow_ (1982)). On the route to chaos, as the forcing is increased, trajectories spiral around these fixed points at an increasing distance (_Kaplan and Yorke_ (1979)). There is also an important difference with the Lorenz model, which will be examined later: the absence of a third fixed point at the origin for the MCM precludes homoclinic orbits that are important in the Lorenz model. Such a comparison of subclass 1 that closely parallels the Lorenz model (_Gluhovsky and Tong_ (1999)) identifies the critical condition for chaos in these models: the appearance of a pair of fixed points, with positive/negative \(\chi_{3}^{*}\), yielding opposing spirals when projected onto the \(\chi_{1}-\chi_{2}\) plane.3 Footnote 3: The role of the pair of fixed points with opposite signs of \(\chi_{3}^{*}\) in creating opposing spirals when projected onto \(\chi_{1}-\chi_{2}\) is clear from the Jacobian in Eq. (7). ### Subclass 2 with \(r=0,c=0\) Here the conservative core \[\chi_{1}^{\prime} = p\chi_{2}\chi_{3}+b\chi_{3}\] \[\chi_{2}^{\prime} = q\chi_{3}\chi_{1}-\alpha\chi_{3}\] \[\chi_{3}^{\prime} = \alpha\chi_{2}-b\chi_{1} \tag{16}\] has a single constant of motion (_Seshadri and Lakshmivarahan_ (2023)) in the general situation with \(p+q\neq 0\), and it is nonzero \(b\) that breaks symmetry \(R_{\chi_{1}}(\pi)\). The MCMs (\(F_{1},\epsilon_{3}\)) and (\(F_{2},\epsilon_{3}\)), which both appear even with \(p+q+r=0\), are analogous, so we consider (\(F_{1},\epsilon_{3}\)) \[\chi_{1}^{\prime} = p\chi_{2}\chi_{3}+b\chi_{3}+F_{1}\] \[\chi_{2}^{\prime} = q\chi_{3}\chi_{1}-\alpha\chi_{3}\] \[\chi_{3}^{\prime} = \alpha\chi_{2}-b\chi_{1}-\epsilon_{3}\chi_{3} \tag{17}\] whose fixed points are given by \(\left(\frac{q}{q},\frac{b}{q}+\frac{\epsilon_{3}}{a}\chi_{3}^{*},\chi_{3}^{*}\right)\) where \(\chi_{3}^{*}\) solves the quadratic equation \(\epsilon_{3}pq\chi_{3}^{*2}+qb\left(p+q\right)\chi_{3}^{*}+aqF_{1}=0\). 
The Jacobian evaluated at the fixed point \[\mathrm{D_{I}}=\left[\begin{array}{ccc}0&\rho\chi_{3}^{*}&\rho\chi_{2}^{*}+ b\\ q\chi_{3}^{*}&0&0\\ -b&a&-\epsilon_{3}\end{array}\right] \tag{18}\] has characteristic polynomial \[\lambda^{3}+\epsilon_{3}\lambda^{2}-\left(pq\chi_{3}^{*2}-b\right)\lambda- \epsilon_{3}pq\chi_{3}^{*2}+aqF_{1}=0, \tag{19}\] which depends on \(\chi_{3}^{*2}\). For \(p+q\neq 0\) the two fixed points do not have identical stability. As with subclass 1, for \(F_{1}\) sufficiently large the complex roots have positive real part so this pair of fixed points repel the flow in the neighbourhood. The other MCM \((F_{3},\epsilon_{2},\epsilon_{3})\) \[x_{1}^{\prime} = \rho x_{2}x_{3}+bx_{3}\] \[x_{2}^{\prime} = qx_{3}x_{1}-ax_{3}-\epsilon_{2}x_{2}\] \[x_{3}^{\prime} = \alpha x_{2}-bx_{1}-\epsilon_{3}x_{3}+F_{3} \tag{20}\] does not appear in the presence of energy conservation, and has origins that are not rooted in the aforementioned property of fixed points (as shown below). With the energy conservation constraint, let us consider alternate placements of forcing and dissipation terms (Table 3). Most of the alternate placements yield only a single fixed point, whereas those with two fixed points have one of them lying on the \(x_{1}-x_{2}\) plane. This does not favor the pair of spirals described in the context of subclass 1. Only the MCMs \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\) lead to a pair of fixed points with opposite signs of \(x_{3}^{*}\), and these are the ones leading to the pair of opposing spirals (see Eq. (18)). It is hardly surprising that these are the MCMs also observed with \(p+q+r=0\) (SI Figs. 25-27; Table 1). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline S. No & Model & Equations & Fixed Points \\ \hline 1 & \(F_{1},\epsilon_{1}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}+F_{1}-\epsilon_{1}\chi_{1}\), \(\chi_{2}^{\prime}=q\chi_{3}\chi_{1}-a\chi_{3}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}\) & \(\left[\frac{F_{1}}{\epsilon_{1}^{\prime}}\frac{bF_{1}}{\sigma_{1}},0\right]\) \\ \hline 2 & \(F_{1},\epsilon_{2}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}+F_{1}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}-\epsilon_{2}x_{2}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}\) & \(\left(\frac{g}{b}\chi_{2}^{\prime},\chi_{2}^{\ast}=-\frac{gggg}{\sigma_{1}^{ \prime}},\chi_{3}^{\ast}=\frac{F_{1}ebb}{1+aq-cb}\right)\) \\ \hline 3 & \(F_{2},\epsilon_{1}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}-\epsilon_{1}\chi_{1}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}+F_{2}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}\) & \(\left(\frac{g}{b}\chi_{2}^{\prime},\chi_{2}^{\ast}=-\frac{ggg}{\sigma_{1}^{ \prime}},\chi_{3}^{\ast}=\frac{F_{2}eba}{1+aq-cb}\right)\) \\ \hline 4 & \(F_{2},\epsilon_{2}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}+F_{2}-\epsilon_{2}x_{2}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}\) & \(\left(\frac{g}{b}\chi_{2}^{\prime},\frac{F_{2}}{\sigma_{2}},0\right)\) \\ \hline 5 & \(F_{3},\epsilon_{1}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}-\epsilon_{1}\chi_{1}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}+F_{3}\) & \(\left(0,-\frac{g}{a},\frac{F_{2}-b}{a},\frac{gg}{\sigma_{1}^{\prime}}\right)\) and \(\left(\frac{g}{b^{\prime}},\frac{F_{2}-b}{a},\frac{gg}{\sigma_{1}^{\prime}} \right)\) \\ \hline 6 & \(F_{3},\epsilon_{2}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}-\epsilon_{2}x_{2}\), 
\(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}+F_{3}\) & \(\left(\frac{F_{2}}{b},0\right)\) and \(\left(\frac{-\frac{ggg}{b}+F_{3}}{b}-\frac{b}{p},\frac{cF_{2}^{\prime}}{b \sigma_{1}^{\prime}}\right)\) \\ \hline 7 & \(F_{3},\epsilon_{3}\) & \(\chi_{1}^{\prime}=p\chi_{2}\chi_{3}+bx_{3}\), \(\chi_{2}^{\prime}=q\chi_{3}x_{1}-a\chi_{3}\), \(\chi_{3}^{\prime}=a\chi_{2}-bx_{1}+F_{3}-\epsilon_{3}x_{3}\) & \(\left(\chi_{1}^{\prime},-\frac{F_{2}-bx_{1}^{\prime}}{a},0\right)\) and \(\left(\frac{g}{g},-\frac{b}{p},\frac{F_{1}}{\alpha}\right)\) \\ \hline \end{tabular} \end{table} Table 3: Fixed points for various other placements of one forcing and dissipation term in Subclass 2. Here we have taken \(\rho+q+r=0\). As in subclass 1, the possible solutions for \(x_{3}^{*}\) for general placement of forcing and dissipation follows cubic \[\nu_{3}x_{3}^{*3}+\nu_{2}x_{3}^{*2}+\nu_{1}x_{3}^{*}+\nu_{0}=0 \tag{21}\] where \[\nu_{3} =\epsilon_{3}\frac{\rho}{a},\] \[\nu_{2} =-F_{3}\frac{\rho}{a}+b\,\left(1+\frac{\rho}{q}\right)\,,\] \[\nu_{1} =F_{1}-F_{2}b\frac{\rho}{aq}-\epsilon_{1}\frac{a}{q}-\epsilon_{1 }\epsilon_{2}\epsilon_{3}\frac{1}{aq}-\epsilon_{2}\frac{b^{2}}{aq},\] \[\nu_{0} =\frac{F_{2}\epsilon_{1}}{q}+\frac{F_{3}\epsilon_{1}\epsilon_{2} }{aq}-F_{1}\epsilon_{2}\frac{b}{aq}, \tag{22}\] which reduces to Eq. (13) for \(b=0\). The MCMs generally describe the simplest ways to obtain a pair of fixed points with opposite signs of \(x_{3}^{*}\), and nonzero \(\epsilon_{3}\) along with suitable placement of forcing supports this. Nonzero \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\) both yield \(\nu_{3}x_{3}^{*3}+\nu_{1}x_{3}^{*}=0\) for the condition \(\rho+q+r=0\), giving a pair of fixed points having the aforementioned property. The other MCM \((F_{3},\epsilon_{2},\epsilon_{3})\) gives \(\nu_{3}x_{3}^{*3}+\nu_{2}x_{3}^{*2}+\nu_{1}x_{3}^{*}=0\): it does not fall into the same pattern as the above cases,4 as nonzero \(F_{3}\) precludes a second fixed point across the plane, and this has consequences described later. It is notable that this case does not occur with \(\rho+q+r=0\) (SI Figs. 25-27; Table 1). Footnote 4: Here, it is easily seen that \(\nu_{1}\) and \(\nu_{3}\) are of the same sign, precluding roots of opposite sign. If only two successive coefficients are nonzero, no matter their degree, this would give at most one non-trivial \(x_{3}^{*}\). This occurs with \((F_{3},\epsilon_{3})\), giving \(\nu_{3}x_{3}^{*3}+\nu_{2}x_{2}^{*2}=0\). How about nonzero \((F_{3},\epsilon_{1},\epsilon_{2})\)? This gives \(-F_{3}\frac{b}{a}x_{3}^{*2}-\left(\epsilon_{1}\frac{a}{q}+\epsilon_{2}\frac{ b^{2}}{aq}\right)+\frac{F_{1}\epsilon_{1}\epsilon_{2}}{aq}=0\). In order for the roots to have opposite sign, the constant and quadratic coefficients must be of opposite sign, i.e. \(\left(-F_{3}\frac{\rho}{a}x_{3}^{*2}\right)\,\left(\frac{F_{1}\epsilon_{1} \epsilon_{2}}{aq}\right)<0\). However this is not possible since \(\rho q<0\). The case \((F_{3},F_{2},\epsilon_{1})\) leads to the same difficulty, as the roots are given by \(-F_{3}\frac{\rho}{a}x_{3}^{*2}-\left(F_{2}b\frac{\rho}{aq}+\epsilon_{1}\frac{ a}{q}\right)+\frac{F_{2}\epsilon_{1}}{q}=0\), with quadratic and constant coefficients of the same sign (from our premise of \(F_{3}=F_{2}\)). Since the second and third equation are interchangeable, we can also rule out chaos in \((F_{3},F_{1},\epsilon_{2})\). 
In summary, we have shown that the pair of fixed points with opposite signs of \(x_{3}^{*}\) cannot be achieved with a quadratic equation for the roots, and nonzero \(\nu_{3}=\epsilon_{3}\frac{\rho}{a}\) is required. This accounts for the presence of dissipation in the linear equation, in all chaotic models. The cases \((F_{1},\epsilon_{3})\) and \((F_{2},\epsilon_{3})\) each yield pairs of fixed points on opposite sides of this plane, and it is no coincidence that these MCMs appear even with \(\rho+q+r=0\). In contrast, the MCM \((F_{3},\epsilon_{2},\epsilon_{3})\), which appears only when the gyrostat core does not conserve energy, is not tied to spiraling orbits around this fixed point pair. ### Subclass 3 with \(r=0,b=0\) and Subclass 4 with \(r=0\) For subclass 3 the equations \[\chi_{1}^{\prime} =p\chi_{2}\chi_{3}-c\chi_{2}-\epsilon_{1}\chi_{1}+F_{1}\] \[\chi_{2}^{\prime} =q\chi_{3}\chi_{1}-\alpha\chi_{3}+c\chi_{1}-\epsilon_{2}\chi_{2}+F _{2}\] \[\chi_{3}^{\prime} =\alpha\chi_{2}-\epsilon_{3}\chi_{3}+F_{3} \tag{23}\] give fixed points \(x_{2}^{*}=-\left(F_{3}-\epsilon_{3}x_{3}^{*}\right)/a\), and \(x_{1}^{*}=\left(\alpha x_{3}^{*}+\epsilon_{2}x_{2}^{*}-F_{2}\right)/\left(qx_ {3}^{*}+\epsilon\right)\), resulting in a cubic equation \(\nu_{3}x_{3}^{*3}+\nu_{2}x_{3}^{*2}+\nu_{1}x_{3}^{*}+\nu_{0}=0\) as before, with coefficients \[\nu_{3} =\epsilon_{3}\frac{\rho}{a},\] \[\nu_{2} =-F_{3}\frac{\rho}{a}+\epsilon_{3}\frac{c}{\alpha}\left(\frac{ \rho}{q}-1\right)\,,\] \[\nu_{1} =F_{1}-F_{3}\frac{c}{a}\left(\frac{\rho}{q}-1\right)-\epsilon_{1 }\frac{a}{q}-\epsilon_{1}\epsilon_{2}\epsilon_{3}\frac{1}{aq}-\epsilon_{3} \frac{c^{2}}{aq},\] \[\nu_{0} =F_{1}\frac{c}{q}+\frac{F_{2}\epsilon_{1}}{q}+F_{3}\left(\frac{ c^{2}}{aq}+\frac{c_{1}\epsilon_{2}}{aq}\right)\,. \tag{24}\] As in the previous case, chaos requires nonzero \(\nu_{3}=\epsilon_{3}\frac{\rho}{a}\), because the pair with positive/negative \(x_{3}^{*}\) cannot be realized with a quadratic equation alone. For e.g., nonzero \(\left(F_{3},\epsilon_{1}\right)\), \(\left(F_{3},\epsilon_{2}\right),\left(F_{3},\epsilon_{1},\epsilon_{2}\right)\), etc. give quadratic and constant coefficients of the same sign.5 Footnote 5: With nonzero \(\epsilon_{3}\) we obtain nonzero \(\nu_{3},\nu_{2},\nu_{1}\); however the unforced equation alone is not up to the task, since \(\epsilon_{3}\frac{\rho}{a}\) and \(-\epsilon_{3}\frac{c^{2}}{aq}\) have the same sign. Moreover, there are simpler grounds for precluding unforced chaos, as discussed in Section 4.4. In the presence of forcing, the minimal chaotic case \(\left(F_{1},\epsilon_{3}\right)\) gives a cubic equation with all coefficients being nonzero, with many settings of the parameters allowing the desired configuration of the fixed-point pair. In case of \(\left(F_{2},\epsilon_{3}\right)\) we obtain \(\nu_{3}x_{3}^{*3}+\nu_{2}x_{3}^{*2}+\nu_{1}x_{3}^{*}=0\), with \(\nu_{1},\nu_{3}\) of same sign. Thus the necessary configuration of nontrivial fixed points is not available, indicating other processes at work. It is thus hardly surprising that this case is not a MCM in case \(\rho+q+r=0\) (SI Figs. 28-30; Table 1). The same goes for \(\left(F_{3},\epsilon_{3}\right)\), which requires nonconservation of energy in the gyrostat core to give rise to F&D chaos. 
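The criterion used throughout this section, namely whether a given placement of forcing and dissipation yields a pair of fixed points with opposite signs of \(x_{3}^{*}\), can be checked numerically from the cubic \(\nu_{3}x_{3}^{*3}+\nu_{2}x_{3}^{*2}+\nu_{1}x_{3}^{*}+\nu_{0}=0\). The sketch below does this for the subclass-3 coefficients quoted in Eq. (24); the parameter values are assumptions for illustration only.

```python
import numpy as np

def subclass3_cubic_coeffs(p, q, a, c, eps, F):
    """Coefficients (nu3, nu2, nu1, nu0) of the cubic for x3*, following Eq. (24)."""
    e1, e2, e3 = eps
    F1, F2, F3 = F
    nu3 = e3 * p / a
    nu2 = -F3 * p / a + e3 * (c / a) * (p / q - 1.0)
    nu1 = (F1 - F3 * (c / a) * (p / q - 1.0) - e1 * a / q
           - e1 * e2 * e3 / (a * q) - e3 * c**2 / (a * q))
    nu0 = F1 * c / q + F2 * e1 / q + F3 * (c**2 + e1 * e2) / (a * q)
    return nu3, nu2, nu1, nu0

def has_opposite_sign_pair(coeffs, tol=1e-9):
    """True if the cubic has real roots of both signs (the fixed-point pair)."""
    roots = np.roots(coeffs)                      # coefficients, highest degree first
    real = roots[np.abs(roots.imag) < tol].real
    return bool(np.any(real > tol) and np.any(real < -tol))

# Example: the (F1, eps3) placement in subclass 3; values assumed for illustration.
p, q, a, c = 0.19, -0.39, -0.94, 0.25
coeffs = subclass3_cubic_coeffs(p, q, a, c, eps=(0.0, 0.0, 0.3), F=(0.5, 0.0, 0.0))
print(has_opposite_sign_pair(coeffs))
```

Re-running the same check with forcing moved to \(F_{2}\) or \(F_{3}\) (and the corresponding coefficients) illustrates why those placements fail to produce the fixed-point pair when the core conserves energy.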
Finally we consider Subclass 4, for which the F&D equations \[\chi_{1}^{\prime} =p\chi_{2}\chi_{3}+b\chi_{3}-c\chi_{2}-\epsilon_{1}\chi_{1}+F_{1}\] \[\chi_{2}^{\prime} =q\chi_{3}\chi_{1}-\alpha\chi_{3}+c\chi_{1}-\epsilon_{2}\chi_{2}+F _{2}\] \[\chi_{3}^{\prime} =\alpha\chi_{2}-bx_{1}-\epsilon_{3}\chi_{3}+F_{3} \tag{25}\] have fixed points \(x_{2}^{*}=-\left(F_{3}-bx_{1}^{*}-\epsilon_{3}x_{3}^{*}\right)/a\), and \(x_{1}^{*}=\left(\alpha x_{3}^{*}+\epsilon_{2}x_{2}^{*}-F_{2}\right)/\left(qx_ {3}^{*}+\epsilon\right)\), resulting in cubic \(\nu_{3}\chi_{3}^{*3}+\nu_{2}\chi_{3}^{*2}+\nu_{1}\chi_{3}^{*}+\nu_{0}=0\) having coefficients \[\nu_{3} =\epsilon_{3}\frac{p}{a},\] \[\nu_{2} =-F_{3}\frac{p}{a}+\epsilon_{3}\frac{c}{a}\left(\frac{p}{q}-1 \right)+b\,\left(1+\frac{p\chi_{1}^{*}}{a}\right),\] \[\nu_{1} =F_{1}-F_{3}\frac{c}{a}\left(\frac{p}{q}-1\right)-\epsilon_{1} \frac{q}{q}-\epsilon_{1}\epsilon_{2}\epsilon_{3}\frac{1}{aq}-\epsilon_{3}\frac {c^{2}}{aq}+\frac{bc}{q}+\frac{bc\chi_{1}^{*}}{a}\left(\frac{p}{q}-1\right)\] \[\nu_{0} =F_{1}\frac{c}{q}+\frac{F_{2}\epsilon_{1}}{q}+F_{3}\left(\frac{c ^{2}}{aq}+\frac{\epsilon_{1}\epsilon_{2}}{aq}\right)-\frac{b\chi_{1}^{*}\left( c^{2}+\epsilon_{1}\epsilon_{2}\right)}{aq}, \tag{26}\] which closely parallels Subclass 3, with additional contributions from nonzero \(b\). Even though this is cubic the structure is really different, since it must be solved simultaneously with the other equations for \(\chi_{1}^{*}\) and \(\chi_{2}^{*}\), and yet the close resemblance points to how the same MCMs are attained (Table 1), once \(\epsilon_{3}\) is nonzero. There are subtle differences, however, which make not only \((F_{1},\epsilon_{3})\), but also \((F_{2},\epsilon_{3})\) as MCMs, in the presence of an energy conserving core (SI Figs. 31-33; Table 1). The presence of \((F_{2},\epsilon_{3})\) can be expected from the similarity of the equations for \(d\chi_{1}/dt\) and \(d\chi_{2}/dt\). ### Role of fixed points For subclass 1, Figures 5-6 (and SI Figs. 19-20) illustrate the orbits for the models \((F_{1},\epsilon_{3})\), and those for \((F_{1},\epsilon_{2},\epsilon_{3})\). The latter closely resembles the model of _Lorenz_ (1963). Chaos in this model requires spiraling orbits surrounding the pair of fixed points (_Sparrow_ (1982)). An important difference is the absence of the third fixed point at the origin, whose saddle structure permits stable homoclinic orbits for intermediate values of forcing. The MCM cannot have such homoclinic orbits, as it possesses only the two fixed points. It is evident from the calculations here that chaos only requires this pair with opposite signs of \(\chi_{3}^{*}\), making the discussion of chaos with constant forcing terms clearly germane to such models despite the differences. Moreover, this MCM appears even in the absence of energy conservation in the gyrostat core. The progression of the three-dimensional orbits with increasing forcing (Fig. 5) illustrates the dynamics that are relevant, with this pair of fixed points. The \(\chi_{1}\) axis is invariant in this model, and the effects are seen in the long stretches of time for which orbits can remain nearby. Similar results occur for the model with \((F_{1},\epsilon_{3})\) in subclass 2 (SI Fig. 21), subclass 3 (Figure 7), as well as subclass 4. Here, there is a pair of fixed points with opposite \(\chi_{3}^{*}\) as before. Analogous plots can be made for \((F_{2},\epsilon_{3})\) in subclasses 2 and 4. 
As can be seen from inspecting Table 1, these are the cases where the cubic equations naturally yield at least a pair of fixed points with opposite signs of \(\chi_{3}^{*}\). The Jacobian of the vector fields acquires a skew-symmetric contribution from nonzero \(\chi_{3}^{*}\) (this pressupposes opposite signs of \(p\) and \(q\)), with opposite orientations in each hemisphere, influencing the characteristic spirals surrounding the fixed-point pair that alternate between the hemispheres. This occurs even with energy conservation of the gyrostatic core, and it is therefore not surprising that these cases also constitute MCMs for the case with \(p+q+r=0\) (Table 1). The effects of increasing the external forcing for these cases are shown in SI Figs. 34-35 (subclass 2), SI Fig. 36 (subclass 3), and SI Fig. 39 (subclass 4). The other MCMs found for the case of \(p+q+r\neq 0\) do not fit this pattern. In particular, \((F_{3},\epsilon_{2},\epsilon_{3})\) in subclass 2, \((F_{2},\epsilon_{3})\) and \((F_{3},\epsilon_{3})\) in subclass 3 (SI Figs. 37-38), and \((F_{3},\epsilon_{3})\) in subclass 4 do not have a pair of fixed points with opposite \(x_{3}^{*}\). In fact, where the forcing occurs on the \(x_{3}\) mode, one cannot expect steady states of both signs and the resulting attractor looks quite different (SI Fig. 41). This is not limited to coincident forcing and dissipation, with a similar phenomenon occurring also with \((F_{2},\epsilon_{3})\) of subclass 3 (SI Fig. 37). The attractors here (e.g., SI Figs. 37-38) are very different from those of the Lorenz model, which has in its core \(p+q=0\) (\(r=0\) by the choice of streamfunction), but can resemble the attractors found in the gyrostatic core when there are no constants of motion (_Seshadri and Lakshmiwarahan_ (2023)). Sometimes in these cases, for example \((F_{3},\epsilon_{2},\epsilon_{3})\) of subclass 2, and \((F_{3},\epsilon_{3})\) in subclasses \(3-4\), forcing and dissipation appears in the same equation, but this is not always the case, for example \((F_{2},\epsilon_{3})\) in subclass 3. Despite their differences, they all depend on the lack of energy conservation in the gyrostat core, and the absence of the aforementioned pair of fixed points. It is likely that chaos in these cases is tied to the loss of invariants in the gyrostat core when \(p+q+r\neq 0\). With \(p+q+r\neq 0\), subclasses 2 and 3 have 1 invariant owing to the presence of two linear feedbacks (_Seshadri and Lakshmiwarahan_ (2023)), and the inclusion of forcing might play a role analogous to the third linear feedback that creates conditions for Hamiltonian chaos in the gyrostatic core. Without energy conservation, subclass 4 has no invariants in the gyrostatic core and its MCM \((F_{1},\epsilon_{3})\) can encounter very different dynamics on the route to forced and dissipative chaos as compared to the correspondng MCM of subclass 1 (SI Fig. 39). Furthermore, where they are important as in cases \((F_{1},\epsilon_{3})\) and \((F_{1},\epsilon_{2},\epsilon_{3})\) of subclass 1, fixed points can play various roles in pathways to chaos, with not all of the resulting fixed points being germane. The Lorenz model has three fixed points, whereas the variant of subclass 1 closest to it, i.e. \((F_{1},\epsilon_{2},\epsilon_{3})\), has only two. Yet the dynamics resemble each other and crucially involve the pair of fixed points with nonzero \(x_{3}^{*}\). Once these become unstable, they lead to spiraling orbits eventually to the Lorenz attractor. 
To take up the comparison further, we consider the case \((F_{1},\epsilon_{2},\epsilon_{3})\) of subclass 1, without the fixed point at the origin but having an invariant set in the line \(x_{1}\). The equations for \(\dot{x}_{2}\) and \(\dot{x}_{3}\), being linear in \(x_{2}\) and \(x_{3}\), remain unchanged under the transformation \((x_{1},x_{2},x_{3})\rightarrow(x_{1},-x_{2},-x_{3})\), as does the equation for \(\dot{x}_{1}\) because it depends only on the product \(x_{2}x_{3}\). When forcing is applied to the first equation, and dissipation to the second and third, as in \((F_{1},\epsilon_{3})\) and \((F_{1},\epsilon_{2},\epsilon_{3})\), the line with \(x_{2}=0,x_{3}=0\) remains an invariant set. However there is no fixed point the origin. Near this invariant set, the coordinate \(x_{1}\) evolves as \(\dot{x_{1}}\approx F_{1}\), growing if \(F_{1}>0\). Let us consider the resulting dynamics near \(x_{2}=0,x_{3}=0\) by examining the transverse stability (transverse to this invariant set), given by the linearized equations \[\left[\begin{array}{c}\delta\dot{x}_{2}\\ \delta\dot{x}_{3}\end{array}\right]=\left[\begin{array}{cc}-\epsilon_{2}&qx _{1}-a\\ a&-\epsilon_{3}\end{array}\right]\left[\begin{array}{c}\delta x_{2}\\ \delta x_{3}\end{array}\right] \tag{27}\] with the above matrix having characteristic equation \(\lambda^{2}+(\epsilon_{2}+\epsilon_{3})\,\lambda+\epsilon_{2}\epsilon_{3}-a \,(qx_{1}-a)=0\), with eigenvalues \(\lambda=-\epsilon\pm\sqrt{a\,(qx_{1}-a)}\), where we have used \(\epsilon_{2}=\epsilon_{3}=\epsilon\). When \(a\,(qx_{1}-a)<0\), the eigenvalues are complex conjugate leading to a stable spiral towards the invariant set. For \(0\leq a\,(qx_{1}-a)<\epsilon^{2}\), both eigenvalues are real and negative with local dynamics resembling a sink. However for \(\epsilon^{2}<a\,(qx_{1}-a)\) points on the invariant set behave as a saddle, and nearby trajectories are repelled. Similar results are obtained for the MCM \((F_{1},\epsilon_{3})\), with characteristic polynomial and eigenvalues \[\lambda=-\frac{\epsilon_{3}}{2}\pm\sqrt{\frac{\epsilon_{3}^{2}}{4}+a\left(qx_{1}-a \right)}. \tag{28}\] Here points are repelled from the invariant set once \(0<a\left(qx_{1}-a\right)\), and the transition in the neighbourhood of the invariant set remains the same. These dynamics are shown in Figure 8, for initial condition \((1,0.2,0.2)\), fixed \(F_{1}=0.021\) so that \(x_{1}\) increases near the invariant set, and \(aq>0\) so that the invariant set eventually becomes unstable. Arrows indicate the direction of the vector field along the orbit. Near the invariant set the increase of \(x_{1}\) is roughly linear in time, and as it grows the changing stability can be observed. Initially the invariant set is transverse stable, with oscillatory dynamics evident from the gaps in the time-series of \(x_{2}\) and \(x_{3}\) (where these are negative) when plotted on the logarithmic scale. The logarithmic plots show that \(x_{2}\) and \(x_{3}\) approach the invariant set but never reach it, before the set becomes unstable as \(x_{1}\) grows. There is no fixed point along this invariant set and thus no homoclinic orbits. Such dynamics has similarities with the Lorenz model \[X^{\prime} =-\sigma X+\sigma Y\] \[Y^{\prime} =-XZ+rX-Y\] \[Z^{\prime} =XY-bZ \tag{29}\] where \(X\) is related to the amplitude of the first mode of the streamfunction, and \(Y,Z\) are related to modes of temperature evolution. 
Analogous to the above cases there is an invariant set given by \(X=0,Y=0\), with flow in the neighbourhood contracting towards the origin (a fixed point) following \(Z^{\prime}=-bZ\). The corresponding transverse stability is described by \[\left\{\begin{array}{c}\delta\dot{X}\\ \delta\dot{Y}\end{array}\right\}=\left[\begin{array}{cc}-\sigma&\sigma\\ -Z+r&-1\end{array}\right]\left\{\begin{array}{c}\delta X\\ \delta Y\end{array}\right\} \tag{30}\] with the above matrix having characteristic equation \(\lambda^{2}+(1+\sigma)\,\lambda+\sigma\,(1-r+Z)=0\), with eigenvalues \(\lambda=-\left(1+\sigma\right)\pm\sqrt{\left(1+\sigma\right)^{2}-4\sigma\,(1- r+Z)}\), which behaves as a saddle whenever \((1-r+Z)<0\), or \(r>1\) which is the well known condition for instability of the fixed point at the origin. Thus, there are close parallels with subclass 1, despite the additional fixed point at the origin. In each case, it is the pair of repelling fixed points away from the origin that circumscribes the possibility for chaos. ## 5 Discussion This paper is based on large ensemble simulations, to identify the simplest chaotic models derived from the Volterra gyrostat. This revealed that minimal chaotic models exist, involving proper subsets of the forcing and dissipation terms present in each of the chaotic cases that are found. The existence of such MCMs is explicable through common conditions for chaos in these models. Our analysis showed that the forcing and dissipation in these MCMs play very specific roles. Dissipation induces a stable direction in the flow. The skew-symmetric property of the nonlinear coupling between \(x_{1}\) and \(x_{2}\) in each of the subclasses investigated here (owing to opposite signs of the quadratic terms) makes the linear mode \(x_{3}\) a natural candidate for governing the stable direction, and hence where dissipation appears. Instead, placing dissipation in \(x_{1}\) or \(x_{2}\) alone alters the arrangement of fixed points. With dissipation but in the absence of forcing, the energy is decreasing: defining \(E=\frac{1}{2}\left(x_{1}^{2}+x_{2}^{2}+x_{3}^{3}\right)\) the model of subclass 1 \[x_{1}^{\prime} =\rho x_{2}x_{3}-cx_{2}+F_{1}\] \[x_{2}^{\prime} =qx_{3}x_{1}-ax_{3}+cx_{1}+F_{2}\] \[x_{3}^{\prime} =ax_{2}-\epsilon_{3}x_{3}+F_{3} \tag{31}\] has \(E^{\prime}=-\epsilon_{3}x_{3}^{2}\leq 0\) in the absence of any forcing, and chaos cannot ensue. Forcing shifts the attractor along the corresponding axis, for example nonzero \(F_{3}\) makes \(E^{\prime}=-\epsilon_{3}x_{3}^{2}+F_{3}x_{3}\) and for chaos to appear \(F_{3}x_{3}\) must be positive most of the time. Such an attractor cannot be distributed on either side of the \(x_{1}-x_{2}\) plane, and the coincidence of forcing and dissipation precludes the fixed point pair associated with the more characteristic MCMs. In contrast, nonzero \(F_{1}\) makes \(E^{\prime}=-\epsilon_{3}x_{3}^{2}+F_{1}x_{1}\) and fixed points on either side of the \(x_{1}-x_{2}\) plane are readily obtained, giving rise to the more typical structures resembling the Lorenz attractor. Inspection of the Jacobian of the two fixed points also indicates how these have opposite orientations in their surrounding flows, which is crucial for the chaotic set that follows. We caution that this discussion presents an oversimplified intuition, and consideration of the case \((F_{2},\epsilon_{3})\) of subclass 3 refutes overgeneralization prior to analysis. 
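The energy budget just discussed can also be checked numerically. The sketch below integrates an Eq. (31)-type model with and without forcing; the parameter values are assumed, and \(p+q=0\) is imposed so that the quadratic terms drop out of \(dE/dt\) and the identity \(E^{\prime}=-\epsilon_{3}x_{3}^{2}+F_{1}x_{1}\) holds exactly.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, p, q, a, c, eps3, F1):
    """Eq. (31)-type model: quadratic terms p, q, linear feedbacks a, c,
    dissipation on x3 and (optional) constant forcing on x1."""
    x1, x2, x3 = x
    return [p * x2 * x3 - c * x2 + F1,
            q * x3 * x1 - a * x3 + c * x1,
            a * x2 - eps3 * x3]

# With p + q = 0 the quadratic terms cancel in dE/dt, so
# dE/dt = -eps3 * x3**2 + F1 * x1 exactly (values below are illustrative).
p, q, a, c, eps3 = 0.5, -0.5, -0.94, 0.25, 0.3

for F1 in (0.0, 0.5):
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.2, 0.2],
                    args=(p, q, a, c, eps3, F1), rtol=1e-9, atol=1e-9)
    E = 0.5 * np.sum(sol.y**2, axis=0)
    print(f"F1={F1}: E(0)={E[0]:.3f}, E(T)={E[-1]:.3f}, max E={E.max():.3f}")
```

Without forcing the energy decays monotonically, whereas nonzero \(F_{1}\) keeps feeding the \(x_{1}\) mode, in line with the discussion above.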
In summary, when the gyrostat equations have two nonlinear terms, chaos requires dissipation of the linear mode. As for forcing, the main factor is whether the gyrostat core conserves energy. If it does, then there are fewer ways in which forcing can appear for chaos to be present. In these circumstances, chaos requires fixed points with opposite signs of \(x_{3}^{*}\) (the linear mode) and this circumscribes where forcing can appear. The precise results for each subclass are easily found through the corresponding expression for \(x_{3}^{*}\) which takes the form \(\nu_{3}x_{3}^{*2}+\nu_{1}=0\) for subclasses \(1-2\), and with all cubic terms nonzero for subclasses \(3-4\). Previous studies have pointed to the importance of investigating how the arrangement of fixed points can sometimes circumscribe more complex dynamics (_Eschenazi et al._ (1989); _Gilmore_ (1998)), and the gyrostat equations present a clear example. Such analyses can also shed light on the origins of chaos in the model of _Lorenz_ (1963). Although nonlinear momentum advection is present in the model, symmetry in the assumed basis function of streamfunction renders nonlinear advection's effects absent. Therefore, with two nonlinear terms, and one linearly evolving mode \(X\) it is not surprising that chaos in this model requires dissipation to be present through nonzero kinematic viscosity, which is related to the parameter \(\sigma\). The model also includes additional dissipation terms, through effects of thermal diffusivity, and it is hardly surprising that the case of subclass 1 with nonzero \((F_{1},\epsilon_{2},\epsilon_{3})\) resembles the Lorenz attractor. One difference between the Lorenz model and subclass 1 is that the former has a third fixed point at the origin, whereas the models of subclass 1 have only the pair with nonzero \(x_{3}^{*}\). This is because the forcing in the Lorenz model appears through the term \(rX\), with Rayleigh number \(r\) being the bifurcation parameter, which couples momentum to temperature. In contrast the case \((F_{1},\epsilon_{2},\epsilon_{3})\) of subclass 1 has a constant forcing term, leading to only the pair of fixed points. Nevertheless, the resulting dynamics are quite similar, showing that only this pair of fixed points is essential to the appearance of chaos. This also shows how linear coupling and external forcing can have similar effects in such models. An implication is that momentum diffusion would be necessary and sufficient for chaos in the model of _Lorenz_ (1963), and the further presence of thermal diffusion only influences the shape of the attractor, analogous to \((F_{1},\epsilon_{3})\) and \((F_{1},\epsilon_{2},\epsilon_{3})\) of subclass 1. Furthermore, the Lorenz model has been obtained from a wide variety of physical processes (_Brindley and Moroz_ (1980); _Gibbon and McGuinness_ (1982); _Matson_ (2007)), and such inquiries can inform the understanding of irregular dynamics in a variety of systems. Since chaos in these cases does not depend on the status of \(\rho+q+r\), irregular dynamics can be experienced regardless of whether energy is conserved. In contrast, if the gyrostat core does not conserve energy, there are additional possibilities for forcing to appear. These possibilities do not require two fixed points with opposite \(x_{3}^{*}\). Moreover, such cases do not admit chaos when the energy conservation constraint is present in gyrostatic core. 
Thus, the appearance of chaos in these cases is closely tied to the presence of fewer invariants in the gyrostatic core. Broadly, our findings about forced-dissipative chaos in the Volterra gyrostat can be summarized as follows. When there is one linear mode (let us call it \(x_{3}\)), it sets the direction where points in phase space experience contraction, and dissipation must necessarily be present in \(x_{3}\). If the placement of external forcing allows two fixed points with opposite signs of \(x_{3}^{*}\), then attractors that resemble the Lorenz attractor can appear. This condition is necessary if the gyrostat core is energy-conserving, with these fixed points acting as repellors. If the gyrostat core does not conserve energy, then there are further ways for chaos to arise. These further arrangements are closely tied to the loss of invariants in subclasses of the gyrostat having two or more linear feedback terms. That is the reason this possibility is absent from Subclass 1, whose gyrostat core maintains two invariants even with \(p+q+r\neq 0\). In Subclass 3, with two linear feedbacks and thus only one invariant in the gyrostat core, if forcing is applied to \(x_{3}\) and thus coincides with dissipation, the attractor is shifted along the \(x_{3}\) axis and the aforementioned pair of fixed points cannot exist. Although the resulting chaos collapses volumes in phase space, it is possible that it is closely tied to the chaos in volume-conserving flows found earlier (_Seshadri and Lakshmivarahan_ (2023)). Chaos in general need not depend on the arrangement of fixed points, and many simple chaotic models have previously been found that do not contain any fixed points (_Sprott_ (1994)). Such minimal chaotic cases that only appear when the gyrostatic core does not conserve energy merit further inquiry. Of course, with forcing and dissipation both present, the model no longer has any quadratic invariants, and it remains an open question as to whether such chaotic cases with coincident forcing and dissipation arise from similar pathways as in the conservative case without invariants. Much of this seems to turn on whether adding forcing has effects that parallel the linear feedbacks that limit the number of invariants. There are other subclasses of the Volterra gyrostat, but we have not considered those with three nonlinear terms, whose fixed-point equations contain higher degrees. Further generalization of our present results to such subclasses, as well as to systems of coupled gyrostats, calls for explicit analysis of these models. Prior studies have indicated the appearance of Lorenz-like attractors in low-order models of higher dimension that discretize Rayleigh-B\(\acute{e}\)nard convection with additional modes (_Musielak and Musielak_ (2009); _Reiterer et al._ (1998)). Similar attractors are especially prevalent when such discretization maintains the conservation properties of these original models. Since such models must contain systems of coupled gyrostats, similar constraints on fixed points (and concomitant routes to chaos) might be present in higher dimensions. Nonlinear feedback is also an important extension of the gyrostat model (_Lakshmivarahan and Wang_ (2008b)). Studying chaos in models involving coupled gyrostats, investigating the relationships with the number of invariants, and the possibility of a wide range of chaotic attractors circumscribed by fixed points in these models, with and without the presence of nonlinear feedbacks, is a rich area of study. 
## Declarations of interest The authors have no competing interests to declare. ## Acknowledgments The authors are grateful to S Krishna Kumar and Rajat Masiwal for suggesting improvements to the manuscript.
2305.02985
Quarkonium spin alignment in a vortical medium
We use a potential model to investigate the phenomenology of quarkonium in a thermal rotating medium, where vorticity and spin density are not necessarily in equilibrium. We find that the quarkonium spin density matrix, as well as the binding energy and melting temperature, are sensitive to both the vorticity and the lack of equilibrium between vorticity and spin. This means that quarkonium spin alignment is a sensitive probe for vorticity and spin within the hydrodynamic phase. Information unequivocably pointing to spin-orbit non-equilibrium dynamics can be obtained from a combined study of quarkonium relative abundance and spin alignment, as well as experimentally obtainable off-diagonal density matrix elements.
Paulo Henrique De Moura, Kayman J. Goncalves, Giorgio Torrieri
2023-05-04T16:50:16Z
http://arxiv.org/abs/2305.02985v2
# Quarkonium spin alignment in a vortical medium ###### Abstract We use a potential model to investigate the phenomenology of quarkonium in a thermal rotating medium, where vorticity and spin density are not necessarily in equilibrium. We find that the quarkonium spin density matrix, as well as the binding energy and melting temperature, are sensitive to both the vorticity and the lack of equilibrium between vorticity and spin. This means that quarkonium spin alignment is a sensitive probe for vorticity and spin within the hydrodynamic phase. Information unequivocably pointing to spin-orbit non-equilibrium dynamics can be obtained from a combined study of quarkonium relative abundance and spin alignment, as well as experimentally obtainable off-diagonal density matrix elements.

## I Introduction

Quarkonium has been used as a probe of thermodynamic properties of the Quark-Gluon Plasma (QGP) since the seminal paper from [1]. The heavy mass scale means that it is a probe that can both be examined reliably by theoretical calculations and be susceptible to non-trivial in-medium effects, serving as a "thermometer", sensitive to the interplay of thermal fluctuations and the QCD medium, and as a probe for non-equilibrium behavior [2; 3; 4; 5]. The advent of the study of vorticity in heavy ion collisions [6] added a potentially new arena where quarkonium could be used. In fact, we shall argue that quarkonium provides unique opportunities for the phenomenology of hydrodynamics with spin. First of all, quarkonium can be formed early in the collision and can survive throughout the quark gluon plasma evolution. Unlike polarized \(\Lambda\)s [6] and spin-aligned vector mesons [7; 8; 9; 10; 11; 12], it is potentially sensitive to the entire dynamics of the fluid and not just to the freeze-out. Furthermore, the long-lived quarkonium state is spin 1, having a 3\(\times\)3 density matrix with 8 degrees of freedom, 6 of which are accessible to spin alignment measurements. Thus, if, as seems to be theoretically highly likely [13; 14; 15; 16; 17; 18; 19; 20], spin and vorticity are not in equilibrium, this lack of equilibration can be imprinted on the density matrix's measurable off-diagonal elements \(\rho_{0,\pm 1},\rho_{\pm 1,\mp 1}\)[11] (no equivalent elements exist for the "qubit" density matrix of a fermion). In fact, these elements have recently been measured [21]. Last but not least, bottomonium and to a certain extent charmonium states can be viewed as solutions to a Schrödinger equation with heavy quark wavefunctions moving around a QCD potential (including a weakly coupled and a confined part) [22; 23]. It is therefore possible to understand, both analytically [24] and via effective theory [25], the effect that vorticity will have on the properties of quarkonium. In this work, we shall go in this direction, combining the insights developed in [11] with a potential model for quarkonium solved using standard methods extended to rotating frames [26; 27; 28; 29] and finite temperature [30; 31]. In section II we shall assess currently available experimental data, and point out what would be necessary to probe spin-orbit non-equilibrium. Then in section III we calculate the quarkonium wavefunction properties, namely binding energy and a semi-classical estimate of the melting temperature, in a rotating frame. Finally, in section IV we calculate quarkonium observables that could indicate a lack of equilibrium between spin and vorticity.
## II The quarkonium spin density matrix In [11], we have argued that vector spin alignment contains crucial information on the still-unknown spin hydrodynamic evolution in heavy ion collisions, provided that not just the \(\rho_{00}\) coefficient but also the "off-diagonal" coefficients are measured. The former, \(\rho_{00}\), was measured in [7; 8] and is sensitive to \(\theta\), the angle w.r.t. the spin alignment direction, which for heavy ions would be mainly the reaction plane, reflecting the vorticity structure in analogy to the global polarization measurement [6]. The other coefficients, related to density matrix elements \(\rho_{0\pm 1,\pm 1\pm 1,\pm 1\mp 1}\) (see equation (2) of [11]), would depend on a second "reference" angle \(\phi\), whose most logical definition is in terms of the beam axis. In such a set-up, the non-equilibrium between spin and vorticity would manifest itself in the interplay of transverse vs. longitudinal polarization [6]. The heavy ion system actually lends hope that this interplay could occur (Fig. 1): transverse polarization is thought to be present from the beginning, since it is the result of the angular momentum of the initial state of an off-central collision. Figure 1: A schematic representation of how the vorticity in heavy ion collisions looks at early and late times. Blue dashed arrows represent vorticity, cyan dot-dashed ones polarization, red solid ones flow. The mis-alignment of spin in the right panel is due to the lack of equilibrium between vorticity and polarization [13; 15]. Longitudinal polarization forms from azimuthal gradients, on a time-scale comparable to the formation of the momentum anisotropy causing elliptic flow. If polarization "relaxes to vorticity" [13; 15], one expects polarization to be aligned in the transverse direction, but comparatively mis-aligned in the longitudinal one [15], which seems to be out of phase with some dynamical calculations [6]. The hope, therefore, is that off-diagonal (longitudinal-transverse) coefficients are potentially important, for they could signal deviations from local equilibrium (included via a Cooper-Frye type formula [32]) due to the presence of two distinct axial currents, representing spin and vorticity, evolving on different time-scales [15]. In [11] we illustrated this with a coalescence type model. Coalescence of only spin within a vortical background should not change the coherence of the density matrix, since it is a unitary process and the dynamics is symmetric around the vortical axis. But assuming vorticity and pre-existing spin density are not in equilibrium and point in different directions, this is no longer true [11]; vorticity is a "classical" background, interacting with the quantum spin state, so if coalescence happens in a vortical background (i.e. if spin and vorticity are out of equilibrium) one expects impurity of the density matrix. Mathematically, the loss of purity is manifest in Eq. (23) of [11], with \(P_{L}(w)\) representing the (unknown) classical probability of a vortex \(\omega\) giving an angular momentum \(L\) to the meson wavefunction. When this probability becomes uniform (\(P_{L}(w)\rightarrow\) constant) we recover a maximally impure state1. For vector bosons, this impurity is manifest in the off-diagonal matrix components (see the discussion between eqs 4 and 5 of [11]). Footnote 1: Though not quite the Cooper-Frye ansatz of [32]. 
A \(P_{L}=\) constant impure state can be regarded as a microcanonical density matrix assuming the diquark quarkonium state is exact. The grand canonical matrix inherent in the Cooper-Frye formula of [32] and its vector extension would arise if all values of \(L\) up to \(\infty\) were allowed due to angular momentum fluctuations and a bath of degrees of freedom. While this data as yet does not exist for vector mesons, it does exist for the quarkonia \(J/\psi\) and \(\Upsilon\) states [21], since the ALICE Collaboration measurement of the quarkonium polarization included the off-diagonal values of the spin density matrix. Therefore, we can make a direct connection between the polarization parameters \(\lambda_{\theta}\), \(\lambda_{\phi}\) and \(\lambda_{\theta\phi}\) of the density matrix [21] and the coefficients used in [11] (table in Eq. 1) \[\begin{array}{ccc}\mbox{Variable}&\mbox{Element}&\mbox{coefficient}\times\frac{3}{4\pi}\\ \rho_{00}&\rho_{00}&\cos^{2}\theta\\ \frac{1-\rho_{00}}{2}&\frac{\rho_{11}+\rho_{-1-1}}{2}&\sin^{2}\theta\\ r_{10}&Re[\rho_{-10}-\rho_{10}]&\sin(2\theta)\cos(\phi)\\ \alpha_{10}&Im[-\rho_{-10}+\rho_{10}]&\sin(2\theta)\sin(\phi)\\ r_{1,-1}&Re[\rho_{1,-1}]&\sin^{2}\theta\cos(2\phi)\\ \alpha_{1,-1}&Im[\rho_{1,-1}]&\sin^{2}\theta\sin(2\phi)\\ \end{array} \tag{1}\] \[\rho_{00}=\frac{1+\lambda_{\theta}}{3+\lambda_{\theta}}\quad,\quad r_{1,-1}=\frac{\lambda_{\phi}}{3+\lambda_{\theta}}\quad,\quad r_{10}=\frac{\lambda_{\theta\phi}}{3+\lambda_{\theta}} \tag{2}\] This is possible by comparing the \(J/\psi\) angular distribution with the standard vector meson angular distribution shown in Eq. 1 of [11] (Eq. 2). Thus, we can perform the analyses presented in [11] to relate \(\lambda_{\theta,\phi,\theta\phi}\) to the wave function coherence via the parametrization in terms of Gell-Mann matrices. Choosing the \(n_{3-8}\) basis for this parametrization, we need to solve the following system of algebraic equations derived in [11] in terms of the frame relating the lab to the spin direction (defined by angles \(\theta_{r},\phi_{r}\)) \[\frac{1}{12}\left(3\left(n_{8}-\sqrt{3}\;n_{3}\right)\cos\left(2\theta_{r}\right)-\sqrt{3}\;n_{3}+n_{8}+4\right)=\rho_{00} \tag{3}\] \[\frac{\left(n_{8}-\sqrt{3}\;n_{3}\right)\sin\left(\theta_{r}\right)\cos\left(\theta_{r}\right)\cos\left(\phi_{r}\right)}{\sqrt{2}}=r_{10} \tag{4}\] \[-\frac{\left(\sqrt{3}\;n_{3}+3n_{8}\right)\sin\left(\theta_{r}\right)\sin\left(\phi_{r}\right)}{3\sqrt{2}}=\alpha_{10} \tag{5}\] \[\phi_{r}=-\frac{1}{2}\tan^{-1}\left(\frac{\alpha_{1,-1}}{r_{1,-1}}\right) \tag{6}\] Now, we make the change of variable \(\tilde{n}=n_{8}-\sqrt{3}\;n_{3}\) and use the fact that the variables \(\alpha_{10}\) and \(\alpha_{1,-1}\) are equal to zero. 
So, we can write this system of equations in the following form: \[\frac{1}{12}\left(3\tilde{n}\cos\left(2\theta_{r}\right)+\tilde{n}+4\right)=\rho_{00} \tag{7}\] \[\frac{\tilde{n}\sin\left(\theta_{r}\right)\cos\left(\theta_{r}\right)\cos\left(\phi_{r}\right)}{\sqrt{2}}=r_{10} \tag{8}\] \[\phi_{r}=0 \tag{9}\] Therefore, we have the following solution: \[\tilde{n}\left(\rho_{00},r_{10}\right)=-\frac{(1-3\rho_{00})^{2}+3\sqrt{(1-3\rho_{00})^{4}+4(1-3\rho_{00})^{2}r_{10}^{2}}}{6\rho_{00}-2} \tag{11}\] \[\Theta_{1}\left(\rho_{00},r_{10}\right)=-\sqrt{\frac{2(1-3\rho_{00})^{2}-2\sqrt{(1-3\rho_{00})^{4}+4(1-3\rho_{00})^{2}r_{10}^{2}}+6r_{10}^{2}}{2(1-3\rho_{00})^{2}+9r_{10}^{2}}} \tag{12}\] \[\Theta_{2}\left(\rho_{00},r_{10}\right)=\sqrt{\frac{(1-3\rho_{00})^{2}-\sqrt{(1-3\rho_{00})^{4}+4(1-3\rho_{00})^{2}r_{10}^{2}}+3r_{10}^{2}}{2(1-3\rho_{00})^{2}+9r_{10}^{2}}} \tag{13}\] \[\Theta_{3}\left(\rho_{00},r_{10}\right)=\frac{\Theta_{2}\left(\rho_{00},r_{10}\right)\left(2(1-3\rho_{00})^{2}+2\sqrt{(1-3\rho_{00})^{4}+4(1-3\rho_{00})^{2}r_{10}^{2}}\right)}{2(3\rho_{00}-1)r_{10}} \tag{14}\] \[\theta_{r}=\tan^{-1}\left(\Theta_{1}\left(\rho_{00},r_{10}\right),\Theta_{3}\left(\rho_{00},r_{10}\right)\right) \tag{15}\] So, using the polarization parameters obtained by the ALICE collaboration [21] for different transverse momentum \(p_{T}\) ranges, we can determine whether the density matrix represents a coherent state or not. To do this, we use equation 11 and obtain figure 2. Now, the coefficients in the Collins-Soper frame, given in the ranges \(4<p_{T}<6\)\((GeV/C^{2})\) and \(6<p_{T}<10\)\((GeV/C^{2})\), respectively result in \(\tilde{n}=(0.09\pm 0.10,0.02\pm 0.07)\). Comparing with the helicity frame value \(\tilde{n}=0.09\pm 0.11\) for \(6<p_{T}<10\)\((GeV/C^{2})\), we can see that they agree within error bars. Looking at figure 2, we can conclude that the density matrix of the \(J/\psi\) does not represent a pure state, since none of the values for \(n_{3,8}\) obtained from the data intersects the points marking the pure states, i.e. density matrices satisfying \(\rho^{2}=\rho\). This might indicate that the statistical freeze-out advocated in [3; 32] is a good estimate of particle production in heavy-ion collisions, because the density matrix does not represent a coherent state as argued in [11]. In the bottomonium \(\Upsilon(1S)\) case, we can see in the right panel of Fig. 2 that, because of the large uncertainty, we do not know whether the density matrix represents a coherent state or not. However, taking these conclusions as definitive would be highly premature, as Fig. 4 of [11] and the definition of \(P_{L}(w)\) given in the introduction show. We have no idea what \(P_{L}(w)\) is beyond the fact that it overall conserves momentum, but it acts as a projector. One recovers a pure state when \(\rho_{ij}=\delta_{Ll}\) (there is a certainty of vorticity giving a certain momentum) and a maximally mixed state when the momentum given by vorticity is independent of \(L\). So the measurement in Fig. 4 [11] is directly connected to how out of equilibrium vorticity and spin are, and how much vorticity vs pre-existing spin influences the final spin of the vector meson. Linear combinations of the different \(L\)-values in Fig. 4 of [11] are possible, illustrating a probability of different spin configurations. 
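To make the extraction above concrete, the following is a minimal numerical sketch (not the analysis code of [21] or [11]) combining Eq. 2 and Eq. 11: it converts a set of polarization parameters \(\lambda_{\theta},\lambda_{\phi},\lambda_{\theta\phi}\) into \(\rho_{00}\), \(r_{1,-1}\), \(r_{10}\) and then evaluates \(\tilde{n}=n_{8}-\sqrt{3}\,n_{3}\). The input values are illustrative placeholders rather than the measured ALICE numbers.

```python
import numpy as np

def lambdas_to_matrix_elements(lam_theta, lam_phi, lam_theta_phi):
    """Eq. 2: polarization parameters -> density-matrix combinations."""
    denom = 3.0 + lam_theta
    return (1.0 + lam_theta) / denom, lam_phi / denom, lam_theta_phi / denom

def n_tilde(rho00, r10):
    """Eq. 11: n8 - sqrt(3) n3 from rho00 and r10 (alphas set to zero)."""
    a = (1.0 - 3.0 * rho00) ** 2
    root = np.sqrt(a ** 2 + 4.0 * a * r10 ** 2)
    return -(a + 3.0 * root) / (6.0 * rho00 - 2.0)

# Hypothetical inputs for illustration only (not the values reported in [21])
rho00, r1m1, r10 = lambdas_to_matrix_elements(0.05, 0.01, 0.02)
print("rho00=%.3f  r_{1,-1}=%.3f  r_{10}=%.3f" % (rho00, r1m1, r10))
print("n_tilde=%.3f" % n_tilde(rho00, r10))
```

The resulting \(\tilde{n}\) can then be compared against the pure-state points shown in figure 2.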
Note that these coefficients are given in terms of an angle \(\theta\), which in [11] is related to \(\theta_{r}\), the angle between the hadronization frame and the lab frame. This angle of course depends on the detailed hydrodynamical and spin-hydrodynamical evolution of the system, but it is obviously highly dependent on the reaction plane angle \(\Phi\). Considering the harmonic behavior of the coefficients in Fig. 4 of [11] w.r.t. \(\theta_{r}\) (most coefficients average to zero over all angles), it would therefore be crucial to measure \(\lambda_{\theta,\phi,\theta\phi}\) not as a function of \(p_{T}\) as in [21] but as a function of the azimuthal reaction plane angle. A modulation would be a clear signature for a non-trivial \(P_{L}(w)\), which can then be harmonically decomposed into the \(L=0,1,2\) components of Fig. 4 of [11] to obtain information on the impact of spin vs vorticity in \(J/\psi\) vs \(\Upsilon\) hadronization. Figure 2: The experimental results of \(J/\psi\) and \(\Upsilon\) polarization measurements analyzed in terms of the Gell-Mann matrix representation of [11]. These results were obtained using the helicity and Collins-Soper frames for \(J/\psi\) polarization [21]. The other Collins-Soper values were not included in this plot because they were above the helicity \(6<p_{T}<10\) (\(GeV/C^{2}\)) uncertainty band, as discussed in the text. If the dependence w.r.t. \(\theta_{r}\) is compatible with zero in each \(\Phi\) bin, as it was for \(p_{T}\), this is good evidence for a statistical Cooper-Frye freeze-out as in [3; 32]. Schematically, these two alternatives are illustrated in Fig. 3. The quantitative details of Fig. 3 would require a hydrodynamic simulation with a model where spin and vorticity are not in equilibrium. At the moment, such a model is not available. For this reason, in the rest of the paper we proceed to examine the microscopic dynamics of the charmonium state using a potential model, with a view to developing _quantitative_ signatures of non-equilibrium between charmonium spin and vorticity. Figure 3: A schematic illustration of how we expect the coefficients \(\lambda_{\theta,\phi,\theta\phi}\) to evolve with reaction plane angle \(\Phi-\Phi_{reaction}\) in the two scenarios. In [21], these coefficients are compatible with zero when integrated over this angle. A flat dependence (a) indicates a maximally incoherent Cooper-Frye/thermal type production. A harmonic dependence (b) (whose exact shape will be sensitive to the dynamics) might indicate a more non-trivial coalescence scenario involving vorticity. Note that both cases are consistent with zero when integrated over \(\Phi\), as was done in [21]. ## III The quarkonium state in rotating reference frames ### The Schrodinger equation We are interested in how the angular momentum due to vortices couples with the quark spin. So, we need to include an extra term, \(\mathbf{\omega}\cdot\mathbf{S}\) [28], \[\mathcal{H}=\frac{(\mathbf{p}-m\mathbf{\omega}\times\mathbf{r})^{2}}{2m}-\frac{m}{2}(\mathbf{\omega}\times\mathbf{r})^{2}-\mathbf{\omega}\cdot\mathbf{S}+V(r) \tag{16}\] Now, we can write equation 16 for the two-body case. 
Thus we have the following expression: \[\mathcal{H}=\sum_{i=1,2}\left(\frac{(\mathbf{p}_{i}-m_{i}\mathbf{\omega}_{i}\times \mathbf{r}_{i})^{2}}{2m_{i}}-\frac{m_{i}}{2}(\mathbf{\omega}_{i}\times\mathbf{r}_{ i})^{2}-\mathbf{\omega}_{i}\cdot\mathbf{S}_{\mathbf{i}}\right)+V(|\mathbf{r_{1}}- \mathbf{r_{2}}|) \tag{17}\] Using the relations: \[\left\{\begin{aligned} &\mathbf{P}=\mathbf{p_{1}}+\mathbf{p_{2}}, \qquad\qquad\qquad\mathbf{p}=\mu\left(\frac{\mathbf{p_{1}}}{m_{1}}-\frac{ \mathbf{p_{2}}}{m_{2}}\right)\\ &\mu=\frac{m_{1}\;m_{2}}{m_{1}+m_{2}},\\ &\mathbf{r_{1}}=\mathbf{R}+\frac{m_{2}}{m_{1}+m_{2}}\mathbf{r}, \qquad\mathbf{r_{2}}=\mathbf{R}-\frac{m_{1}}{m_{1}+m_{2}}\mathbf{r}\end{aligned}\right. \tag{18}\] As the two quarks that form the mesons are in the same vortical background, we can suppose that \(\mathbf{\omega}_{1}=\mathbf{\omega}_{2}=\mathbf{\omega}\) and \(\mathbf{S}=\mathbf{S_{1}}+\mathbf{S_{2}}\), and then write: \[\mathcal{H}=\frac{P^{2}}{2M}+\frac{p^{2}}{2\mu}-\mathbf{P}\cdot(\mathbf{\omega} \times\mathbf{R})-\mathbf{p}\cdot(\mathbf{\omega}\times\mathbf{r})-\mathbf{\omega} \cdot(\mathbf{S}_{1}+\mathbf{S}_{2})+V(|\mathbf{r}|) \tag{19}\] Since \(\mathbf{S}=\mathbf{S}_{1}+\mathbf{S}_{2}\) and we are interested just in the reduced coordinate, \[\mathcal{H}=\frac{p^{2}}{2\mu}-\mathbf{p}\cdot(\mathbf{\omega}\times\mathbf{r})- \mathbf{\omega}\cdot\mathbf{S}+V(|\mathbf{r}|) \tag{20}\] We can rewrite this equation as \[i\frac{\partial}{\partial t}=-\frac{1}{2\mu}\mathbf{\nabla}^{2}-\omega L_{z}- \omega S_{z}+V(|\mathbf{r}|) \tag{21}\] where we used \(\mathbf{p}=-i\hbar\mathbf{\nabla}\), \(\mathcal{H}=i\frac{\partial}{\partial t}\), \(\mathbf{\omega}=\omega\hat{\mathbf{z}}\) and \(L_{z}=-i\frac{\partial}{\partial\phi}\). In rotating frames, the contribution in Hamiltonian is only the product between orbital angular momentum due to the vortices with the spin of meson. Thus, in this noninertial frame, the contribution is just to \(L\) different from zero. Then, we will write just the radial part of the equation 21. \[\frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{dR(r)}{dr}\right)+\left[2\mu(E-V(r)) -\frac{l(l+1)}{r^{2}}-2\mu\omega L_{z}-2\mu\omega S_{z}\right]R(r)=0 \tag{22}\] To go forward, we shall assume the rotation to be classical and related to the hydrodynamic vorticity. Then we can define \(\omega\) in terms of the a conserved circulation \[C=\oint\mathbf{v}\cdot d\mathbf{l} \tag{23}\] and also assume the Cornell potential [22] \[V(r)=b\ r-\frac{\alpha_{eff}}{r} \tag{24}\] so \[\frac{1}{2\mu r^{2}}\frac{d}{dr}\left(r^{2}\frac{dR(r)}{dr}\right)+\left[2\mu (E-V(r))-\frac{l(l+1)}{2\mu r^{2}}-\frac{m_{j}\ C}{\pi r^{2}}\right]R(r)=0 \tag{25}\] Making \(\chi(r)=rR(r)\) and change the variable \(x=1/r\), then the equation 25, we get: \[\frac{d^{2}\chi(x)}{dx^{2}}+\frac{2x}{x^{2}}\frac{d\chi(x)}{dx}+\frac{2\mu}{x ^{4}}\left[E-\frac{b}{x}+\alpha_{eff}x-\frac{l(l+1)}{2\mu}x^{2}-\frac{m_{j}\ C}{2 \pi}x^{2}\right]\chi(x)=0 \tag{26}\] Now we will expand the variable \(y\) (\(y=x-\delta\)) around zero with \(\delta=1/r_{0}\) where \(r_{0}\) is the mean meson radius. 
Therefore we have: \[\frac{b}{x}\approx b\left(\frac{3}{\delta}-\frac{3x}{\delta^{2}}+\frac{x^{2}}{ \delta^{3}}\right) \tag{27}\] Thus we can write the equation 26 in following form: \[\frac{d^{2}\chi(x)}{dx^{2}}+\frac{2x}{x^{2}}\frac{d\chi(x)}{dx}+\frac{2\mu}{x ^{4}}\left[E-\frac{3b}{\delta}+\left(\alpha_{eff}+\frac{3b}{\delta^{2}}\right) x-\left(\frac{l(l+1)}{2\mu}+\frac{m_{j}\ C}{2\pi}+\frac{b}{\delta^{3}}\right)x^{2} \right]\chi(x)=0 \tag{28}\] The coefficients \(H_{i}\) with \(i=0,1,2\), are given by: \[H_{0}=-2\mu\left(E-\frac{3b}{\delta}\right) \tag{29}\] \[H_{1}=2\mu\left(\alpha_{eff}+\frac{3b}{\delta^{2}}\right) \tag{30}\] \[H_{2}=-2\mu\left(\frac{l(l+1)}{2\mu}+\frac{m_{j}\;C}{2\pi}+\frac{b}{\delta^{3}}\right) \tag{31}\] Thus, we can write: \[\frac{d^{2}\chi(x)}{dx^{2}}+\frac{2x}{x^{2}}\frac{d\chi(x)}{dx}+\frac{2\mu}{x^{4 }}[-H_{0}+H_{1}\;x+H_{2}\;x^{2}]R(x)=0 \tag{32}\] Now, we can compare the differential equation 32 with A1, we get: \[\sigma(x)=x^{2}\;\;\;\;\;\;,\;\;\;\;\;\tilde{\tau}(x)=2x\;\;\;\;\;,\;\;\;\;\; \tilde{\sigma}(x)=-H_{0}+H_{1}\;x+H_{2}\;x^{2} \tag{33}\] The function \(\pi(s)\) from equation A5 is given by: \[\pi(x)=\pm\sqrt{(k-H_{2})x^{2}-H_{1}x+H_{0}} \tag{34}\] Choosing the negative solution the polynomial inside of square must have discriminant equal to zero, so we get: \[k=\frac{1}{4H_{0}}\left(H_{1}^{2}+4H_{0}H_{2}\right) \tag{35}\] Then, \[\pi(x)=-\frac{1}{2\sqrt{H_{0}}}\left(H_{1}x-2H_{0}\right) \tag{36}\] and \(\tau(x)\) is given by equation A8, so \[\tau(x)=2x-\frac{1}{\sqrt{H_{0}}}\left(H_{1}x-2H_{0}\right) \tag{37}\] Then equaling the equations A6 and A7 we have: \[\frac{H_{1}^{2}}{4H_{0}}+H_{2}-\frac{H_{1}}{2\sqrt{H_{0}}}=\frac{H_{1}}{\sqrt {H_{0}}}n-n(n+1) \tag{38}\] Solving the equation 38 to \(H_{0}\), we get: \[\sqrt{H_{0}}=\frac{H_{1}}{(1+2n)\pm\sqrt{1-4H_{2}}} \tag{39}\] In this way, we can obtain the energy levels and wavefunctions as a function of the rotation parameters via the Eigenvalue equation \[\sqrt{-2\mu\left(E-\frac{3b}{\delta}\right)}=\frac{2\mu\left(\alpha_{eff}+ \frac{3b}{\delta^{2}}\right)}{(1+2n)+\sqrt{1+4l(l+1)+\frac{4\mu\;m_{j}\;C}{ \pi}+\frac{8ub}{\delta^{3}}}} \tag{40}\] ### Mass and vorticity The previous equation gives the binding energy in the rotating frame \[E_{n,l,m}=\frac{3b}{\delta}-\frac{2\mu\left(\alpha_{eff}+\frac{3b}{\delta^{2}} \right)^{2}}{\left[(1+2n)+\sqrt{1+4l(l+1)+\frac{4\mu\;m_{j}\;C}{\pi}+\frac{8\mu b }{\delta^{3}}}\right]^{2}} \tag{41}\] With he wave function determination is explicitily given in Appendix B using the method outlined in Appendix A. We can obtain the quarkonium mass from the equation 41: \[M=2m_{q}+E_{n,l,m} \tag{42}\] \[M=2m_{q}+\frac{3b}{\delta}-\frac{2\mu\left(\alpha_{eff}+\frac{3b}{\delta^{2}} \right)^{2}}{\left[(1+2n)+\sqrt{1+4l(l+1)+\frac{4\mu\;m_{j}\;C}{\pi}+\frac{8\mu b }{\delta^{3}}}\right]^{2}} \tag{43}\] Fig. 4 and Fig. 5 gives a quantitative value of the binding energy for quarkonia polarized with and opposite the vorticity as a function of the circulation. Unsurprisingly, one is the opposite of the other. This, however, is not experimentally detectible since spin alignment measurements do not distinguish between \(m=\pm 1\) states. The mass difference in section 4 would therefore appear as an impact parameter dependent _widening_ of the quarkonium state. 
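As a numerical illustration of Eq. 43, the following minimal sketch evaluates the quarkonium mass as a function of the circulation \(C\) for the \(m_{j}=-1,0,+1\) states, using the charmonium parameters listed in Table 1 (section IV); the quantum numbers \(n=l=0\) and the grid of \(C\) values are illustrative choices rather than fits.

```python
import numpy as np

# Charmonium parameters from Table 1 (GeV-based units)
m_q, alpha_eff, b, delta, mu = 1.209, 1.244, 0.2, 0.231, 0.6045

def quarkonium_mass(C, m_j, n=0, l=0):
    """Eq. 43: mass of the (n, l, m_j) state for circulation C."""
    root = np.sqrt(1.0 + 4.0 * l * (l + 1)
                   + 4.0 * mu * m_j * C / np.pi
                   + 8.0 * mu * b / delta ** 3)
    binding = 3.0 * b / delta \
        - 2.0 * mu * (alpha_eff + 3.0 * b / delta ** 2) ** 2 / ((1.0 + 2.0 * n) + root) ** 2
    return 2.0 * m_q + binding

for C in (0.0, 1.0, 2.0):   # illustrative circulation values
    masses = [quarkonium_mass(C, m_j) for m_j in (-1, 0, 1)]
    print("C=%.1f  M(m_j=-1,0,+1) = %.3f, %.3f, %.3f GeV" % (C, *masses))
```

At \(C=0\) the three states are degenerate and the formula reduces to the usual vacuum potential-model value, while a finite circulation shifts the \(m_{j}=\pm 1\) states in opposite directions, as in Figs. 4 and 5.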
However, since a similar widening occurs in any kind of in-medium interaction, particularly in interactions leading to the melting of quarkonium [5], a univocal proof of spin-orbit coupling can not be obtained by measuring \(E_{\pm 1}\) alone. As we will show in section IV, however, off-diagonal matrix elements could be of help here. ### Vorticity and melting We can use the mass dependence calculated in the previous sub-section to study, qualitatively, how the temperature for quarkonium melting (melting temperature, \(T_{melt}\)) changes in a non-inertial frame. This calculation can be considered a semi-classical estimate of the imaginary part of the energy calculated in subsection III.2. Using a semi-classical analysis with \(\langle p\rangle\sim 1/r\) and \(\langle p^{2}\rangle\sim 1/r^{2}\) we can write the energy as \[E(r)=\frac{1}{2\mu r^{2}}-m_{j}\;\omega+V(r) \tag{44}\] Figure 4: Charmonium mass in relation to parameter \(C\) for various energy and momentum levels (notation follows atomic physics quantum numbers). Figure 5: Bottomonium mass in relation to parameter \(C\) for various energy and momentum levels (notation follows atomic physics quantum numbers). For the \(J/\psi\) melting, we shall use a potential that takes into consideration Debye screening, where \(\lambda_{D}\) is the Debye screening length: \[V(r)=-\alpha_{eff}\frac{e^{-r/\lambda_{D}}}{r} \tag{45}\] Next, using the circulation definition 23, we obtain \[E(r)=\left[\frac{1}{2\mu}-\frac{m\ C}{2\pi}\right]\frac{1}{r^{2}}-\frac{\alpha_{eff}e^{-r/\lambda_{D}}}{r} \tag{46}\] A bound state exists when the energy 46 has a minimum, so we can write: \[\frac{dE(r)}{dr}=-\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right]\frac{1}{r^{3}}+\alpha_{eff}e^{-r/\lambda_{D}}\left[1+\frac{r}{\lambda_{D}}\right]\frac{1}{r^{2}}=0 \tag{47}\] and therefore \[r\left[1+\frac{r}{\lambda_{D}}\right]\alpha_{eff}e^{-r/\lambda_{D}}=\frac{1}{\mu}-\frac{m\ C}{\pi} \tag{48}\] Making the variable change \(\tilde{r}=r/\lambda_{D}\), we get: \[f(\tilde{r})=\tilde{r}\left(1+\tilde{r}\right)e^{-\tilde{r}}=\frac{1}{\alpha_{eff}\lambda_{D}}\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right] \tag{49}\] The maximum value of \(f(\tilde{r})\) is 0.840 at \(\tilde{r}=1.92\); beyond this limit there are no bound states. Thereby, we can write the inequality for this limit: \[\frac{1}{\alpha_{eff}\lambda_{D}}\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right]>0.840 \tag{50}\] At this point, we utilize the Debye length \(\lambda_{D}\) from lowest-order perturbative QCD [31]2: Footnote 2: The \(\lambda_{D}\) value in non-inertial frames is not the same as in the inertial case. Nevertheless, the inertial-frame Debye mass is a good estimate because the non-inertial effect gives only a second-order contribution. \[\lambda_{D}=\sqrt{\frac{2}{9\pi\alpha_{eff}}}\frac{1}{T} \tag{51}\] We can obtain an estimate for the melting temperature \(T_{melt}\) in non-inertial frames by evaluating equation 49 at the maximum value of \(f(\tilde{r})\): \[0.840=\frac{1}{\alpha_{eff}\lambda_{D}}\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right]=\sqrt{\frac{9\pi}{2\alpha_{eff}}}\,T_{melt}\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right] \tag{52}\] So, \[T_{melt}=0.840\sqrt{\frac{2\alpha_{eff}}{9\pi}}\left[\frac{1}{\mu}-\frac{m\ C}{\pi}\right]^{-1} \tag{53}\] We can see from figure 6 that the melting temperature \(T_{melt}\) depends on the spin polarization quantum number \(m\). In particular, for \(m=1\) this temperature increases considerably. 
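The melting estimate can be illustrated numerically in the same way. The sketch below evaluates Eq. 53, i.e. the melting condition following from Eqs. 49-51, for the three spin projections using the charmonium parameters of Table 1; the circulation values are again illustrative, and a non-positive bracket is read as a state that remains bound at any temperature within this semi-classical estimate.

```python
import numpy as np

alpha_eff, mu = 1.244, 0.6045   # charmonium parameters from Table 1 (GeV units)

def t_melt(C, m):
    """Eq. 53: semi-classical melting temperature for spin projection m."""
    bracket = 1.0 / mu - m * C / np.pi
    if bracket <= 0.0:
        return np.inf   # no melting within this estimate
    return 0.840 * np.sqrt(2.0 * alpha_eff / (9.0 * np.pi)) / bracket

for C in (0.0, 1.0, 2.0):   # illustrative circulation values
    temps = [t_melt(C, m) for m in (-1, 0, 1)]
    print("C=%.1f  T_melt(m=-1,0,+1) = %.3f, %.3f, %.3f GeV" % (C, *temps))
```

The asymmetry described in the text is directly visible in such a scan: the rise of \(T_{melt}\) for \(m=+1\) is considerably larger than the drop for \(m=-1\).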
For \(m=0\) it predictably does not change, and for \(m=-1\) it decreases but, and this is a fundamental point, it decreases a lot less than the increase for \(m=1\). Therefore, in a vortical medium polarized quarkonia will be much more likely to survive, while un-polarized or anti-polarized quarkonia's survival probability does not change that much w.r.t. quarkonia in a non-rotating medium. This finding is qualitatively important since it shows that vorticity can link quarkonium suppression and polarization via "distillation". In a vortical medium, the melting probability of quarkonium states will depend strongly on their polarization. Such a mechanism will result, analogously to the \(\eta\) [33], in a strong and novel dependence of quarkonium abundance on centrality, which could be investigated quantitatively by a hydrodynamic model. One can be "brave" and try to apply our potential model to the \(\phi\) meson, which is formed of strange and anti-strange quarks. The strong vorticity-dependent melting might be able to explain the strong spin alignment observed in experiment, which seems incompatible with Cooper-Frye freezeout [8]. In the picture described here, the melting temperature for aligned \(\phi\) rises a lot, while the melting temperature for anti-aligned and non-aligned \(\phi\) stays nearly the same. Thus, the large apparent spin alignment of \(\phi\)s comes from a "distillation" process where only polarized \(\phi\)s survive, and this increases the relative \(\phi\) spin alignment. Of course, applying potential models to the \(\phi\) is not justified theoretically and requires model-building, so such a solution will need considerable quantitative and phenomenological development. In the absence of a detailed hydrodynamic modeling of the correlation between charmonium suppression and polarization, however, distinguishing the effects outlined above from other variations of charmonium suppression with centrality, namely the effect of the centrality dependence of temperature, looks complicated, with vorticity just adding an "event-by-event widening" to the processes of melting and regeneration of quarkonia. The next section, however, provides a direct, experimentally measurable indication of spin-orbit non-equilibrium in quarkonium polarization measurements. ## IV Density matrix elements and vorticity Turning our attention back to the density matrix, we can write this operator in the energy basis in the following way: \[\hat{\rho}=e^{-\beta\hat{H}}\ ,\qquad\beta=\frac{1}{T} \tag{54}\] where the eigenvalues are given by equation 41. At this point, we make a rotation to the lab frame: \[\hat{\rho}^{r}=U(\theta_{r},\phi_{r})\;\hat{\rho}\;U^{-1}(\theta_{r},\phi_{r}) \tag{55}\] Now, we can expand equation 55 in the following way: \[\rho_{m,m^{\prime}}^{r}=\sum_{m^{\prime\prime},m^{\prime\prime\prime}}e^{i(m^{\prime\prime\prime}-m)\phi_{r}}d_{m,m^{\prime\prime}}^{j}(\theta_{r})\rho_{m^{\prime\prime},m^{\prime\prime\prime}}\left[d_{m^{\prime\prime\prime},m^{\prime}}^{j}(\theta_{r})\right]^{-1}\ ,\qquad\rho_{m^{\prime\prime},m^{\prime\prime\prime}}=\frac{1}{Z}e^{-\beta E_{m^{\prime\prime}}}\delta_{m^{\prime\prime},m^{\prime\prime\prime}} \tag{56}\] We can relate the density matrix coefficients to the variable \(C\), as both the density matrix coefficients and the energy variation depend on the parameter \(C\). Relating these two quantities, we obtain figure 8. 
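The rotation of Eqs. 55-56 can be written out explicitly for spin 1 using the real Wigner \(d^{1}\) matrix. The following is a minimal sketch, not the code used to produce the figures: it builds a thermal diagonal density matrix from Boltzmann weights \(e^{-E_{m}/T}\) with hypothetical level splittings, rotates it by \(\theta_{r}\) (with \(\phi_{r}=0\)), and reads off the diagonal and off-diagonal elements.

```python
import numpy as np

def wigner_d1(theta):
    """Spin-1 Wigner d-matrix with rows/columns ordered as m = +1, 0, -1."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[(1 + c) / 2, -s / np.sqrt(2), (1 - c) / 2],
                     [s / np.sqrt(2), c, -s / np.sqrt(2)],
                     [(1 - c) / 2, s / np.sqrt(2), (1 + c) / 2]])

def rotated_density_matrix(energies, T, theta_r):
    """Eqs. 54-56 with phi_r = 0: thermal matrix rotated to the lab frame."""
    w = np.exp(-np.asarray(energies, dtype=float) / T)
    rho = np.diag(w / w.sum())       # diagonal in the vorticity-aligned frame
    d = wigner_d1(theta_r)
    return d @ rho @ d.T             # d is real orthogonal, so d^{-1} = d^T

# Hypothetical splittings E_{+1,0,-1} (GeV), temperature and angle, for illustration
rho_lab = rotated_density_matrix([0.66, 0.68, 0.70], T=0.16, theta_r=1.75)
print("rho_00        =", round(rho_lab[1, 1], 4))
print("Re rho_{-1,0} =", round(rho_lab[2, 1], 4))
print("Re rho_{-1,1} =", round(rho_lab[2, 0], 4))
```

Scanning the splittings \(\Delta E_{m,m^{\prime}}\) in such a loop gives the kind of relation between energy variation and off-diagonal elements plotted in figures 8 and 9.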
Then, we can relate the energy (\(\Delta E_{m,m^{\prime}}=E_{m}-E_{m^{\prime}}\)) to the off-diagonal density matrix elements \(Re[\rho_{-1,0}]\) and \(Re[\rho_{-1,1}]\) using the values \(\phi=0\), \(\theta_{r}=1.75\pm 0.10\) for the Collins-Soper frame in the transverse momentum range \(2<p_{T}<4\) (GeV). As can be seen in Fig. 7, the obtained \(\rho_{00}\) are compatible with the estimates shown in section II. Figure 6: An estimate of the melting temperature \(T_{melt}\) for \(J/\psi\) (left) and \(\Upsilon\) (right) for different spin polarization \(m\) values. We can obtain the off-diagonal density matrix components in relation to the quarkonium energy using the parameters shown in Table 1. In figures 8 and 9, we can note the relation between the alignment factor \(\rho_{00}\) and the circulation parameter \(C\). It is evident that an increase of \(C\) increments \(\rho_{00}\); however, this increase will depend on the type of meson. As we can see, the bottomonium alignment factor is larger than that of charmonium for the same parameter \(C\). Figure 7: The alignment factor \(\rho_{00}\) in relation to \(C\) for charmonium and bottomonium. Unlike the figures in the previous sections, the \(x,y\) axes of the plots in this section are independently measurable. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{2-7} \multicolumn{1}{c|}{} & \(m\) (\(GeV\)) & \(\alpha_{eff}\) & \(b\) (\(GeV^{2}\)) & \(\delta\) (\(GeV\)) & \(\mu\) (\(GeV\)) & \(\theta_{r}\) (rad) \\ \hline \(c\bar{c}\) & 1.209 & 1.244 & 0.2 & 0.231 & 0.6045 & \(1.75\pm 0.10\) \\ \hline \(b\bar{b}\) & 4.823 & 1.569 & 0.2 & 0.378 & 2.4115 & \(1.58\pm 0.10\) \\ \hline \end{tabular} \end{table} Table 1: Quarkonium eigenvalue parameters obtained from [30], with \(\theta_{r}\) determined from the system of equations 9 in the Collins-Soper frame for \(J/\psi\) and \(\Upsilon\) in the ranges \(4<p_{T}<6\) (\(GeV\)) and \(p_{T}<15\) (\(GeV\)), respectively. Figure 8: The off-diagonal density matrix components in relation to the energy variation \(\Delta E_{m,m^{\prime}}\). \(\Delta E\) will be manifest event-by-event as an invariant mass correlation. \(\rho_{i,j}\) can be obtained [9] from \(dN/d\Omega\) according to a harmonic analysis of Eq. (1). Barring acceptance effects, there is no non-vortical dynamics capable of generating a correlation between these two quantities. The most direct analysis to do is a correlation between \(\lambda_{\theta,\phi,\theta\phi}\) and the invariant mass and width. A definite signal of the correlation can be used, using the functions described in this section, to extract the angle between vorticity and heavy quark spin polarization. This can then be used to constrain models of spin hydrodynamics such as [13; 14; 15; 16; 17; 18; 19; 20]. Figure 9: The off-diagonal density matrix components in relation to the energy variation \(\Delta E_{m,m^{\prime}}\). This section's results, therefore, can be used as a baseline for developing an experimental analysis capable of probing non-equilibrium between spin and vorticity using quarkonium probes. ## V Conclusion The results obtained in this work should certainly be considered more of a rough estimate than a quantitative analysis, since they use an early version of the Cornell potential model. However, we think that this model is good enough to get a physical intuition of the problem of linking spin-vorticity non-equilibrium to the quarkonium state in a rotating frame, and, respectively, the quarkonium state to experimental observables. 
As section III illustrates, our formalism in fact reproduces the reasonable expectation of how the binding energy (mass) and melting probability (width) of the quarkonium state respond to rotation. Section IV therefore takes the model further and examines what happens when rotation and spin are not aligned, and suggests experimental measurements. A positive experimental observation of the observables described in the previous section will provide evidence that quarkonium state polarization is not thermally aligned, but in fact "remembers" a polarization state which is quite distinct from the vorticity state. As argued in section II and Fig. 1, one in fact expects quarks to be aligned transversely, which would also follow from initial state dynamics [34; 35] and from the large transverse vorticity present in the beginning of the collision [6]. However, the vortex where the quarkonia will be caught up could well be aligned to the flow gradients that developed on the time-scale of a hydrodynamic expansion, which will be in the longitudinal direction. The combination of the two will result in a density matrix exhibiting large corrections to the expectation value in [32], with the magnitude of these correcting correlated to the invariant mass of quarkonium. A very interesting associated result is the strong dependence, non-monotonic in both \(T\) and \(m\), of the dissoiation temperature \(T_{melt}\) of quarkonium shown in section III.3. It suggests a new "distillation" mechanism for quarkonium polarization, where aligned quarkonia have a stronger probability to survive in medium while non-aligned and anti-aligned quarkonia simply melt. Speculatively considering the \(\phi\) a quarkonium state could explain the strong alignment signal seen in [8], though some phenomenological work is needed to confirm such a model is viable. So far, as seen in section II there is no direct experimental evidence for such non equilibrium. In the short term, an azimuthal modulation of \(\lambda_{\theta,\phi,\theta\phi}\) (Fig. 3 ) could provide a strong indication that such non-equilibrium should be investigated. In the long term, an experiment with sufficient resolution in invariant mass and capable of reconstructing the spin alignment could be able to perform the analysis advocated in section IV, providing more direct experimental evidence. Quarkonium polarization, therefore, could well be a very promising observable for experimentally probing spin hydrodynamics in the quark-gluon plasma, a theoretically interesting and challenging topic that so far had little contact with phenomenology. GT thanks CNPQ bolsa de produtividade 306152/2020-7, bolsa FAPESP 2021/01700-2 and participation in tematic FAPESP, 2017/05685-2 and grant BPN/ULM/2021/1/00039 from the Polish National Agency for Academic Exchange K.J.G. is supported by CAPES doctoral fellowship 88887.464061/2019-00. This work was supported in part by the Polish National Science Centre Grant Nos 2018/30/E/ST2/00432. We thank the hospitality of the Jagellonian university when part of this work was performed. ## Appendix A Nikiforov-Uvarov method We will use the method developed by Nikiforov and Uvarov to solve a differential equation where it is possible to reduce to hypergeometric function. 
We can do it too to Schrodinger equation, which can write in the following form: \[\psi^{{}^{\prime\prime}}(s)+\frac{\tilde{\tau}(s)}{\sigma(s)}\psi^{{}^{\prime} }(s)+\frac{\tilde{\sigma}(s)}{\sigma^{2}(s)}\psi(s)=0 \tag{10}\] Using the variable separation technique, \[\psi(s)=\phi(s)y(s) \tag{11}\] Using the variable separation technique, we can see that the equation must satisfy the following expression [26]: \[\frac{\phi^{{}^{\prime}}(s)}{\phi(s)}=\frac{\pi(s)}{\sigma(s)} \tag{12}\] so we can simplify as \[\sigma(s)y^{{}^{\prime\prime}}(s)+\tau(s)y(s)+\lambda y(s)=0 \tag{13}\] At this moment, we need to define the \(\pi(s)\) function in the following form: \[\pi(s)=\frac{\sigma^{{}^{\prime}}(s)-\tilde{\tau}(s)}{2}\pm\sqrt{\left(\frac {\sigma^{{}^{\prime}}(s)-\tilde{\tau}(s)}{2}\right)^{2}-\tilde{\sigma}(s)+k \sigma(s)} \tag{14}\] and \(\lambda\) parameter is given by: \[\lambda=k+\pi^{{}^{\prime}}(s) \tag{10}\] The Schrodinger equation eigenvalue obtained from this method is given by: \[\lambda_{n}=-n\tau^{{}^{\prime}}(s)-\frac{n(n-1)}{2}\sigma^{{}^{\prime\prime}}(s) \tag{11}\] where \(\tau(s)\) is defined by: \[\tau(s)=\tilde{\tau}(s)+2\pi(s) \tag{12}\] The solution to hypergeometric equation is given by Rodrigues relation: \[y_{n}(s)=\frac{B_{n}}{\tilde{\rho}(s)}\frac{d^{n}}{ds^{n}}[\sigma^{n}\tilde{ \rho}(s)] \tag{13}\] \(B_{n}\) is the normalization coefficient and \(\tilde{\rho}(s)\) must obey the following relation: \[(\sigma(s)\tilde{\rho}(s))^{{}^{\prime}}=\tau(s)\tilde{\rho}(s) \tag{14}\] Then \[\frac{\tilde{\rho}^{{}^{\prime}}}{\tilde{\rho}}=\frac{\tau-\sigma^{{}^{\prime }}}{\sigma} \tag{15}\] We have the following solution: \[\tilde{\rho}(s)=\exp\left(\int\frac{\tau-\sigma^{{}^{\prime}}}{\sigma}\;ds\right) \tag{16}\] ## Appendix B Quarkonium wave function determination Now, we can obtain the wave function from the equation 16 and using the equation 37 and \(\sigma(x)=x^{2}\). From it, we get \[\tilde{\rho}(x)=C_{1}x^{-H_{1}/\sqrt{H_{0}}}e^{-2\sqrt{H_{0}}/x} \tag{17}\] We are able to obtain the value of function \(\phi(x)\) from equation 36 and using the values of equation 36. Then \[\frac{\phi^{{}^{\prime}}(x)}{\phi(x)}=-\frac{1}{2\sqrt{H_{0}}}\left(\frac{H_{ 1}}{x}-\frac{2H_{0}}{x^{2}}\right) \tag{18}\] Then \[\phi(x)=C_{2}\;x^{-H_{1}/2\sqrt{H_{0}}}e^{-\sqrt{H_{0}}/x} \tag{10}\] We can determine the \(y_{n}(r)\) from equations 11, 12 and \(\sigma(r)\). Then, \[y_{n}(r)=B_{n}x^{H_{1}/\sqrt{H_{0}}}e^{2\sqrt{H_{0}}/x}\frac{d^{n}}{dx^{n}} \left[x^{2n-H_{1}/\sqrt{H_{0}}}e^{-2\sqrt{H_{0}}/x}\right] \tag{11}\] From equation 10, 11 and putting in 11, we have that \[\psi(x)=B_{n}x^{H_{1}/2\sqrt{H_{0}}}e^{\sqrt{H_{0}}/x}\frac{d^{n}}{dx^{n}} \left[x^{2n-H_{1}/\sqrt{H_{0}}}e^{-2\sqrt{H_{0}}/x}\right] \tag{12}\] Thereby,changing the variable \(x=1/r\) we can write the radial wave fuction from equation 12 and relation \(\psi(r)=rR(r)\). So: \[R_{n}(r)=B_{n}\;r^{-(H_{1}/2\sqrt{H_{0}})}e^{\sqrt{H_{0}}r}\left[-r^{2}\frac{ d}{dr}\right]^{n}\left[x^{2n-H_{1}/\sqrt{H_{0}}}e^{-2\sqrt{H_{0}}/x}\right] \tag{13}\] Thus, we can write the wave fuction that following form: \[\Psi_{n,l,m}(r,\theta,\phi)=R_{n}(r)Y_{l}^{m}(\theta,\phi) \tag{14}\] Using the value of 13, we get: \[\Psi_{n,l,m}(r,\theta,\phi)=B_{n}\;r^{-(H_{1}/2\sqrt{H_{0}})}e^{\sqrt{H_{0}}r }\left[-r^{2}\frac{d}{dr}\right]^{n}\left[x^{2n-H_{1}/\sqrt{H_{0}}}e^{-2\sqrt{ H_{0}}/x}\right]Y_{l}^{m}(\theta,\phi) \tag{15}\]
2301.10876
Reef-insight: A framework for reef habitat mapping with clustering methods via remote sensing
Environmental damage has been of much concern, particularly in coastal areas and the oceans, given climate change and the drastic effects of pollution and extreme climate events. Our present-day analytical capabilities, along with advancements in information acquisition techniques such as remote sensing, can be utilised for the management and study of coral reef ecosystems. In this paper, we present Reef-Insight, an unsupervised machine learning framework that features advanced clustering methods and remote sensing for reef habitat mapping. Our framework compares different clustering methods for reef habitat mapping using remote sensing data. We evaluate four major clustering approaches based on qualitative and visual assessments which include k-means, hierarchical clustering, Gaussian mixture model, and density-based clustering. We utilise remote sensing data featuring the One Tree Island reef in Australia's Southern Great Barrier Reef. Our results indicate that clustering methods using remote sensing data can well identify benthic and geomorphic clusters in reefs when compared with other studies. Our results indicate that Reef-Insight can generate detailed reef habitat maps outlining distinct reef habitats and has the potential to enable further insights for reef restoration projects.
Saharsh Barve, Jody M. Webster, Rohitash Chandra
2023-01-26T00:03:09Z
http://arxiv.org/abs/2301.10876v2
# Reef-insight: A framework for reef habitat mapping with clustering methods via remote sensing ###### Abstract Environmental damage has been of much concern, particularly coastal areas and the oceans given climate change and drastic effects of pollution and extreme climate events. Our present day analytical capabilities along with the advancements in information acquisition techniques such as remote sensing can be utilized for the management and study of coral reef ecosystems. In this paper, we present Reef-insight, an unsupervised machine learning framework that features advanced clustering methods and remote sensing for reef community mapping. Our framework compares different clustering methods to evaluate them for reef community mapping using remote sensing data. We evaluate four major clustering approaches such as k- means, hierarchical clustering, Gaussian mixture model, and density-based clustering based on qualitative and visual assessment. We utilise remote sensing data featuring Heron reef island region in the Great Barrier Reef of Australia. Our results indicate that clustering methods using remote sensing data can well identify benthic and geomorphic clusters that are found in reefs when compared to other studies. Our results indicate that Reef-insight can generate detailed reef community maps outlining distinct reef habitats and has the potential to enable further insights for reef restoration projects. We release our framework as open source software to enable its extension to different parts of the world. keywords: Clustering, Reef monitoring, Remote sensing, Reef habitats + Footnote †: journal: - ## 1 Introduction Remote sensing provides the methodology that enables the retrieval of aerial data via advanced satellites and aerial vehicles [1; 2]. In the past decades, remote sensing has been prominent in a number of applications that includes monitoring tropical forest environments [3], environmental monitoring [4], monitoring mining environment [5], coral reef monitoring [6], agriculture [7], surface moisture and soil monitoring [8], and space research [9]. Remote sensing data with machine learning methods has been increasingly used [10; 11; 12] with diverse applications such as mineral exploration [13], environment monitoring, and agriculture [14]. Coral reef mapping [15; 16] provides valuable information about reef characteristics such as the structure of the reef, geomorphic and benthic zones, and coral distribution which can help in reef restoration projects [17; 18]. Some of the related studies that use remote sensing are discussed as follows. Kennedy et al. [19] proposed a coral classification system that combines satellite data along with local knowledge for identifying different geomorphic regions in a coral reef. Among various analytical techniques used on remote sensing data, Phinn et al. [20] assessed the quality of benthic and geomorphic community maps of coral reefs produced from multi-scale image analysis. Other map processing approaches commonly used are supervised classification [21] and manual delineation of classes using images as a backdrop. Phinn et al. [22] evaluated eight commonly used benthic cover mapping techniques based on the two processing approaches stated before. Eight techniques were assessed on the basis of cost, accuracy, time and relevance where the preferred mapping approach was with supervised learning via classification of satellite data using basic field knowledge. Nguyen et al. 
[23] provided a review of coral reef mapping with multi-spectral satellite image correction and preprocessing techniques and classification algorithms. Machine learning enables mining of valuable information in remotely sensed data with clustering [24], dimensionality reduction [25], and classification methods [11]. Machine learning methods are slowly becoming prominent for climate change problems [26]. These methods can also be used to understand and reconstruct data for climate and vegetation millions of years back in time [27]. Clustering [28] is an unsupervised machine learning method that is useful for remote sensing when labelled data is unavailable. The clusters produced using clustering techniques can then be further improved using specific spatial information from the data or by applying basic domain knowledge [29; 30]. Clustering techniques can be used for image segmentation tasks [31] wherein pixels are grouped into distinct regions (clusters) on the basis of given similarity measures. There are several clustering techniques that serve this purpose, such as the k-means clustering algorithm [32], agglomerative hierarchical clustering [33], density-based spatial clustering (DBSCAN) [34], and the Gaussian mixture model (GMM) [35]. Each of these has specific properties which make it more useful than others for different remote sensing applications, and their strengths and weaknesses in the area of coral reef mapping are yet to be evaluated. In the case of hyper-spectral and multi-spectral data, which feature multiple bands and thus a large amount of data, dimensionality reduction methods such as principal component analysis (PCA) [36] can reduce the number of bands in order to make the data suitable for clustering methods. They have been used in remote sensing applications such as mineral exploration with satellite data [25]. Existing work done in creating reef maps using supervised learning techniques can be utilised for qualitative comparisons with clustering results and to label the regions. The success of clustering techniques in mapping regions and in the field of geo-sciences inspires the use of the technique for benthic [37] and geomorphic mapping [38] of coral reefs. Although remote sensing data (multi-spectral and hyper-spectral) have been used for coral reef mapping [39; 40; 41; 42; 43; 6], not much has been done using clustering techniques, particularly using open source software frameworks. There are proprietary remote sensing software suites [44; 45] that have inbuilt reef mapping features, but these are not easily available. This is a problem not just for reproducible research, but also for the application of such methods in developing countries, where such software suites are not economically viable for research purposes; this slows research and development in the area of reef monitoring, which is a major focus of climate change related research. In this paper, we present an unsupervised machine learning framework via novel clustering methods for detection and mapping of coral reef habitats with remote sensing. We present Reef-insight, a framework for reef community mapping using remote sensing where we compare four major clustering approaches in order to determine which method is most suitable based on qualitative and visual assessment. The clustering methods include k-means, GMM, agglomerative clustering and DBSCAN. We utilise a remote sensing dataset from One Tree Island in the Great Barrier Reef of Australia. 
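For multi-spectral or hyper-spectral inputs with many bands, the PCA-based band reduction mentioned above can be sketched as follows; this is a generic illustration on a synthetic image cube using scikit-learn, not part of the released Reef-insight code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic multi-spectral cube (height x width x bands) standing in for real data
cube = np.random.rand(200, 300, 8).astype(np.float32)
h, w, n_bands = cube.shape

pixels = cube.reshape(-1, n_bands)      # one row per pixel
pca = PCA(n_components=3)               # keep three principal components
reduced = pca.fit_transform(pixels)     # (h*w, 3) features ready for clustering
print("explained variance ratio:", pca.explained_variance_ratio_)
reduced_cube = reduced.reshape(h, w, 3) # back to image layout if needed
```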
Our framework provides the detection and generation of detailed maps that highlight distinct reef habitats that can guide scientists and policy makers in reef restoration projects. The rest of the paper is organised as follows. In Section 2, we present the proposed methodology, followed by experiments and results in Section 3. Section 4 provides a discussion and Section 5 provides the conclusion with directions for future work. ## 2 Methodology ### Study area One Tree Island (located at 23 \({}^{\circ}\)30' 30" South, 152 \({}^{\circ}\)05' 30" East) is a coral reef in the southern Great Barrier Reef. It is a part of the Capricorn-Bunker group, about 90 kilometers east of Port Gladstone in Queensland, Australia. The University of Sydney maintains the research station on the island, and as such One Tree Reef has been the subject of detailed biological and geological investigation over the past four decades (see [46; 47; 48]), including studies using remote sensing [49]. Hence, the reef habitats and geomorphic zones characterising One Tree Reef are well understood. Our study area (Figure 1), located at the eastern end of the coral reef, is about 5.5 kilometers (km) in length and 3.5 km in width. ### Data We use PlanetScope satellite imagery that is available on the Allen Coral Atlas website for our research. The PlanetScope (Dove) 1 image-based data features 3 spectral bands (Red, Green, Blue) with an 8 bit radiometric resolution. The raw image undergoes atmospheric radiance, sensor and radiometric calibration, flat-field correction, debayering, orthorectification and surface reflectance steps in the pre-processing stage. It then undergoes mosaic-based processing, where we utilize the "best scene on top" technique to create the final mosaic. The mosaic process has been taken from the implementation in the Allen Coral Atlas [51] (Figure 2). We create the bathymetric image data at 10 meters (m) resolution with the Sentinel-2 surface reflectance dataset via Google Earth Engine (GEE) 2. Finally, we create a single mosaic (16 bit integer) by aggregating the median value of the input data over a period of 12 months. We utilize this bathymetric information for creating geomorphic maps. Footnote 1: [https://earth.esa.int/cogateway/missions/planestscope](https://earth.esa.int/cogateway/missions/planestscope) Footnote 2: [https://developers.google.com/earth-engine/datasets/catalog/sentinel-2](https://developers.google.com/earth-engine/datasets/catalog/sentinel-2) ### Benthic and Geomorphic regions in reef Benthic [37] mapping of coral reefs refers to the use of aerial imagery, underwater photos, acoustic surveys, and data from sediment samples. Figure 1: One Tree Island reef (located at 23 \({}^{\circ}\)30' 30" South, 152 \({}^{\circ}\)05' 30" East) and the Capricorn–Bunker group in relation to the Queensland coastline, taken from [50]. The benthic zone refers to the ecological region at the lowest level of a body of water, such as an ocean or a coral reef, including the sediment surface. Geomorphology refers to the evolution of topographic and bathymetric features from physical, chemical and biological processes on the Earth's surface [52]. Geomorphic coral reef mapping [38] refers to the mapping of topographic and bathymetric features in reef habitats [53]. ### Clustering techniques #### 2.4.1 K-means clustering K-means clustering is an algorithm that divides data into a set of clusters (k) based upon a distance metric [54; 32]. 
Given a d-dimensional vector for a dataset of samples (\(x_{1},x_{2},...,x_{N}\)) of size \(N\), the algorithm partitions (groups) the data into \(k(\leq N)\) sets \(C=C_{1},C_{2},...,C_{k}\). The aim of the algorithm is to minimize the error given by the within-cluster sum of squares (WCSS), which is given as the sum of squared Euclidean distances between the data samples and the corresponding centroid in the original algorithm [55]: \[W(C_{k})=\sum_{x_{i}\in C_{k}}(x_{i}-\mu_{k})^{2} \tag{1}\] where \(x_{i}\) is a data sample belonging to the cluster \(C_{k}\) and \(\mu_{k}\) is the mean of the samples in cluster \(C_{k}\). We assign each data sample to a given cluster such that the WCSS error to its assigned cluster centre \(\mu_{k}\) is minimized. The total WCSS error is given as follows \[WCSS=\sum_{k=1}^{k}W(C_{k})=\sum_{k=1}^{k}\sum_{x_{i}\in C_{k}}(x_{i}-\mu_{k})^{2} \tag{2}\] Although k-means clustering has been prominent for tabular data, it can also be used for image and remote sensing data for segmentation, which is also the focus of this paper. There have been applications of k-means clustering for remote sensing-based image segmentation, change detection, and land cover classification. Theiler et al. [56] proposed a variation of the k-means algorithm to utilise both the spectral and the spatial properties of satellite imagery for image segmentation. Zhiyong et al. [57] integrated k-means clustering with adaptive majority voting, with performance that compared well with conventional approaches for land cover change detection. Kesikoglu et al. [58] used dimensionality reduction via PCA along with fuzzy k-means clustering for segregating regions of an area for the task of change detection. Abbas et al. [59] utilised k-means and ISODATA [60] (which is an extension of k-means clustering) for land cover classification using remote sensing data. These applications motivate the use of k-means clustering for our proposed framework. #### 2.4.2 GMM GMM is based on a probabilistic model which assumes that data is generated from a mixture of Gaussian distributions with parameters that are adjusted by training. GMM models are useful for clustering, anomaly detection, and density estimation [61]. It consists of three parameters: the mean (\(\mu\)), which defines the centre of each Gaussian; the covariance (\(\Sigma\)), which represents the spread; and the mixing probability (\(\Pi\)), which defines the weight of the respective Gaussian distribution. The mixing coefficients for each cluster \(k\) are themselves probabilities \(\pi_{k}\) and must have a sum of 1, as shown below \[\sum_{k=1}^{k}\pi_{k}=1 \tag{3}\] In comparison to GMM, k-means clustering places a circle (a hypersphere in the case of higher dimensions) at the center of each cluster, with a radius defined by the most distant point in the cluster; GMMs, however, can also handle oblong and ellipsoidal cluster shapes. The applications of GMMs for remote sensing data include image clustering, segmentation and generating synthetic data. Bei et al. [62] presented an improved GMM that takes into account spatial information to improve image clustering. Yin et al. [63] combined the fuzzy region competition method with GMM for image segmentation. Davari et al. [64] utilized GMM for hyperspectral remote sensing, which features the challenge of a large number of dimensions (features) with few training data points. Neagoe et al. [65] presented a cascade of k-means clustering and GMM for semi-supervised classification. 
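To connect Eq. 2 and the BIC-based model selection used later to code, the following sketch runs k-means and a Gaussian mixture over flattened pixel features for a range of \(k\); the input array is a random stand-in for the actual mosaic and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Stand-in for the RGB mosaic: height x width x 3, values scaled to [0, 1]
img = np.random.rand(150, 200, 3).astype(np.float32)
X = img.reshape(-1, 3)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
    # km.inertia_ is the WCSS of Eq. 2; gmm.bic(X) is the BIC score used in Section 3
    print("k=%d  WCSS=%.1f  BIC=%.1f" % (k, km.inertia_, gmm.bic(X)))

# Label image for a chosen k (e.g. k = 3 for a benthic-style map)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
label_map = labels.reshape(img.shape[:2])
```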
#### 2.4.3 Agglomerative clustering Agglomerative hierarchical clustering, also known as agglomerative nesting (AGNES), is the most common type of hierarchical clustering used to group data samples in clusters based on their similarity [66; 67; 68]. The algorithm begins by treating each data instance as a singleton cluster and pairs of clusters are merged until all clusters have been merged into a large cluster featuring all the data. This produces a dendogram which is a tree-based representation of the data samples. It produces a flexible and informative cluster tree instead of forcing users to choose a particular number of clusters, such as determining the \(k\) in k-means algorithm. Goncalves et al. [69] proposed an unsupervised clustering method combining self-organizing maps (SOM) with AGNES for automatic classification of remotely sensed data. Liu et al. [70] used hierarchical clustering for image segmentation of high resolution remote sensing images. Figure 2: Mosaic of One Tree Island reef taken from Allen Coral Atlas [51]. #### 2.4.4 Dbscan DBSCAN [34] uses local density estimation to identify clusters of arbitrary shapes not easily possible with traditional methods, such as k-means clustering. In DBSCAN, the data samples are seen as core points (density), reachable points, and outliers. The algorithm counts how many samples are located within a small distance \(\varepsilon\)(epsilon) from each core point and marks a region called the neighbourhood. The data samples in the neighbourhood of a core sample belong to the same cluster. This neighbourhood may include other core instances; therefore, a long sequence of neighbouring core instances forms a single cluster. Any sample that is not a core sample and does not have one in its neighbourhood is considered an anomaly. DBSCAN clustering has been prominent in a number of applications with tabular data [71], and also has been used for remote sensing data. Wang et al. [72] presented improved DBSCAN for Lidar data and the results showed that it could segment different types point clouds with higher accuracy in a robust manner. Liang Zhang et al. [73] utilised DBSCAN clustering in their adaptive superpixel generation algorithm for synthetic-aperture radar (SAR) imagery. Liujun Zhu et al. [74] used DBSCAN for vegetation change detection using multi temporal analysis. ### Framework Next, we present the framework that incorporates the different clustering algorithms such k-means, GMM, agglomerative clustering, and DBSCAN for segmentation of two different types of satellite imagery to create coral reef maps (Figure 3). The initial step is to acquire the remote sensing-based imagery of the coral region of interest (Figure 3- Step a). The coral reef mosaic data taken from Allen Coral atlas [51] utilises sensor and radiometric calibration for image processing. Moreover, it employs the "best scene on top" (BOT) technique in the mosaicking process of PlanetScope imagery [51]. We then create a geomorphic map of the region, where the bathymetric data (Figure 3 - Step c) is concatenated with the imagery obtained from the previous step. Next, we evaluate the clustering algorithms (Figure 3 - Step d) to create clustering regions (segments). We need to evaluate the results and hence need a way to ensure that the segments acquired are meaningful. Hence we apply a qualitative analysis, where we assign each cluster a color according to the map used for comparison, and then we compare the maps qualitatively (visually), side by side (Figure 3 - Step e). 
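Agglomerative clustering and DBSCAN scale poorly with the number of pixels, so one practical way to apply them within the framework is to fit on a random subsample of pixels, as sketched below; the subsample size, eps and min_samples values are illustrative assumptions rather than tuned settings.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, DBSCAN

img = np.random.rand(150, 200, 3).astype(np.float32)   # stand-in for the mosaic
X = img.reshape(-1, 3)

rng = np.random.default_rng(0)
idx = rng.choice(len(X), size=5000, replace=False)      # subsample for tractability
sample = X[idx]

agnes_labels = AgglomerativeClustering(n_clusters=4).fit_predict(sample)
dbscan_labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(sample)

print("AGNES clusters :", np.unique(agnes_labels))
print("DBSCAN clusters:", np.unique(dbscan_labels))     # label -1 marks noise points
```

Full-resolution label maps can then be obtained by assigning every remaining pixel to its nearest subsample cluster, before the qualitative comparison and refinement steps described above.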
This qualitative comparison helps in assigning the labels to the clusters based on the visual similarities with the existing maps. If the results obtained are unsatisfactory, the clustering algorithm is applied again on the data with new parameters (Figure 3 - Step g). The final step incorporates map refinement and clean-up (Figure 3 - Step h), wherein we merge the extra clusters into the closest region of interest [51] to generate the coral reef map of interest (Figure 3 - Step i). The map refinement by logical rules (Figure 3 - Step h) remaps the excess smaller clusters to the major cluster of a given label surrounding them. This ensures that the smaller clusters are merged to get a resultant map with only the regions having labels of interest. Figure 3: Proposed framework for applying clustering algorithms (k-means, GMM, AGNES, and DBSCAN) on remote sensing data for coral reef mapping. ## 3 Results ### K-means and GMM clustering We begin by finding the optimal number of clusters for the respective clustering methods, such as k-means and GMM, used in our framework. In the case of k-means clustering, we use the elbow method, which plots the within-cluster sum of squared distances (WCSS) between the data points and their cluster centres to find the optimal number of clusters (k value). The point where the curve starts to flatten and resembles an elbow is chosen as the k value. In Figure 4, the elbow can be seen at \(k=3\). In the case of GMM, we use the Bayesian information criterion (BIC) to find the value of \(k\). The gradient of the BIC score curve, much like finding the elbow of a curve, is used to estimate the optimal number of clusters for the data. A lower score indicates that the model better reflects the data we have and, by extension, the true, unknown distribution. However, in order to avoid over-fitting, this criterion penalizes models with a large number of clusters. In Figure 4, the gradient can be seen at \(k=2\). In our case, the final goal is to have a visual qualitative comparison of the clustering maps of the reef with the Allen coral atlas. Hence, while keeping the elbow method and BIC score in mind, we look at the results obtained by the clustering algorithms and choose the \(k\) value based on the result that is able to provide maximum resemblance to the important regions of the coral maps (e.g. Figure 5 - Panel (b)). Figure 8 shows the results of the logical rules (Figure 3 - step h) used to create the benthic map generated using the GMM model. Figure 8 - Panel (b) has a black cluster which gets remapped to the sand (yellow) cluster in the map refinement stage (Figure 9 - Panel (c)). Figure 5: Evaluate k - K-means. Figure 6: Benthic maps. Figure 7: Geomorphic maps. Figure 8: Preliminary coral map results - GMM. We then combine this clustering region with the sand region to get the final result. The clustering results obtained from k-means (Figure 9 - Panel (b)) and GMM (Figure 9 - Panel (c)) for generating benthic coral maps showcase three clusters, namely ocean, sand and rock/rubble. Upon visual comparison with the benthic map from the Allen coral atlas (Figure 9 - Panel (a)), we can see that GMM provides clusters with a higher similarity than k-means clustering. In the case of the geomorphic map, we set the number of clusters to 7 for k-means and GMM. The preliminary results obtained using GMM (Figure 8 - Panel (b)) provide extra clusters in the ocean region by making distinctions in the water bathymetry. 
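The logical-rules clean-up of Step (h) can be sketched as a simple post-processing pass that relabels small connected components to the dominant label surrounding them; the size threshold and the one-pixel neighbourhood used here are illustrative choices, not the exact rules applied to the published maps.

```python
import numpy as np
from scipy import ndimage

def merge_small_regions(label_map, min_size=500):
    """Relabel connected components smaller than min_size to the most
    common label found in a one-pixel ring around them."""
    out = label_map.copy()
    for lab in np.unique(label_map):
        comps, n = ndimage.label(label_map == lab)
        for c in range(1, n + 1):
            mask = comps == c
            if mask.sum() >= min_size:
                continue
            ring = ndimage.binary_dilation(mask) & ~mask   # neighbouring pixels
            if ring.any():
                out[mask] = np.bincount(out[ring]).argmax()  # majority neighbour
    return out

# Example on a hypothetical 3-cluster benthic label map
label_map = np.random.randint(0, 3, size=(150, 200))
cleaned = merge_small_regions(label_map, min_size=50)
```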
These extra clusters are merged into the ocean cluster by visually comparing our result to the Allen coral atlas geomorphic map. The final geomorphic maps generated using k-means (Figure 10 - Panel (b)) and GMM (Figure 10 - Panel (c)) contain four clusters: reef flat, lagoon/plateau, reef slope, and ocean. The reef flat and the lagoon/plateau regions created by the GMM model have a greater resemblance to the original geomorphic map given by the Allen coral atlas (Figure 10 - Panel (a)). A general limitation seen in using clustering techniques for reef habitat mapping is that the lack of supervised features hinders the sub-classification of the regions. Nevertheless, this approach can be useful in gathering a basic overview of reef habitats without the need for manual labelling, which is a labor-intensive task. In terms of limitations of the framework, we note that the visual comparison required to find the optimal parameters for reef mapping is a lengthy process. However, the study has revealed certain combinations of hyper-parameters (\(k\) value) that are useful for reef areas, and the same can be used in future studies when the framework is applied to other regions. Furthermore, the current study has considered a relatively small area, and it can be a challenge if clustering methods are used for a large area, such as the entire GBR region. The framework would then need to be extended using a distributed/parallel computing infrastructure so that the method can work on smaller regions, i.e. a large region can be divided using a grid and the results combined.

## 5 Conclusion and Future Work

In this study, we presented a framework to compare different clustering algorithms for the task of reef mapping using unlabelled remote sensing data. We used One Tree Island of the GBR to demonstrate the effectiveness of the framework. Moreover, the framework transformed the raw clusters into a reef map using field knowledge and map refinement operations based on logical rules gathered from expert knowledge. The results showed that the k-means and GMM clustering methods are most suitable for benthic and geomorphic reef mapping, as these methods created the maps that were visually most similar to the maps obtained by related methods (Coral Atlas). In future work, our framework can be used for reef change detection, especially when field inspection cannot easily be conducted, for example in the case of natural disasters such as tsunamis, storms, and cyclones. The framework can help in assessing the impact of extreme climate events (cyclones and storms) on reef habitats, which can play a crucial role in reef restoration projects. Furthermore, the framework can also be utilised for generating maps from remote sensing data of regions where labelled data is unavailable, such as remote sensing data obtained from Mars and Moon exploration projects. The proposed framework is a way to address the challenges faced by reef scientists that involve finding labelled data for analysis and the need for manually labelling reef regions, especially in large regions. It can be considered a low-cost and robust approach for working with raw data during the exploration stage of a research study. In future work, the framework can be extended with other clustering methods and validated further on regions where reef data is labelled.

## Computer Code Availability

We provide open source code and data for our proposed framework via a GitHub repository 3.

Figure 10: Geomorphic map results - Allen coral atlas

## Author Credit Statement

S. Barve contributed to programming, experiments, and writing. J. Webster provided data and contributed to analyses and writing. R. Chandra conceptualised and supervised the project and contributed to writing and analyses.
2302.07682
Origin of multiwavelength emission from flaring high redshift blazar PKS 0537-286
The high redshift blazars powered by supermassive black holes with masses exceeding $10^9\:M_\odot$ have the highest jet power and luminosity and are important probes to test the physics of relativistic jets at the early epochs of the Universe. We present a multi-frequency spectral and temporal study of the high redshift blazar PKS 0537-286 by analyzing data from Fermi-LAT, NuSTAR, Swift XRT and UVOT. Although the time averaged $\gamma$-ray spectrum of the source is relatively soft (indicating the high-energy emission peak is below the GeV range), several prominent flares were observed when the spectrum hardened and the luminosity increased above $10^{49}\:{\rm erg\:s^{-1}}$. The X-ray emission of the source varies in different observations and is characterised by a hard spectrum (photon index $\leq1.38$) with a luminosity of $>10^{47}\:{\rm erg\:s^{-1}}$. The broadband spectral energy distribution in the quiescent and flaring periods was modeled within a one-zone leptonic scenario assuming different locations of the emission region and considering both internal (synchrotron radiation) and external (from the disk, broad-line region and dusty torus) photon fields for the inverse Compton scattering. The modeling shows that the most optimistic scenario, from the energy requirement point of view, is when the jet energy dissipation occurs within the broad-line region. The comparison of the model parameters obtained for the quiescent and flaring periods suggests that the flaring activities are most likely caused by the hardening of the emitting electron spectral index and the shifting of the cut-off energy to higher values.
N. Sahakyan, G. Harutyunyan, D. Israyelyan
2023-02-15T14:19:03Z
http://arxiv.org/abs/2302.07682v2
# Origin of multiwavelength emission from flaring high redshift blazar PKS 0537-286 ###### Abstract The high redhsift blazars powered by supermassive black holes with masses exceeding \(10^{9}\)\(M_{\odot}\) have the highest jet power and luminosity and are important probes to test the physics of relativistic jets at the early epochs of the Universe. We present a multi-frequency spectral and temporal study of high redshift blazar PKS 0537-286 by analyzing data from _Fermi_-LAT, NuSTAR Swift XRT and UVOT. Although the time averaged \(\gamma\)-ray spectrum of the source is relatively soft (indicating the high-energy emission peak is below the GeV range), several prominent flares were observed when the spectrum hardened and the luminosity increased above \(10^{49}\)\(\rm erg\,s^{-1}\). The X-ray emission of the source varies in different observations and is characterised by a hard spectrum \(\leq 1.38\) with a luminosity of \(>10^{47}\)\(\rm erg\,s^{-1}\). The broadband spectral energy distribution in the quiescent and flaring periods was modeled within a one-zone leptonic scenario assuming different locations of the emission region and considering both internal (synchrotron radiation) and external (from the disk, broad-line region and dusty torus) photon fields for the inverse Compton scattering. The modeling shows that the most optimistic scenario, from the energy requirement point of view, is when the jet energy dissipation occurs within the broad-line region. The comparison of the model parameters obtained for the quiescent and flaring periods suggests that the flaring activities are most likely caused by the hardening of the emitting electron spectral index and shifting of the cut-off energy to higher values. keywords: galaxies: jets - galaxies: active - gamma-rays: galaxies - quasars: individual: PKS 0537-286 ## 1 Introduction Blazars are radio-loud quasars with powerful relativistic jets that make a small angle to the observer's line of sight (Urry & Padovani, 1995). They are among the most energetic sources in the Universe and a dominant class of extragalactic sources in the high energy (HE; \(>100\) MeV) \(\gamma\)-ray sky (e.g., Abdollahi et al., 2022). The new possibility of extensive multiwavelength observations coupled with multi-messenger observations have the potential to widen our understanding of blazars. Historically, blazars are sub-grouped in two large sub-classes: BL Lacs and flat-spectrum radio quasars (FSRQs) (Urry & Padovani, 1995). BL Lacs have nearly featureless optical spectra (very weak or no lines are observed) while the FSRQs have bright and broad lines with equivalent widths of \(|\rm EM|>5\) A. One of the most distinguishable features of blazars is the very strong variability of their emission in almost all the observed bands in various times scales, from minute to week or month scales; (e.g., Arlen et al., 2013; Aleksic et al., 2014; Ackermann et al., 2016; Shukla et al., 2018; Meyer et al., 2019). This variability is stochastic in nature but a recent detection of quasi-periodic oscillations was reported (e.g., see Pehil et al., 2020; Ren et al., 2022). Being powerful emitters, blazars are frequently monitored in all the accessible wavelengths which resulted in accumulation of a substantial amount of data. The emission from blazars, predominantly of a nonthermal nature (e.g., Padovani et al., 2017), is dominated by Doppler-amplified emission from the jet, typically showing two broad peaks: the first at radio to X-ray bands, and the second at \(\gamma\)-rays. 
The low-energy component peak (\(v_{\rm syn}^{\rm p}\)) is used to further classify blazars as low- (LBL/LSP), intermediate- (IBL/ISP) or high- (HBL/HSP) peaked sources when \(v_{\rm syn}^{\rm p}<10^{14}\) Hz, \(10^{14}\) Hz \(<v_{\rm syn}^{\rm p}<10^{15}\) and \(v_{\rm syn}^{\rm p}>10^{15}\) Hz, respectively (Padovani & Giommi, 1995; Abdo et al., 2010). However, \(v_{\rm syn}^{\rm p}\) can be well above \(2.4\times 10^{17}\) in extreme blazars (e.g., Giommi et al., 1999; Costamante et al., 2001; Biteau et al., 2020; Pian et al., 1998; Sahakyan, 2020) which are interesting as they challenge our current understanding of particle acceleration and emission processes. In addition, the remarkable \(\gamma\)-ray blazar 4FGL J1544.3-0649, which was undetected in the X-ray and \(\gamma\)-rays until May 2017, showed a transient-like behaviour, becoming a very bright source for a few months and detected by Fermi Large Area telescope (_Fermi_-LAT) and MAXI X-ray sky monitor (Sahakyan & Giommi, 2021). This suggests the existence of an undiscovered blazar population which may occasionally flare. The broadband spectral energy distribution (SED) of blazars can be modeled within different scenarios involving the interaction of electrons and protons in a single or multiple zone. Although, there is a consensus that the low-energy component is due to the synchrotron emission of ultra-relativistic charged electrons in the blazar jet, the origin of the second component is highly debated. In the leptonic scenarios, this component is due to inverse Compton scattering of low-energy photons which might be the produced synchrotron photons (synchrotron-self Compton model, SSC; Ghisellini et al., 1985; Maraschi et al., 1992; Bloom & Marscher, 1996) or be of an external origin (e.g., Sikora et al., 1994), such as photons directly emitted from the accretion disk (Dermer et al., 1992; Dermer & Schlickeiser, 1994) or reflected from the broad-line region (Sikora et al., 1994) or emitted from the dusty torus (Blazejowski et al., 2000). Alternatively, the second component can be due to either the synchrotron emission of the energetic protons inside the jet (Mucke & Protheroe, 2001) or due to the secondaries generated in photo-pion and photo-pair interactions (Mannheim, 1993; Mannheim & Biermann, 1989; Mucke & Protheroe, 2001; Mucke et al., 2003; Bottcher et al., 2013; Petropoulou & Mastichiadis, 2015; Gasparyan et al., 2022). These hadronic models (especially lepto-hadronic models, e.g., Ansoldi et al., 2018; Keivani et al., 2018; Murase et al., 2018; Padovani et al., 2018; Sahakyan, 2018; Righi et al., 2019; Cerruti et al., 2019; Sahakyan, 2019; Gao et al., 2019; Gasparyan et al., 2022) have become more attractive after the observations of IceCube-170922A neutrino event from the direction of TXS 0506+056 (IceCube Collaboration et al., 2018, 2018) and \(\gamma\)-ray observations of multiple neutrino events from the direction of PKS 0735+178 when it was bright in the optical/UV, X-ray and \(\gamma\)-ray bands (Sahakyan et al., 2022). Due to the extreme luminosities of blazars, even very high redshift ones can be observed (e.g., see Ackermann et al., 2017). 
The observation of distant blazars is of particular interest as they allow it) to study the relativistic jets as well as their connection with accretion disk/black hole in the early epochs of the Universe, ii) to measure the suppression of the \(\gamma\)-ray flux which can be used to estimate or constraint the density of the extragalactic background light (EBL) (Kneiske et al., 2004; Mazin & Raue, 2007; Franceschini et al., 2008) and understand its cosmological evolution, iii) to investigate, in general, the properties of \(\gamma\)-ray emitting active galactic nuclei (AGN), which is important for the understanding of the cosmological evolution of the \(\gamma\)-ray background (Abdo et al., 2010). Due to their faintness, high redshift blazars are rather difficult to observe and identify, limiting the number of already associated high redshift blazars. For example, in the fourth catalog of AGNs detected by _Fermi_-LAT (data release 3 (DR3); Abdollahi et al., 2022) only 110 blazars are observed beyond \(z=2.0\) and only 10 beyond \(z=3.0\). The most distant blazar observed in the \(\gamma\)-ray band is GB 1508+5714 at \(z=4.31\). The physical properties of these high redshift blazars have been frequently investigated using multi-frequency data (e.g., Ghisellini et al., 2009, 2011; Paliya, 2015; Paliya et al., 2016; Marcotulli et al., 2017; Ackermann et al., 2017; Li et al., 2018; Paliya et al., 2019). For example, in Paliya et al. (2020) by studying nine \(\gamma\)-ray emitting blazars and 133 candidate blazars with soft X-ray spectra it is shown that these high redshift blazars host massive black holes (\(M_{\rm BH}>10^{9}\)\(M_{\odot}\)) and have an accretion disk luminosity of \(>10^{46}\) erg s\({}^{-1}\). Or, in Sahakyan et al. (2020), by studying the spectral and temporal properties of thirty-three distant blazars (\(z>2.5\)) and modeling their SEDs, it is found that the emission region size is \(\leq 0.05\) pc, while the magnetic field and the Doppler factor are correspondingly within \(0.10-1.74\) G and \(10.0-27.4\). Although the number of observed high redshift blazars is not high enough to perform statistically sound population studies, the investigation of the properties of individual objects provides interesting peaces to understand the general physics of high redshift blazars. The multiwavelength monitoring of several high redshift blazars opens wide opportunities for investigation of their multiwavelength spectral and temporal properties as well as for performing detailed theoretical modeling and interpretation of the results. For example, the continuous monitoring of these sources in the HE \(\gamma\)-ray band by _Fermi_-LAT (Atwood et al., 2009) allows to select various emission states, or their observations in the X-ray band with Neil Gehrels Swift Observatory (Gehrels et al., 2004, hereafter Swift), and Nuclear Spectroscopic Telescope Array (NuSTAR; Harrison et al., 2013) combined with the \(\gamma\)-ray data allows a precise estimation of the second emission component peak, or the data in the optical/UV bands can be used to constrain the high energy tail of the synchrotron component and/or the direct thermal emission from the accretion disk (Ghisellini et al., 2011). Therefore, the data available in different bands can be used to put tighter constraints on the physics of individual high redshift blazars. Here we present a broadband study of PKS 0537-286; at \(z=3.10\)(Wright et al., 1978) it is one of the brightest high redshift blazars. 
It was observed in the X-ray band with various instruments (e.g., Einstein observatory (Zamorani et al., 1981), ASCA (Cappi et al., 1997; Siebert et al., 1996), ROSAT (Fiore et al., 1998), etc.) showing a particularly hard X-ray spectrum (\(\sim 1.2\)). In the \(\gamma\)-ray band, with an energy flux of \((1.44\pm 0.006)\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the fourth catalog of _Fermi_-LAT AGNs (DR3; Abdollahi et al., 2022), it is the brightest blazar beyond \(z=3.0\). Moreover, in several occasions \(\gamma\)-ray flares were observed when the daily flux was above \(10^{-6}\) photon cm\({}^{-2}\) s\({}^{-1}\)(Valverde & Forman, 2022; Angioni, 2020; Cheung, 2017) which makes PKS 0537-286 the most distant \(\gamma\)-ray flaring blazar (Li et al., 2018; Sahakyan et al., 2020).The broadband emission from PKS 0537-286 was successfully modeled within a one-zone synchrotron and external inverse Compton scenario where the excess in optical/UV band was interpreted as emission from bright thermal accretion disk (Bottacini et al., 2010). In general, the peak of the second component in the SED of high redshift blazars is at MeV energies, which implies their HE \(\gamma\)-ray spectrum is soft, so they are not ideal sources for \(\gamma\)-ray observations. Therefore, the observation of the \(\gamma\)-ray flaring activity of distant blazars, which is crucial for testing different emission scenarios of relativistic jets, is even more interesting as compared with that of the nearby sources. Motivated _i)_ by the availability of multiwavelength data from PKS 0537-286 observations - since 2008 in the HE \(\gamma\)-ray band by _Fermi_-LAT, multiple observations of PKS 0537-286 by Swift X-Ray Telescope (XRT) and Ultra-violet Optical Telescope (UVOT) instruments and two observations of PKS 0537-286 with NuSTAR, and _ii)_ by the observed multiple flaring activities of PKS 0537-286, we decided to investigate the spectral and temporal properties of PKS 0537-286 by analyzing the data accumulated in the optical/UV, X-ray and \(\gamma\)-ray bands and put, through theoretical modeling, a constraint on the physical processes responsible for the PKS 0537-286 emission in the quiescent and flaring states. The paper is structured as follows. The data extraction and analysis in the \(\gamma\)-ray, X-ray and optical/UV bands are presented correspondingly in Sections 2, 3 and 4. The SED of PKS 0537-286 and its evolution in time is presented in Section 5, and the origin of the emission is discussed in Section 6. The results are presented and discussed in section 7 while the summary is given in Section 8. ## 2 _Fermi_-LAT observations and data analyses Fermi satellite launched in 2008 carries two instruments- the Large Area Telescope (LAT) is the main instrument on board designed to scan the entire sky in \(\gamma\)-ray band, and the Gamma-ray Burst Monitor (GBM) is designed to study gamma-ray bursts. LAT is a pair-conversion \(\gamma\)-ray telescope sensitive in the energy range from 20 MeV to 300 GeV with a field of view of \(\sim 2.4\) sr. It is by default in the all sky scanning mode which allows to study the HE properties of various sources, including blazars. For more details on the LAT see Atwood et al. (2009). We have analyzed the \(\gamma\)-ray data collected between August 4 2008 and September 9 2022 (MET=239557417-686130659). 
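For orientation, Fermi mission elapsed time (MET) counts seconds from the reference epoch 2001-01-01 00:00:00 UTC. The naive conversion sketched below ignores leap-second bookkeeping, so it is only accurate to a few seconds, but it shows that the quoted start MET indeed falls on 4 August 2008.

```python
from datetime import datetime, timedelta, timezone

# Fermi mission elapsed time (MET) is counted from 2001-01-01 00:00:00 UTC.
# Leap seconds are ignored in this naive conversion, so results are
# approximate at the level of a few seconds.
MET_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def met_to_utc(met_seconds: float) -> datetime:
    return MET_EPOCH + timedelta(seconds=met_seconds)

def utc_to_met(dt: datetime) -> float:
    return (dt - MET_EPOCH).total_seconds()

# The quoted start of the analysed interval.
print(met_to_utc(239557417))   # ~2008-08-04 15:43 UTC, i.e. 4 August 2008
```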
The data was reduced and analyzed following the standard procedures described in the _Fermi_-LAT documentation with _fermirolos_ version 2.0.8 using PSR3_SOURCE_V3 instrument response functions. The events in the energy range from 100 MeV to 300 GeV are selected from a circular region of interest (ROI) of \(12^{\circ}\) radius centered at the \(\gamma\)-ray location of PKS 0537-286 (RA= \(89.99\), Dec= \(-28.65\)), retrieving only events classified as _exclass=128_ and _evtype= 3_. A zenith angle cut less than \(90^{\circ}\) was introduced to remove the secondary \(\gamma\)-rays from the earth limb. The model file that includes point-like sources and background models was created based on the _Fermi_-LAT fourth source catalog (4FGL) incremental version (DR 3; Abdollahi et al., 2022), which is based on 12 years of initial _Fermi_-LAT operation and includes best-fit spectral parameters of all known 4FGL \(\gamma\)-ray sources in the ROI. The sources which are within \(17^{\circ}\) from the PKS 0537-286 location were included in the model file; the spectral parameters of the sources within \(12^{\circ}-17^{\circ}\) are fixed to their values reported in 4FGL while they are left free for the sources falling within \(<12^{\circ}\) radius. The galactic background and isotropic galactic emissions were modeled with the latest version available files, gll_iem_v07 and iso_P8R3_SOURCE_V3_v1, respectively. The \(\gamma\)-ray analysis is performed with _gtlike_ tool, following the binned likelihood method. Initially, the spectrum of PKS 0537-286 was modeled with a log-parabolic model as in 4FGL. However, the fit was also performed when assuming a power-law model for PKS 0537-286 \(\gamma\)-ray emission and the resulting model file was used in the light-curve calculations, because for shorter periods a power-law can be a good approximation of the spectrum. The significance of the source emission is estimated using test statistic (TS), which is defined by \(TS=2\)(lnL\({}_{1}-\)lnL\({}_{0}\)) where \(L_{1}\) and \(L_{0}\) are maximum likelihoods with and without the source, respectively (Mattox et al., 1996). In order to investigate the variability pattern of the source, the light curves were generated by two different methods. Initially, the whole time interval was divided into 5-day intervals and the photon index and flux of PKS 0537-286 were estimated by the unbinned analysis method from _gtlike_ tool. Next, in order to obtain a deeper insight into the \(\gamma\)-ray flux evolution in time, the adaptively binned light curve was computed (Lott et al., 2012). In this method, the bin widths above the optimal energy (\(E_{\rm opt}\)) are adjusted to have fixed uncertainty, so in the case of flux increase shorter intervals are estimated, whereas in the quiescent/normal states time bins are wider. This method has been proven to be a powerful tool in finding flaring periods in blazars (e.g., see Rani et al., 2013; Britto et al., 2016; Sahakyan & Gasparyan, 2017; Zargaryan et al., 2017; Baghmanyan et al., 2017; Gasparyan et al., 2018; Sahakyan et al., 2018; Sahakyan et al., 2021; Sahakyan et al., 2022; Sahakyan & Giommi, 2022). The adaptively binned light curve (\(>E_{\rm opt}=168.19\) MeV) is shown in Fig. 1 upper panel. Up to MJD 57740 the \(\gamma\)-ray flux was in its average level of \((1-3)\times 10^{-8}\) photon cm\({}^{-2}\) s\({}^{-1}\) with no significant changes, while then, in several occasions, the \(\gamma\)-ray flux increased substantially. 
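The detection-significance measure used here is simple to illustrate. In the sketch below the two log-likelihood values are arbitrary placeholders rather than outputs of the fits described in this section, and the sqrt(TS) rule of thumb assumes a single additional free parameter.

```python
import math

def test_statistic(lnL_with_source: float, lnL_without_source: float) -> float:
    """TS = 2 (lnL1 - lnL0), following Mattox et al. (1996)."""
    return 2.0 * (lnL_with_source - lnL_without_source)

# Placeholder log-likelihood values purely for illustration.
lnL1, lnL0 = -120431.7, -120512.4
ts = test_statistic(lnL1, lnL0)

# For one extra free parameter, TS behaves approximately as chi^2 with 1 d.o.f.,
# so sqrt(TS) is a convenient rough estimate of the significance in sigma.
print(f"TS = {ts:.1f}, approx. significance ~ {math.sqrt(ts):.1f} sigma")
```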
The light curve with 5-day (\(>100\) MeV) and adaptive bins (\(>E_{\rm opt}=168.19\) MeV) for the period when the source was active in the \(\gamma\)-ray band are shown correspondingly in Fig. 1 panels a) and b). The first flaring period was between MJD 57876-57883 when the flux increased with a maximum of \((5.26\pm 1.13)\times 10^{-7}\) photon cm\({}^{-2}\) s\({}^{-1}\). Starting from MJD 59170, the source entered an active emission state with several bright flaring periods between MJD 59204-59233, MJD 59301-59411 and MJD 59721-59738. The maximum \(\gamma\)-ray flux of the source, \((6.32\pm 1.11)\times 10^{-7}\) photon cm\({}^{-2}\) s\({}^{-1}\) was also observed in these \(\gamma\)-ray flaring periods. Fig. 1 panel c shows the \(\gamma\)-ray photon index estimated for the adaptively binned periods; it varies in time as well. In the non-flaring periods, the \(\gamma\)-ray spectrum is characterised by a soft spectrum with a mean of \(\Gamma\simeq 2.83\) but the photon index hardens during the bright periods as can be seen from Fig. 1 panel c. For example, during the first flare between MJD 57876-57883 the hardest index of \(2.49\pm 0.23\) was observed on MJD 57879.9 or during the second flare between MJD 59204-59233 the hardest index was \(2.25\pm 0.21\) when the source was in an active state with a flux of \((6.12\pm 1.22)\times 10^{-7}\) photon cm\({}^{-2}\) s\({}^{-1}\). During the hardest \(\gamma\)-ray emission period, \(2.23\pm 0.18\) was detected on MJD 59322 which is unusual for this source. ## 3 NuSTAR Data Analysis NuSTAR is a hard X-ray telescope sensitive in the 3-79 keV energy range (Harrison et al., 2013). NuSTAR with two focal plane modules (FPMs), FPMA and FPMB, observed PKS 0537-286 on 28 December 2020 (MJD 59211.99) and on 24 October 2020 (MJD 59146.17) for 97.1 ks and 24.3 ks, respectively. It should be noted that around these observations PKS 0537-286 was also monitored with Swift (see dashed blue lines in Fig. 1 panel d), so the X-ray spectrum of the source can be obtained in a large range of 0.3-79 keV. The NuSTAR data was analyzed applying the standard procedure and using _NuSTAR_Spectra_ tool developed in Middei et al. (2022). _NuSTAR_Spectra_ script downloads calibrated and filtered event files from the SSDC repository, uses _XIMAGE_ package to precisely locate the source's coordinate then extracts high-level scientific products for the detected sources using _unproducts_ routine. The script automatically sets the source extraction region radius depending on the source counts (usually in the range of 30-70 arcsec). The background is computed in an annulus centered on the source with a minimum separation of 50 arcsec between the inner and outer radii. Then, a spectral analysis is performed using the XSPEC package (Arnaud, 1996) adopting Cash statistics (Cash, 1979). More details on _NuSTAR_Spectra_ are available in Middei et al. (2022). The analysis shows that the X-ray photon index of PKS 0537-286 is the same in both observations - \(1.26\pm 0.06\) and \(1.26\pm 0.02\) on MJD 59146.17 and MJD 59211.99, respectively. The X-ray flux between 3-10 keV measured on MJD 59146.17 is \(\rm F_{3-10\,keV}=(2.72\pm 0.06)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) and on MJD 59211.99, it increased by about a factor of two, \(\rm F_{3-10\,keV}=(5.10\pm 0.04)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). 
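As a rough consistency check, the 3-10 keV fluxes quoted above can be turned into isotropic luminosities with L = 4 pi d_L^2 F. The sketch assumes the luminosity distance of 27.08 Gpc adopted later in the text (Section 7) and applies no K-correction.

```python
import math

GPC_CM = 3.0857e27                  # 1 Gpc in cm
D_L = 27.08 * GPC_CM                # luminosity distance adopted later in the text

def isotropic_luminosity(flux_cgs: float) -> float:
    """L = 4 pi d_L^2 F, with F in erg cm^-2 s^-1 (no K-correction applied)."""
    return 4.0 * math.pi * D_L**2 * flux_cgs

for mjd, flux in [("59146.17", 2.72e-12), ("59211.99", 5.10e-12)]:
    print(f"MJD {mjd}: L(3-10 keV) ~ {isotropic_luminosity(flux):.1e} erg/s")
# Both values come out at a few times 1e47 erg/s, consistent with the
# > 1e47 erg/s X-ray luminosities discussed in Section 7.
```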
Similarly, the flux between 10-30 keV also increased in these two observations, being correspondingly \(\rm F_{10-30\,keV}=(5.79\pm 0.20)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \(\rm F_{10-30\,keV}=(1.08\pm 0.01)\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\). This shows that the source in the 3.0-30 keV range was in an enhanced state on 28 December 2020. ## 4 Swift Data Analysis Swift is a space-based observatory with three main instruments onboard, namely burst alert telescope (BAT) sensitive in the energy Figure 1: Multiwavelength light curve of PKS 0537-286. Top panel shows the long-term adaptively binned \(\gamma\)-ray light curve above 168.19 MeV. The other panels show the light curves after MJD 57800 (16 February 2017) when the source was active in the \(\gamma\)-ray band. _a)_ adaptively binned \(\gamma\)-ray light curve, _b)_ 5-day binned \(\gamma\)-ray light curve (\(>100\) MeV), _c)_ \(\gamma\)-ray photon index measured for the adaptive time bins, _d)_ and _e)_ X-ray flux (2-10 keV) and photon index in different Swift observations. The dashed blue lines show the NuSTAR observation times. _f)_ Swift UVOT fluxes in V, B, U, W1, M2, and W2 bands. range of 3.0-150.0 keV, XRT sensitive in the energy range of 0.3-10.0 keV, and UVOT sensitive in the optical/UV band 170 - 650 nm (Gehrels et al., 2004). Swift performed 29 observations of PKS 0537-286 among which nine observations were performed before the lunch of _Fermi_-LAT. However, in order to investigate the flux changes in different years, we have analyzed all the data from Swift observations of PKS 0537-286. ### Swift XRT All the XRT observations were processed with _Swift_xrtproc_ tool applying standard analysis procedure (Giommi et al., 2021). _Swift_xrtproc_ downloads the raw data for each snapshot and for the whole observation, generates exposure maps and calibrated data product using the XRTPIPELINE task adopting standard parameters and filtering criteria. The source region counts are estimated from a circle of a radius of 20 pixels while the background counts from an annular region centred around the source with a radius sufficiently large to avoid contamination from source photons. The resultant ungrouped data is loaded in XSPEC (Arnaud, 1996) for spectral fitting using Cash statistics (Cash, 1979), modeling the source spectrum as power-law and log-parabola. As a result, the X-ray photon index in the energy range 0.3-10 keV and the flux in various bands are estimated. The 2-10 keV X-ray flux variation is shown in Fig. 1 panel d). Although in the X-ray band there is a limited number of observations, the flux variation is evident. The X-ray emission of the source in the 2.0-10 keV band was at the level of \(\sim 3.0\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) but during the bright periods it is \(\geq 5.0\times 10^{-12}\)erg cm\({}^{-2}\) s\({}^{-1}\). The highest X-ray flux of (\(8.34\pm 3.59\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) was observed on MJD 59213.18. The X-ray spectrum of the source is hard (Fig. 1 panel e) and during all the observations \(\rm{\Gamma_{X-ray}}\leq 1.38\). Therefore, as it is typical for FSRQs, the X-ray band defines the rising part of the second component in the SED. ### Swift UVOT In the same periods, UVOT observed PKS 0537-286 in V (500-60 nm), B (380- 500 nm), U (300- 400 nm), W1 (220-400 nm), M2 (200-280 nm) and W2 (180-260 nm) filters. All the available 28 observations were downloaded and reduced using HEAsoft version 6.29 with the latest release of HEASARC CALDB. 
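The two spectral shapes used in the XRT fits described above are easy to write down explicitly. The sketch below integrates a power-law and a log-parabola photon spectrum over 2-10 keV; the normalisation and curvature values are placeholders (only the photon index of about 1.3 is taken from the text), so the printed fluxes are illustrative rather than fitted values.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9

def powerlaw(E, N0, gamma, E0=1.0):
    """Photon spectrum dN/dE = N0 (E/E0)^-gamma  [ph cm^-2 s^-1 keV^-1]."""
    return N0 * (E / E0) ** (-gamma)

def logparabola(E, N0, alpha, beta, E0=1.0):
    """dN/dE = N0 (E/E0)^-(alpha + beta*log10(E/E0))."""
    return N0 * (E / E0) ** (-(alpha + beta * np.log10(E / E0)))

def band_energy_flux(model, e_lo, e_hi, **pars):
    """Numerically integrate E*dN/dE over [e_lo, e_hi] keV -> erg cm^-2 s^-1."""
    E = np.logspace(np.log10(e_lo), np.log10(e_hi), 2000)
    return np.trapz(E * model(E, **pars), E) * KEV_TO_ERG

# Placeholder normalisation chosen only so the result lands near the
# few x 1e-12 erg cm^-2 s^-1 level quoted above; photon index ~1.3 as in the text.
print(band_energy_flux(powerlaw, 2.0, 10.0, N0=4e-4, gamma=1.3))
print(band_energy_flux(logparabola, 2.0, 10.0, N0=4e-4, alpha=1.3, beta=0.1))
```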
Photometry was computed using a five-arcsecond source region centered on the sky position of PKS 0537-286 and the background counts are estimated from a twenty-arcsecond region away from the source. The magnitudes were derived using _uvotsource_ tool, then the fluxes were obtained using the conversion factors provided by Poole et al. (2008) which were corrected for extinction using the reddening coefficient \(E(B-V)\) from the Infrared Science Archive 1. Footnote 1: [http://irsa.ipac.caltech.edu/applications/DUST/](http://irsa.ipac.caltech.edu/applications/DUST/) Fig. 1 panel f) shows the light curve of PKS 0537-286 in optical/UV bands. The source is relatively faint in all the filters with the flux around \(\simeq 10^{-13}\)erg cm\({}^{-2}\) s\({}^{-1}\). In some cases, coinciding with the flares in the \(\gamma\)-ray band, the flux increased several times. The highest flux of the source was observed in V-band; on MJD 59213.18 and MJD 59732.67 it was \((1.08\pm 0.37)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \((1.38\pm 0.26)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), respectively. In addition, VOU-Blazar tool, which allows to search and collect all spectral information accessible through virtual observatory services and build the multiwavelength SEDs of blazars (Chang et al., 2020) was used to investigate the source emission properties in the infrared band. In particular, data extracted from the Wide-field Infrared Survey Explorer (WISE) and NEOWISE surveys (Mainzer et al., 2014) show that the source emission at 3.4 and 4.6 \(\mu m\) wavelengths (infrared) was at the level of several times \(10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\). ## 5 Multiwavelength SEDs The data analyzed in this paper allows to build the SEDs of PKS 0537-286 in different periods. The single snapshot SED provides substantial information on the source emission properties whereas the variation of these SEDs in time is crucial for understanding the dynamical changes in the emission components. For this purpose, we generated SED/Light curve animation of PKS 0537-286 by showing the \(\gamma\)-ray spectra with all available data sets. For each adaptively binned interval we performed \(\gamma\)-ray spectral analysis using the unbinned likelihood method implemented in _gtlike_ tool. Then, for each \(\gamma\)-ray period, together with the \(\gamma\)-ray data we plotted the Swift XRT, NuSTAR and Swift UVOT data as well as archival data extracted with VOU-blazar tool. By going from one to another \(\gamma\)-ray period we can investigate the changes in the multiwavelength SED of PKS 0537-286. The SED/light curve animation is available here youtube.com/4UPgf-C/EWc. As the blazar is at \(z=3.10\) the UVOT flux could be affected by absorption of neutral hydrogen in intervening Lyman-\(\alpha\) absorption systems (e.g., Ghisellini et al., 2011) which was corrected using the attenuation calculated in Ghisellini et al. (2010) for the UVOT filters. The SED/light curve animation shows the high amplitude changes observed in the \(\gamma\)-ray band; the gray background data points show the \(\gamma\)-ray flux estimated in different periods. Also, the spectral hardening in several bright \(\gamma\)-ray periods can be seen. ## 6 Origin of broadband emission In the previous section, the generated multiwavelength SEDs show the features of PKS 0537-286 emission in different periods. 
It is especially important to investigate the processes taking place in the jet of high redshift blazars as they could provide information on the jet plasma state in the early Universe. For this reason, the following periods were considered for modeling: * The SED between MJD 55150-55330 when the source was in the quiescent state, i.e., the \(\gamma\)-ray flux above 100 MeV was \((2.77\pm 0.84)\times 10^{-8}\) photon cm\({}^{-2}\) s\({}^{-1}\) and the 2-10 keV X-ray flux was \((3.29\pm 1.11)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). * The SED between MJD 59208-59212 when the source was bright in the \(\gamma\)-ray and X-ray (2-10 keV) bands with corresponding fluxes of \((5.46\pm 0.83)\times 10^{-8}\) photon cm\({}^{-2}\) s\({}^{-1}\) and \((7.47\pm 1.18)\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). This period coincides with the NuSTAR observation on MJD 59211.99 which showed that the source was in an elevated X-ray emission state also in the 3-30 keV range. The broadband SEDs were modeled using a one-zone leptonic scenario. In this model, it is assumed that the emission originates from a spherical blob of radius \(R\) moving with a bulk Lorentz factor of \(\pi\) at a small inclination angle of \(\theta\) with respect to the observer. Due to the relativistic motion and small \(\theta\) the radiation will be Doppler amplified by a factor of \(\delta\simeq\Gamma\). The emission region magnetized with a field strength of \(B\) is filled with relativistic electrons whose energy distribution is given by \[N_{\rm e}=N_{0}\ \gamma^{-p}\ exp(-\frac{\gamma}{\gamma_{\rm cut}})\qquad\gamma> \gamma_{\rm min} \tag{1}\] where \(\gamma=E_{\rm e}/m_{\rm e}c^{2}\) is the electron Lorentz factor, \(p\) is the power-law index, \(\gamma_{\rm min}\) and \(\gamma_{\rm crit}\) are the minimum and cut-off energy, respectively. The parameter \(N_{0}\) defines the electron energy density through \(U_{\rm e}=m_{\rm e}c^{2}\)\(\int\gamma N_{\rm e}\left(\gamma\right)d\gamma\). The electrons with energy distribution given by Eq. 1 undergo synchrotron losses under the magnetic field, producing the data observed between radio to X-ray bands. Instead, the second component in the SED, from X-rays to \(\gamma\)-rays, is from the inverse Compton scattering of internal and external photons on the same population of the electrons. When the electrons upscatter the synchrotron photons, the second component is explained by the SSC component (Ghisellini et al., 1985; Maraschi et al., 1992; Bloom & Marscher, 1996). Alternatively, if the emission region is within the BLR, the second component can be due to external Compton scattering of direct disk radiation (EIC-disk; Dermer et al., 1992; Dermer & Schlickeiser, 1994) and/or due to external Compton scattering of photon reflected from BLR clouds (EIC-BLR; Sikora et al., 1994). Instead, if the jet energy dissipation occurs at larger distances it can be due to external Compton scattering of dusty torus photons (EIC-torus; Blazejowski et al., 2000). In this paper, for a general view we consider three different scenarios: _i)_ the broadband emission from PKS 0537-286 is entirely due to synchrotron/SSC radiation, _ii)_ the jet dissipation region is close to the central black hole, and SSC, EIC-disk and EIC-BLR are contributing to the HE component and iii) the emission region is beyond the BLR and the HE component is due to EIC-torus. 
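Equation (1) and the associated energy density can be evaluated directly. The sketch below uses illustrative parameter values of the same order as the quiescent EIC-BLR solution reported later in Table 1, together with an arbitrary normalisation N_0, so the printed U_e only demonstrates the integration step.

```python
import numpy as np

M_E_C2 = 8.187e-7   # electron rest energy in erg

def electron_distribution(gamma, n0, p, gamma_cut, gamma_min):
    """Eq. (1): N_e(gamma) = N0 gamma^-p exp(-gamma/gamma_cut) for gamma > gamma_min."""
    n = n0 * gamma ** (-p) * np.exp(-gamma / gamma_cut)
    return np.where(gamma >= gamma_min, n, 0.0)

def electron_energy_density(n0, p, gamma_cut, gamma_min, gamma_max=1e7):
    """U_e = m_e c^2 * integral of gamma * N_e(gamma) d gamma  [erg cm^-3]."""
    gamma = np.logspace(np.log10(gamma_min), np.log10(gamma_max), 4000)
    integrand = gamma * electron_distribution(gamma, n0, p, gamma_cut, gamma_min)
    return M_E_C2 * np.trapz(integrand, gamma)

# Illustrative values only: p ~ 2.3, gamma_min ~ 26, gamma_cut ~ 300 (of the order
# of the quiescent EIC-BLR solution in Table 1); N0 is a placeholder in cm^-3.
u_e = electron_energy_density(n0=1.0, p=2.3, gamma_cut=3e2, gamma_min=26.0)
print(f"U_e ~ {u_e:.3e} erg cm^-3 (per unit N0)")
```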
It is assumed that BLR is a spherical shell (e.g., Donea & Protheroe, 2003) with lower and upper boundaries of \(0.9\times R_{\rm BLR}\) and \(1.2\times R_{\rm BLR}\), respectively. \(R_{\rm BLR}\) is assumed to scale as \(R_{\rm BLR}=10^{17}\)\(L_{\rm disc,45}^{0.5}\) cm where \(L_{\rm disc,45}=L_{\rm disc}/10^{45}\) erg s\({}^{-1}\) is the accretion disk luminosity (Ghisellini & Tavecchio, 2015). Similarly, we assume that the distance of dusty torus is \(2\times 10^{18}\)\(L_{\rm disc,45}^{0.5}\)(Ghisellini & Tavecchio, 2015) which emits \(\eta=0.5\) fraction of disk luminosity in the IR range for which we adopted \(T_{\rm torus}=10^{3}\) K effective temperature. The disk luminosity and effective temperature are correspondingly \(8.7\times 10^{46}\) erg s\({}^{-1}\) and \(T_{\rm disk}=1.9\times 10^{4}\) K estimated by fitting the thermal blue-bump component in the SED with a black-body component. The remaining free model parameters are \(p\), \(\gamma_{\rm min}\), \(\gamma_{\rm crit}\), \(U_{\rm e}\), \(B\) and \(R\) which should be constrained during the fitting. The SED fitting is performed using publicly available code JetSet which is a numerical code allowing to fit the radiative models to data and obtain the parameters statistically better explaining them (Massaro et al., 2006; Tramacere et al., 2009, 2011; Tramacere, 2020). These parameters are initially constrained by using the Minuit optimizer and then improved by Markov Chain Monte Carlo (MCMC) sampling of their distributions. We applied the EBL model from Franceschini et al. (2008) to correct the attenuation in the HE \(\gamma\)-ray band, but as the \(\gamma\)-ray data extends to several tens of GeV it affects only the model extrapolation to higher energies. ## 7 Results and Discussions At \(z=3.10\), PKS 0537-286 is one of the most powerful FSRQs in the extragalactic \(\gamma\)-ray sky; the time-averaged \(\gamma\)-ray luminosity of the source is \(1.90\times 10^{48}\) erg s\({}^{-1}\) (assuming a distance of 27.08 Gpc). However, in several occasions, the source shows bright \(\gamma\)-ray flares when the flux substantially increases and the spectrum hardens. Fig. 2 left panel shows the \(\gamma\)-ray luminosity of PKS 0537-286 versus the photon index. During the bright periods, the luminosity increases, being above \(10^{49}\) erg s\({}^{-1}\); the maximum \(\gamma\)-ray luminosity corresponds to \(6.14\times 10^{49}\) erg s\({}^{-1}\). It should be noted that among 113 adaptively binned intervals, the source luminosity was above \(10^{49}\) erg s\({}^{-1}\) in 25 intervals amounting 61.8 days when extreme \(\gamma\)-ray luminosity was observed. Photon index hardening with increasing luminosity/flux can be noticed in Fig. 2 left panel. In order to test possible correlation/anti-correlation between the luminosity and photon index, a Pearson correlation test was applied which yielded \(-0.39\) with a probability of \(P=1.2\times 10^{-5}\). This indicates moderate anti-correlation between the luminosity and photon index, that is when the source emission becomes brighter the photon index hardens (harder-when-brighter trend). It should be noted that for blazars such trend is frequently observed in different bands (e.g., Abdo et al., 2010, 2010, 2011; Bahramyan et al., 2017; Sahakyan & Giommi, 2021; Sahakyan, 2021; Gasparyan et al., 2018; Sahakyan & Giommi, 2022) which can be interpreted as interplay between acceleration and cooling of the electrons (Kirk et al., 1998). 
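The correlation test referred to above is a single library call. The sketch below applies it to synthetic placeholder data that mimic a harder-when-brighter trend, not to the actual adaptively binned measurements.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder data: 113 intervals (matching the number of adaptive bins),
# with photon indices that harden as the log-luminosity increases.
rng = np.random.default_rng(42)
log_lum = rng.uniform(48.0, 49.8, size=113)
photon_index = 2.8 - 0.3 * (log_lum - 48.0) + rng.normal(0.0, 0.1, size=113)

r, p_value = stats.pearsonr(log_lum, photon_index)
print(f"Pearson r = {r:.2f}, p = {p_value:.2e}")
# A negative r with a small p-value, as found in the text (-0.39, p ~ 1e-5),
# indicates that the spectrum hardens when the luminosity increases.
```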
PKS 0537-286 shows also interesting features in the X-ray band. The X-ray photon index versus the \(0.3-10\) keV X-ray luminosity is shown in Fig. 2 right panel. The X-ray emission is characterized by a hard spectrum (\(\Gamma_{\rm X-ray}<1.38\)) with a high luminosity (\(>10^{47}\) erg s\({}^{-1}\)). It should be noted that _XMM-Newton_ observations of PKS 0537-286 also showed a high luminosity of \(2\times 10^{47}\) erg s\({}^{-1}\) with a spectral index of \(1.27\pm 0.02\)(Re Reeves et al., 2001). There is no evidence of softening or hardening when the source gets brighter in the X-ray band; the highest luminosity in the X-ray band is \(8.74\times 10^{47}\) erg s\({}^{-1}\) observed on MJD 59213.18. Similarly, the \(3-30\) keV Figure 2: The luminosity versus the spectral index in the \(\gamma\)-ray (left panel) and X-ray bands (right panel). X-ray luminosity was \(1.40\times 10^{48}\) erg s\({}^{-1}\) on MJD 59211.99 and \(7.47\times 10^{47}\) erg s\({}^{-1}\) on MJD 59146.17. The SED of PKS 0537-286 was assembled in the flaring and quiescent periods (see Fig. 3). Comparing and contrasting the jet parameters obtained through modeling of the SED in different periods is crucial, allowing to understand the processes at work in the jet of PKS 0537-286. ### Synchrotron/SSC emission from the jet Fig. 3 panels a and b show the results of the modeling when the entire emission is due to synchrotron/SSC emission from a compact region of the jet when the source is in a quiescent and flaring state, respectively. The corresponding model parameters are given in Table 1. In the quiescent state, the SED modeling shows that the spectral slope of the emitting particle distribution is \(1.8\pm 0.1\) and their distribution extends up to \((1.2\pm 0.1)\times 10^{4}\). The strength of the magnetic field is found to be \((9.3\pm 0.8)\times 10^{-3}\) G. The emission region size is \((2.0\pm 0.1)\times 10^{17}\) cm, which is consistent with the flux variability of \(t_{\rm var}=(1+z)\times R/c\ \delta\approx 18.7\) days. The Doppler boosting factor is \(16.8\pm 1.2\) which is not different from the values usually estimated for FSRQs (e.g., see Ghisellini & Tavecchio 2015). In this case, the synchrotron component decreases at \(<10^{14}\) Hz and it does not take into account the observed optical/UV data which are interpreted as thermal emission from the accretion disk (see the next subsection). In the flaring period (Fig. 3 panel b), the SED modeling shows that the emitting electrons have a harder spectrum with \(p=1.6\pm 0.03\) as compared with that in the quiescent state. The electrons are accelerated up to \(\gamma_{\rm cut}=(1.1\pm 0.1)\times 10^{4}\) which is not significantly different from that in the quiescent state. In the flaring state, the magnetic field also increased, \(B=(1.7\pm 0.1)\times 10^{-2}\) G, which is caused by the increase of the synchrotron flux. Also, the Doppler boosting factor increased from \(16.8\pm 1.2\) to \(24.9\pm 1.1\) in order to explain the slight shift of the HE peak towards higher energies; above 100 MeV the \(\gamma\)-ray spectrum in the flaring period has a photon index of \(\Gamma_{\gamma}=2.73\pm 0.17\) as compared with that of \(\Gamma_{\gamma}=2.91\pm 0.16\) is the quiescent state. The modeling shows that during the flare, the emission is produced from a smaller region with a radius of \((1.6\pm 0.1)\times 10^{17}\) cm corresponding to \(t_{\rm var}\simeq 10.0\) days, which indicates that the flaring emission is from a compact and faster moving region. 
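The light-crossing relation used above to connect the emitting-region size with the observed variability can be checked numerically with the quoted radii and Doppler factors of the synchrotron/SSC solutions:

```python
C_CM_S = 2.998e10   # speed of light, cm/s
Z = 3.10            # redshift of PKS 0537-286
DAY_S = 86400.0

def variability_timescale_days(radius_cm: float, doppler: float) -> float:
    """t_var = (1 + z) * R / (c * delta), in days."""
    return (1.0 + Z) * radius_cm / (C_CM_S * doppler) / DAY_S

# Quiescent and flaring synchrotron/SSC solutions quoted above.
print(f"quiescent: {variability_timescale_days(2.0e17, 16.8):.1f} days")  # close to ~18.7 days
print(f"flaring:   {variability_timescale_days(1.6e17, 24.9):.1f} days")  # close to ~10 days
```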
### Emitting region within the BLR Fig. 3 panels c and d show the SED modeling assuming the jet dissipation occurred close to the central source. In this case, the density of disk-emitted and BLR-reflected photons in the jet frame (taking into account Doppler amplification) is comparable with or larger than that of synchrotron photons, so their inverse Compton scattering has a non-negligible contribution. The thermal emission from the accretion disk, modeled as a black body, is shown with a pink dashed line. In the quiescent state (Fig. 3 panel c), the low-energy component of the spectrum (up to \(10^{16}\) Hz) can be reasonably well explained by combined synchrotron and black body components. The data in the X-ray band is mostly dominated by the EIC-disk component (blue dashed line in Fig. 3) with SSC contributing in the soft X-ray band, whereas the emission in the \(\gamma\)-ray band is due to EIC-disk and EIC-BLR components (blue dashed and purple dot-dot-dashed lines in Fig. 3, respectively). As compared with the synchrotron/SSC model, the distribution of the electrons is narrower with \(p=2.3\pm 0.2\) and \(\gamma_{\rm cut}=(0.3\pm 0.1)\times 10^{3}\), because the average energy of the external photons is larger than that of the synchrotron one. The Doppler boosting factor is \(\delta=13.4\pm 1.3\) but the magnetic field is significantly larger, \(B=3.5\pm 0.4\) G. Since the flux in the HE band depends on the photon and particle density and the content of external photons is higher (inferred from the observed high Compton dominance, i.e., the ratio between the peak flux of inverse Compton and synchrotron components), the number of emitting electrons is reduced; to produce the synchrotron emission at the same level higher magnetic field is required. The emission is produced in a more compact region with a radius of \((0.2\pm 0.02)\times 10^{17}\) cm, smaller than \(R_{\rm BLR}=9.3\times 10^{17}\) cm. During the flaring period, in the optical/UV band, the synchrotron emission from the jet dominates over the thermal emission from the accretion disk (Fig. 3 panel d). As the synchrotron emission extends to higher frequencies, the SSC component makes a non-negligible contribution in the X-ray band (orange dot-dashed line in Fig. 3 panel d). The \(\gamma\)-ray emission is mostly due to the EIC-disk component (blue dashed line in Fig. 3 panel d) and EIC-BLR contributes at higher \(\gamma\)-ray energies (purple dot-dot-dashed line in Fig. 3 panel d). In this case, the electron distribution is nearly flat with \(p=2.2\pm 0.1\) and extends up to \((2.5\pm 0.2)\times 10^{3}\). The increase of the energy up to which the electrons are effectively accelerated (\(\gamma_{\rm cut}\)) resulted in the shift of the synchrotron component to higher frequencies and domination over the disk thermal emission. The Doppler boosting is \(\delta=11.4\pm 0.7\), the magnetic field is \(B=3.0\pm 0.2\) G and the emission region radius is \((0.1\pm 0.01)\times 10^{17}\) cm. This is smaller than that estimated in the quiescent state and corresponds to 1.5 days variability. ### Emitting region outside BLR Fig. 3 panels e and f show PKS 0537-286 SED modeling assuming the emission region is beyond the BLR. In the quiescent state, the HE component is entirely dominated by EIC-torus (yellow dot-dot-dashed line in Fig. 3 panel e) and SSC contributing in the soft X-ray band (orange dot-dashed line in Fig. 3 panel e). 
In the flaring state, the peak of the SSC component is around \(\sim 10^{22}\) Hz (orange dot-dashed line in Fig. 3 panel f) making a non-negligible contribution to the X-ray band, but again, the HE \(\gamma\)-ray data is solely explained by the EIC-torus component (yellow dot-dot-dashed line in Fig. 3 panel f). The model parameters given in Table 1 show that in the quiescent and flaring states the electron distribution has a similar power-law index \(p\simeq 2.4\), but in the flaring state the cut-off energy is larger, \(\gamma_{\rm cut}=(6.7\pm 0.7)\times 10^{3}\) as compared to \(\gamma_{\rm cut}=(1.3\pm 0.1)\times 10^{3}\). This is because _i)_ the synchrotron component should extend beyond the optical/UV band and _ii)_ during the flare the \(\gamma\)-ray spectrum is harder, shifting this component towards higher frequencies. Also, the modeling shows that the Doppler boosting and magnetic field do not substantially change, correspondingly being \(\delta=15.3\pm 0.7\) and \(B=0.2\pm 0.01\) G and \(\delta=14.2\pm 1.0\) and \(B=0.2\pm 0.02\) G for the flaring and quiescent states. However, again in the flaring state, the emission region has a slightly smaller radius \((1.4\pm 0.1)\times 10^{17}\) cm as compared with \((2.0\pm 0.1)\times 10^{17}\) cm. ### Jet luminosity The parameters estimated during the modeling are used to compute the jet luminosity. The jet power carried by the electrons, calculated as \(L_{e}=\pi cR_{\rm B}^{2}\Gamma^{2}U_{e}\), and by magnetic field, calculated as \(L_{B}=\pi cR_{\rm B}^{2}\Gamma^{2}U_{e}\), are given in Table 1. In the case of synchrotron/SSC scenario (Fig. 3 panels a and b), the jet should be strongly particle dominated with a luminosity of the order of \(\simeq 10^{47}\) erg s\({}^{-1}\) where the \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{SSC} & \multicolumn{2}{c}{EIC-BLR} & \multicolumn{2}{c}{EIC Torus} \\ \cline{2-7} & quiescent & flare & quiescent & flare & quiescent & flare \\ \hline \(p\) & \(1.8\pm 0.1\) & \(1.6\pm 0.03\) & \(2.3\pm 0.2\) & \(2.2\pm 0.1\) & \(2.4\pm 0.1\) & \(2.4\pm 0.1\) \\ \(\gamma_{\rm em}/10^{3}\) & \(11.8\pm 0.8\) & \(11.5\pm 0.6\) & \(0.3\pm 0.1\) & \(2.5\pm 0.2\) & \(1.3\pm 0.1\) & \(6.7\pm 0.7\) \\ \(\gamma_{\rm min}\) & \(9.5\pm 1.0\) & \(1.3\pm 0.1\) & \(26.1\pm 3.8\) & \(30.1+2.5\) & \(65.7\pm 3.4\) & \(103.9\pm 9.0\) \\ \(\delta\) & \(16.8\pm 1.2\) & \(24.9\pm 1.1\) & \(13.4\pm 1.3\) & \(11.4\pm 0.7\) & \(15.3\pm 0.7\) & \(14.2\pm 1.0\) \\ \(B\,[{\rm G}]\) & \((9.3\pm 0.8)\times 10^{-3}\) & \((1.7\pm 0.1)\times 10^{-2}\) & \(3.5\pm 0.4\) & \(3.0\pm 0.2\) & \(0.2\pm 0.01\) & \(0.2\pm 0.02\) \\ \(R\,[{\rm cm}]/10^{17}\) & \(2.0\pm 0.1\) & \(1.6\pm 0.1\) & \(0.2\pm 0.02\) & \(0.1\pm 0.01\) & \(2.0\pm 0.1\) & \(1.4\pm 0.1\) \\ \(L_{\rm e}[{\rm erg\ s^{-1}}]\) & \(1.2\times 10^{47}\) & \(1.2\times 10^{47}\) & \(1.3\times 10^{45}\) & \(2.0\times 10^{45}\) & \(1.1\times 10^{46}\) & \(1.3\times 10^{46}\) \\ \(L_{\rm h}[{\rm erg\ s^{-1}}]\) & \(3.5\times 10^{42}\) & \(1.8\times 10^{43}\) & \(2.2\times 10^{45}\) & \(5.3\times 10^{44}\) & \(1.3\times 10^{45}\) & \(7.9\times 10^{44}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters best describing the PKS 0537-286 multiwavelength emission in the scenarios shown in Fig. 3. Figure 3: The broadband SEDs of PKS 0537-286 in the quiescent (left panels) and flaring (right panels) states. Panels a and b correspond to SED modeling when the emission is entirely due to synchrotron/SSC radiation. 
The SED modeling when the emission region is within the BLR is shown in panels c and d, while in panels e and f it is outside the BLR. The blue solid curve shows the sum of all components and the gray shaded area is the uncertainty region from MCMC sampling of the parameters. The color code of the other components is given in the legends. In panels a and b, the observed and absorption-corrected Swift UVOT fluxes are shown with empty and filled light blue circles, respectively. magnetic luminosity has a marginal contribution (\(L_{\rm e}/L_{\rm B}\simeq 3\times 10^{4}\)). This is natural, given the SED of PKS 0537-286 in both quiescent and flaring periods shows strong Compton dominance. Relatively high luminosity is also estimated in the modeling when the emitting region is outside the BLR; \({\rm L_{\rm e}+L_{\rm B}=1.2\times 10^{46}\ erg\ s^{-1}}\) with \(L_{\rm e}/L_{\rm B}=8.1\) and \({\rm L_{\rm e}+L_{\rm B}=1.4\times 10^{46}\ erg\ s^{-1}}\) with \(L_{\rm e}/L_{\rm B}\simeq 17\) for the quiescent and flaring states, respectively. When the emitting region is within the BLR, a lower jet luminosity is required, \({\rm L_{\rm e}+L_{\rm B}=6.3\times 10^{45}\ erg\ s^{-1}}\) and \({\rm L_{\rm e}+L_{\rm B}=2.5\times 10^{45}\ erg\ s^{-1}}\) for the quiescent and flaring states, respectively, and the system is close to equipartition with \(L_{\rm e}/L_{\rm B}=0.5\) and \(L_{\rm e}/L_{\rm B}=3.7\) for the quiescent and flaring states, respectively. Also, the total kinetic energy of the jet, defined as \(L_{\rm kin}=L_{\rm e}+L_{B}+L_{\rm p,cold}\), can be evaluated. Assuming a proton-to-electron comoving number density ratio of \(N_{p}/N_{e}\simeq 0.1\), in the most energy demanding model (synchrotron/SSC) \(L_{\rm kin}=3.2\times 10^{47}\ erg\ s^{-1}\) and \(L_{\rm kin}=4.4\times 10^{47}\ erg\ s^{-1}\) are estimated for the quiescent and flaring states, respectively. In the most optimistic scenario (EIC-BLR), \(L_{\rm kin}=1.1\times 10^{46}\ erg\ s^{-1}\) and \(L_{\rm kin}=6.0\times 10^{45}\ erg\ s^{-1}\) for the quiescent and flaring states, respectively. In this case, even if assuming \(N_{p}/N_{e}\simeq 1\), one would obtain \(L_{\rm kin}=4.8\times 10^{46}\ erg\ s^{-1}\) and \(L_{\rm kin}=3.8\times 10^{46}\ erg\ s^{-1}\) for the quiescent and flaring states, respectively. It is interesting that this luminosity is still lower than the disk luminosity estimated from the optical/UV data fitting. However, considering that the presence of the pairs can reduce the jet power (e.g., Pjanka et al. 2017), the estimated jet luminosity will be by several factors lower than the accretion disk luminosity. In Bottacini et al. (2010), the broadband SED of PKS 0537-286 was modeled assuming the particles are injected into the emitting region, which is inside the BLR, and interact with the internal and external photon fields. The disk luminosity was estimated to be \(1.7\times 10^{47}\ erg\ s^{-1}\) and \(8.4\times 10^{46}\ erg\ s^{-1}\) by fitting the data observed in 2008 and 2006, respectively; the latter value is similar to the disk luminosity estimated in the current study. Their modeling showed that \(L_{\rm e}\) is in the range of \((2.5-4.0)\times 10^{46}\ erg\ s^{-1}\) and \(L_{\rm B}\) in \((1.0-2.0)\times 10^{45}\ erg\ s^{-1}\). \(L_{\rm e}\) is slightly larger than that estimated in the current study (see Table 1) which is related with different assumptions made in the modelings (e.g., emitting region radius, Doppler boosting factor, etc.). 
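The jet powers discussed above follow directly from the fitted parameters. The sketch below takes delta approximately equal to Gamma, as assumed in the model description, uses the magnetic energy density U_B = B^2/(8 pi), and recovers the magnetic luminosities of the synchrotron/SSC solutions in Table 1 from the quoted B, R and delta values.

```python
import math

C_CM_S = 2.998e10   # speed of light, cm/s

def magnetic_jet_power(b_gauss: float, radius_cm: float, bulk_lorentz: float) -> float:
    """L_B = pi * c * R^2 * Gamma^2 * U_B, with U_B = B^2 / (8 pi)  [erg s^-1]."""
    u_b = b_gauss**2 / (8.0 * math.pi)
    return math.pi * C_CM_S * radius_cm**2 * bulk_lorentz**2 * u_b

# Synchrotron/SSC solutions of Table 1, taking Gamma ~ delta.
print(f"SSC quiescent: L_B ~ {magnetic_jet_power(9.3e-3, 2.0e17, 16.8):.1e} erg/s")
print(f"SSC flare:     L_B ~ {magnetic_jet_power(1.7e-2, 1.6e17, 24.9):.1e} erg/s")
# Compare with the 3.5e42 and 1.8e43 erg/s listed for L_B in Table 1.
```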
## 8 Summary In this work, we have performed a broadband study of the high redshift blazar PKS 0537-286. The main findings are summarized as follows: * PKS 0537-286 is among the highest redshift blazars detected by _Fermi_-LAT. Its \(\gamma\)-ray emission, monitored since 2008, appeared relatively constant until 2017, then multiple powerful \(\gamma\)-ray flares were observed. Starting from MJD 59170, the source was in an enhanced \(\gamma\)-ray emission state when the \(\gamma\)-ray luminosity reached \(6.14\times 10^{49}\ erg\ s^{-1}\). During the considered fourteen years, the \(\gamma\)-ray luminosity of the source exceeded \(10^{49}\ erg\ s^{-1}\) for 61.8 days in total. * The \(\gamma\)-ray photon index of the source varies as well. The mean of the \(\gamma\)-ray photon index during non flaring periods is \(\simeq 2.83\) which substantially hardens during the flares: the hardest index of \(2.23\pm 0.18\) was observed on MJD 59322. There is a moderate anti-correlation between the \(\gamma\)-ray photon index and luminosity. * The source is very luminous in the X-ray band with a \(0.3-10\) keV luminosity between \(10^{47}-10^{48}\ erg\ s^{-1}\) and with a hard spectrum (\(\Gamma_{\rm X-ray}<1.38\)). The available NuSTAR observations show that the hard X-ray spectrum extends up to 30 keV with \(\Gamma_{\rm X-ray}=1.26\) with a luminosity between \((0.75-1.40)\times 10^{48}\ erg\ s^{-1}\). * In order to understand the underlying physical processes at work in the jet of PKS 0537-286, the SEDs during the quiescent and flaring states were reproduced using a simple one-zone leptonic emission model considering different locations of the emission region. In the quiescent state, the combined synchrotron and thermal accretion disk components can explain the IR-optical-UV data, whereas X-ray to HE \(\gamma\)-ray data are due to inverse-Compton scattering of the disk and BLR-reflected photons. Instead, in the flaring state, the jet synchrotron emission dominates in the optical/UV band and the X-ray to HE \(\gamma\)-ray emission is due to combination of SSC, EIC-disk and EIC-BLR components. The modeling in the quiescent to flaring states showed that the flare was caused by the electron distribution changes, i.e., the electron power-law index hardened to \(p=2.2\pm 0.1\) and the cut-off energy was \(\gamma_{\rm cut}=(2.5\pm 0.2)\times 10^{3}\). * From the required jet energy point of view, the model with the emission region within the BLR is preferred. During the flaring event, the emitting region is nearly in equipartition with \(L_{\rm e}/L_{\rm B}=3.7\) and the jet total luminosity is \(L_{\rm tot}=3.8\times 10^{46}\ erg\ s^{-1}\) when assuming a proton-to-electron comoving number density ratio of \(N_{p}/N_{e}\simeq 1\). This luminosity is slightly lower than the accretion disk luminosity of \(L_{\rm disc}=8.7\times 10^{46}\ erg\ s^{-1}\) estimated through fitting of UV/optical data. Among the high red-shift blazars, PKS 0537-286 is exceptional, having a reach multiwavelength data set (especially in the X-ray and \(\gamma\)-ray bands) which allows to investigate the processes taking place in the jet. Further multiwavelength monitoring of such distant and powerful sources will improve our understanding of the radiative processes at work in the relativistic jets in the early Universe. 
## Acknowledgements We acknowledge the use of data, analysis tools and services from the Open Universe platform, the ASI Space Science Data Center (SSDC), the High Energy Astrophysics Science Archive Research Center (HEASARC), the Fermi Science Tools, the Astrophysics Data System (ADS), and the NASA/IPAC Extragalactic Database (NED). This work was supported by the Science Committee of the Republic of Armenia, within the framework of research project No 21T-IC260. This work used resources from the ASNET cloud. ## Data Availability All the data used in this paper are public and available from the Swift, Fermi and NuSTAR archives. The _Fermi_-LAT, Swift XRT/UVOT and NuSTAR data analyzed in this paper can be shared upon reasonable request to the corresponding author.
2310.07493
Diversity for Contingency: Learning Diverse Behaviors for Efficient Adaptation and Transfer
Discovering all useful solutions for a given task is crucial for transferable RL agents, to account for changes in the task or transition dynamics. This is not considered by classical RL algorithms that are only concerned with finding the optimal policy, given the current task and dynamics. We propose a simple method for discovering all possible solutions of a given task, to obtain an agent that performs well in the transfer setting and adapts quickly to changes in the task or transition dynamics. Our method iteratively learns a set of policies, while each subsequent policy is constrained to yield a solution that is unlikely under all previous policies. Unlike prior methods, our approach does not require learning additional models for novelty detection and avoids balancing task and novelty reward signals, by directly incorporating the constraint into the action selection and optimization steps.
Finn Rietz, Johannes Andreas Stork
2023-10-11T13:39:35Z
http://arxiv.org/abs/2310.07493v1
# Diversity for Contingency: Learning Diverse Behaviors ###### Abstract Discovering all useful solutions for a given task is crucial for transferable RL agents, to account for changes in the task or transition dynamics. This is not considered by classical RL algorithms that are only concerned with finding the optimal policy, given the current task and dynamics. We propose a simple method for discovering all possible solutions of a given task, to obtain an agent that performs well in the transfer setting and adapts quickly to changes in the task or transition dynamics. Our method iteratively learns a set of policies, while each subsequent policy is constrained to yield a solution that is unlikely under all previous policies. Unlike prior methods, our approach does not require learning additional models for novelty detection and avoids balancing task and novelty reward signals, by directly incorporating the constraint into the action selection and optimization steps. ## I Introduction The standard reinforcement learning (RL) approach [1] learns deterministic policies [2, 3, 4] for each task from scratch, despite the notorious sample inefficiency of deep RL algorithms. Instead, it would be preferable to learn transferable and reusable policies and to adapt them to different downstream tasks, with a fraction of data and compute needed compared to learning from scratch. A promising approach for learning transferable RL agents is multi-objective RL (MORL), where vectorized value functions can be shared for many tasks [9, 10, 11]. A key requirement for learning such transferable agents is to allow stochasticity and diversity in the learned behavior [5, 6], as opposed to learning one overly specific, deterministic policy. While MaxEnt RL [7, 8, 9] regularizes policies in an attempt to prevent them from becoming overly specific, entropy-regularized (MO) RL is not sufficient for inducing agents that learn all behaviors that solve the given tasks, as can be seen in Fig. 1. To adapt transferred agents efficiently it is important to discover _all_ useful behaviors, to account for possible contingencies, e.g. parts of the middle pathway becoming blocked. In this paper, we first review MaxEnt RL and methods that learn diverse behaviors, either unsupervised or for a given task. In Sec. III, we then propose a novel method for learning policies that discover different solutions for the given task, accounting for possible contingencies in transfer settings. ## II Related work ### _Entropy regularization_ A common approach to learning stochastic policies with wide and smooth maxima is MaxEnt RL [8, 7]. Maximum entropy (MaxEnt) RL augments the RL objective by adding a term proportional to the policy's entropy to the reward \[J(\pi)=\sum_{t=1}^{\infty}\mathbb{E}_{(\mathbf{s}_{t},\mathbf{a}_{t})}\bigg{[} \gamma^{t-1}r(\mathbf{s}_{t},\mathbf{a}_{t})+\alpha\mathcal{H}\big{(}\pi(\cdot \mid\mathbf{s}_{t})\big{)}\bigg{]}, \tag{1}\] where \(\mathcal{H}(X)=\mathbb{E}[-\log p(x)]\) is Shannon's entropy, thereby punishing unnecessarily deterministic policies. The coefficient \(\alpha\) balances the reward and the entropy terms, thereby giving some control over the stochasticity in the learned policy, however, this coefficient is usually annealed towards zero as training progresses. The primary algorithm for MaxEnt RL is soft actor-critic (SAC) [12, 13], which learns an on-policy, soft Q-function \(Q_{\text{soft}}^{\pi}\) for an univariate Gaussian actor model. As can be seen in Fig. 
1, SAC learns one behavior (with local variations) but disregards other behaviors that reach the goal. There are two reasons for this. Firstly, SAC's actor model is unimodal and thus cannot capture all possible modes, e.g. at the forks or intersections in the environment. While some prior works [14, 8] can learn multi-modal, entropy-regularized policies, multi-modality is not the key requirement for learning diverse behaviors. The second and more important reason why SAC disregards the other possible behaviors is that they are clearly sub-optimal, since their trajectories are longer and have higher costs compared to driving straight down the middle, from start to goal. Fig. 1: Rollouts generated by a soft actor-critic agent. The behavior exhibits local stochasticity but does not learn the alternative paths. Discovering all possible solutions is crucial for transfer and adaptation. RL is fundamentally only concerned with finding one optimal policy that solves the task; whether alternative solutions are possible is not considered, although this is crucial for transfer RL. In the next section, we review methods that, unlike classical and MaxEnt RL, account for this and explicitly aim to learn diverse behavior alongside the optimal policy. ### _Learning diverse behaviors_ Popular approaches to learning diverse behaviors originate from unsupervised _option_[15] discovery [16, 17]. One such method is DIAYN [18], which discovers distinct behaviors in an unsupervised manner and in the absence of a reward function, by maximizing the mutual information between behaviors and states [18]. Similarly, VALOR [19] discovers distinct behaviors by maximizing the mutual information between behaviors and context vectors [19], again without access to a reward function. Both of these methods subsequently use the learned behaviors as low-level options in a hierarchical RL agent [15] to solve downstream tasks efficiently. In this paper, we instead assume access to the reward function from the beginning and wish to exploit this information during learning, to discover alternative solutions to the given task. In this setting, Zhang, Yu, and Turk [5] learn multiple distinct policies for a task reward function \(r_{\text{task}}\) by training an autoencoder, collected in \(\mathbf{D}=\{\mathcal{D}_{1},\ldots,\mathcal{D}_{n}\}\), for each available policy \(\pi_{1},\ldots,\pi_{n}\) on state sequences \(\mathbf{S}_{i}=(\mathbf{s}_{t},\mathbf{s}_{t+1},\ldots,\mathbf{s}_{t+K})\) of that policy and constructing a _novelty_ reward function \[r_{\text{novel}}=-\exp\big{(}-w\underset{\mathcal{D}\in\mathbf{D}}{\min}|| \mathcal{D}(\mathbf{S})-\mathbf{S}||^{2}\big{)}. \tag{2}\] Zhang, Yu, and Turk [5] then update the policy using the angular bisector of the gradients on the expected novelty and task reward, to ensure that both objectives are improved. Similarly, Zhou et al. [20] learn distinct policies for a given task by constraining policy search to trajectories \(\tau\) that have low log-likelihood under already learned policies. To promote diverse exploration, Zhou et al. 
[20] define, in addition to the extrinsic task reward \(r^{\text{ext}}\), an intrinsic reward function \(r^{\text{int}}\) based on learned, policy-specific reward models: \[\bar{J}(\theta)=\mathbb{E}_{\tau\sim\pi_{\theta}}\bigg{[}\phi(\tau)\sum_{t=1 }^{\infty}\gamma^{t-1}r_{t}^{\text{ext}}+\lambda\big{(}1-\phi(\tau)\big{)} \sum_{t}r_{t}^{\text{int}}\bigg{]}, \tag{3}\] where \[\phi(\tau)=\prod_{j=1}^{k-1}\mathbb{I}[\text{NLL}(\tau,\pi_{j})\geq\delta] \tag{4}\] is an indicator function on the negative log-likelihood of trajectories with threshold \(\delta\). While [5, 20] exploit the task reward signal for learning novel policies for the given task, these methods either require learning additional novelty detectors, have to balance multiple reward signals, or rely on expensive Monte Carlo updates. In the next section, we propose a simple method for discovering alternative solutions for a given task, while avoiding these shortcomings. ## III Learning contingent policies via novelty constraints We propose an iteratively-constrained algorithm for learning alternative policies that can recover from contingent events. In each iteration, our algorithm learns a novel policy that attempts to solve the task, while its solution space is constrained to behavior that is unlikely under all previous policies for that task. Unlike [5, 20], we refrain from changing the agent's objective by introducing auxiliary novelty rewards; our agent still maximizes the expected task reward (subject to entropy-regularization), as in Eq. (1). This avoids the trade-off between the (potentially conflicting) objectives of maximizing return and behaving novel. Instead and intuitively, to learn novel behavior, the agent should, in every state, only execute actions that are unlikely under prior policies for the same task. Following this intuition, we constrain policy search in the \(i\)-th iteration (i.e. learning of the \(i\)-th policy) to a set \(\Pi_{i-1}^{\pi}\) of policies, where policies in this set only select actions that are unlikely under all prior policies: \[\pi_{i}^{*}=\max_{\pi^{\prime}}\,J(\pi^{\prime})\mid\pi^{\prime}\in\Pi_{i-1}^{ \pi}. \tag{5}\] To perform policy search as in Eq. 5, the agent needs a way to sample actions from policies in \(\Pi_{i-1}^{\pi}\). Implementing Eq. (5) locally and state-based, action selection for policies in \(\Pi_{i-1}^{\pi}\) is constrained: \[\mathbf{a} \sim\pi_{i}(\mathbf{s})\] (6) subject to \[\pi_{j}(\mathbf{s},\mathbf{a})\leq\varepsilon_{j},\forall j\in\{1, \ldots,i-1\},\] where \(\varepsilon_{j}\) are thresholds specifying the maximally allowed action likelihood under policies from previous iterations \(1,\ldots,i-1\). Fig. 2 provides a visualization of this constraint and how it forbids actions that fall into the high-likelihood region of prior policies. Fig. 2: Q-function, action samples, and high-likelihood region of the agent evaluated at the starting position. Novelty constraints on the policy likelihood prevent the agent from using actions that fall into the high-likelihood region of prior policies. For each previous policy, we define an indicator function \[\mathbb{I}_{j}^{\pi}(\mathbf{s},\mathbf{a})=\begin{cases}1&\text{if }\pi_{j}( \mathbf{s},\mathbf{a})\leq\varepsilon_{j}\\ 0&\text{otherwise},\end{cases} \tag{7}\] that can be used to check whether an action \(\mathbf{a}\) in state \(\mathbf{s}\) satisfies the _novelty constraint_ in Eq. (6). 
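As a concrete illustration of Eqs. (6)–(8), the following sketch (not the authors' implementation) checks the novelty-constraint indicators and draws actions from the projected policy by rejection sampling; policy objects with `log_prob` and `sample` methods are assumed interfaces.

```python
import numpy as np

def satisfies_novelty_constraint(state, action, prior_policies, epsilons):
    """Indicator of Eq. (7): True iff the action is unlikely under every prior policy."""
    return all(
        np.exp(pi_j.log_prob(state, action)) <= eps_j
        for pi_j, eps_j in zip(prior_policies, epsilons)
    )

def sample_constrained_action(state, policy, prior_policies, epsilons, max_tries=100):
    """Rejection sampling from the projected policy hat{pi}_i of Eq. (8)."""
    action = policy.sample(state)                     # proposal from pi_i
    for _ in range(max_tries):
        if satisfies_novelty_constraint(state, action, prior_policies, epsilons):
            return action                             # accepted: action satisfies Eq. (6)
        action = policy.sample(state)                 # rejected: draw a new proposal
    return action                                     # fallback after max_tries draws
```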
With the novelty constraint indicator functions we can project any policy into \(\Pi_{i-1}^{\pi}\): \[\hat{\pi}_{i}(\mathbf{a}\mid\mathbf{s})\propto\pi_{i}(\mathbf{a}\mid\mathbf{ s})\prod_{j=1}^{i-1}\mathbb{I}_{j}^{\pi}(\mathbf{s},\mathbf{a}). \tag{8}\] Projecting policies into \(\Pi_{i-1}^{\pi}\) and sampling from \(\hat{\pi}_{i}\) via rejection sampling is thus straightforward; however, we still require an algorithm for learning policies \(\pi_{i}\) whose projections \(\hat{\pi}_{i}\) perform well. We propose such a learning algorithm in the next section. ### _Iterative novelty-constrained SAC_ In the \(i\)-th iteration of the novelty-constrained setting, the agent's true (novelty-constrained) policy is \(\hat{\pi}_{i}\), to which we only have access via rejection sampling. Thus, a learning algorithm for \(\pi_{i}\) does not learn the agent's true policy but a proposal distribution for \(\hat{\pi}_{i}\). To account for this, we propose an iterative and novelty-constrained version of SAC [12]. Learning a critic for \(\hat{\pi}_{i}\), the novelty-constrained policy, is straightforward by ensuring that the expectation over future actions in the TD-backup matches the (novelty-constrained) actor: \[J_{Q}(\theta_{i})=\mathbb{E}_{\mathbf{s}_{t},\mathbf{a}_{t}, \mathbf{s}_{t+1}\sim\mathcal{D}}\bigg{[}\frac{1}{2}\big{(}Q_{\theta_{i}}( \mathbf{s}_{t},\mathbf{a}_{t})\!-\!\hat{Q}_{\theta_{i}}(\mathbf{s}_{t}, \mathbf{a}_{t})\big{)}^{2}\bigg{]}, \tag{9}\] with \[\hat{Q}_{\theta_{i}}(\mathbf{s}_{t},\mathbf{a}_{t}) =r(\mathbf{s}_{t},\mathbf{a}_{t})+\gamma \tag{10}\] \[\mathbb{E}_{\mathbf{a}_{t+1}\sim\hat{\pi}_{i}}\big{[}Q_{\bar{ \theta}_{i}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})-\log(\pi_{i}(\mathbf{a}_{t+1} \mid\mathbf{s}_{t+1}))\big{]},\] where \(\bar{\theta}_{i}\) refers to the target network parameters for \(\hat{\pi}_{i}\)'s critic. Similarly, the actor update must reflect the novelty constraint and rejection sampling step as well. A key property of SAC is that it updates the actor by minimizing the KL divergence between the actor and the critic: \[J_{\pi}(\phi) =\mathbb{E}_{\mathbf{s}_{t}\sim\mathcal{D}}\bigg{[}\text{D}_{ \text{KL}}\bigg{(}\pi_{\phi}(\cdot\mid\mathbf{s}_{t})\bigg{\|}\ \frac{\exp(Q_{\theta_{i}}(\mathbf{s}_{t},\cdot))}{Z_{\theta_{i}}(\mathbf{s}_{t})} \bigg{)}\bigg{]} \tag{11}\] \[=\mathbb{E}_{\mathbf{s}_{t}\sim\mathcal{D},\mathbf{a}_{t}\sim \pi_{\phi}}\big{[}\log\pi_{\phi}(\mathbf{a}_{t}\mid\mathbf{s}_{t})-Q_{\theta_ {i}}(\mathbf{s}_{t},\mathbf{a}_{t})\big{]}.\] In our case, we can still backpropagate through the critic; however, only for action samples that have low likelihood under previous policies and satisfy the novelty constraint (i.e. for which \(\prod_{j=1}^{i-1}\mathbb{I}_{j}^{\pi}(\mathbf{s},\mathbf{a})=1\)). Action samples that violate the novelty constraint, i.e. are likely under previous policies, should follow a different gradient because they would be discarded by the rejection sampling step. Since these actions are never executed, the critic never observes a reward signal for those actions and hallucinates unreliable value estimates whose gradients are not suited for learning the novelty-constrained actor. Thus, to account for the rejection sampling step and to encourage learning an actor that respects the novelty constraint in Eq. (6), for actions that violate any of the \(i-1\) constraints, we instead use the gradient of the KL divergence between the current policy \(\pi_{i}\) and the policies whose constraints are violated. 
This leads to the following actor update for the proposed, novelty-constrained SAC: \[J_{\pi}(\phi_{i})=\mathbb{E}_{\mathbf{s}_{t}\sim\mathcal{D},\mathbf{a}_{t}\sim\pi_{\phi_{i}}}\bigg{[}\prod_{j=1}^{i-1}\mathbb{I}_{j}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\Big{(}\log\pi_{\phi_{i}}(\mathbf{a}_{t}\mid\mathbf{s}_{t})-Q_{\theta_{i}}(\mathbf{s}_{t},\mathbf{a}_{t})\Big{)}+\sum_{j=1}^{i-1}\big{(}1-\mathbb{I}_{j}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\big{)}\,\text{D}_{\text{KL}}\big{(}\pi_{\phi_{i}}(\cdot\mid\mathbf{s}_{t})\,\big{\|}\,\pi_{j}(\cdot\mid\mathbf{s}_{t})\big{)}\bigg{]}. \tag{12}\] Thus, our iterative algorithm for learning contingent behaviors operates as follows. The first policy \(\pi_{1}\) is learned unconstrained using normal SAC [9]; thus \(\pi_{1}\) is the optimal soft policy for the given task. Once \(\pi_{1}\) has converged, our algorithm proceeds with the learning of \(\pi_{2}\), which is novelty-constrained w.r.t \(\pi_{1}\), meaning it has to solve the given task as well as possible while respecting the constraint in Eq. (6). \(\pi_{2}\) is learned using the critic update in Eq. (9) and the actor update in Eq. (12), and relies on rejection sampling to generate actions from \(\hat{\pi}_{2}\). Once \(\pi_{2}\) has converged, \(\pi_{3}\) can be learned, being novelty-constrained w.r.t \(\pi_{1}\) and \(\pi_{2}\), and so on. In the next section, we show how these additional policies can be used to recover from unforeseen events in the transfer task. ## IV Recovering from unforeseen events using contingency policies When we transfer a pre-trained agent to a new task, the agent can be exposed to situations that require it to deviate from the behavior learned during pre-training. For example, one such event might be when the middle path in Fig. 1 becomes blocked. Fig. 3: Rollouts generated by our backtracking algorithm. The algorithm attempts to recover by rolling out the available contingency policies, followed by rollouts of the optimal policy. If unsuccessful, the agent backtracks to the next checkpoint and executes the contingency policies and optimal policy again. This repeats until the task is finished successfully or all checkpoints are exhausted. Our proposed method accounts for this by learning additional policies during pre-training, to be used to recover from such situations. To know when we should use one of the contingency policies instead of the optimal policy, we require a method for detecting contingent events, where the optimal pre-trained policy behaves sub-optimally. In the scope of this workshop paper, we simply rely on \(\Delta\)s, i.e. changes in the state variable, to detect such events and leave a more sophisticated method as future work. Given we detect a contingency, the agent iterates between backtracking for \(k\) steps, running a contingency policy for \(m\) steps and then rolling out the optimal policy. This uninformed process does not require additional knowledge, e.g. a model of the environment transition dynamics, and works well in practice, as seen in Figure 3. This is in contrast to a baseline comparison in Fig. 4, where the agent executes random actions to recover from the contingencies, instead of the recovery policies learned with our proposed method. There are a number of points that should be addressed in future work. Instead of treating \(k\) and \(m\) as hyperparameters, it would be preferable to automatically identify states to backtrack to and to automatically decide for how long and which contingency policy to execute. For the result in Fig. 3, we manually selected values that were adequate for our simple testing environment, which is not practical for more sophisticated problems. 
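A rough sketch of this backtracking recovery loop is given below (illustrative only, not the authors' implementation); the environment interface, the policies, and the contingency test based on state changes \(\Delta\)s are assumptions, and the checkpoints stand in for the backtracked states (e.g. every \(k\)-th state of the failed rollout).

```python
import numpy as np

def contingency_detected(state, prev_state, tol=1e-3):
    """Crude contingency test used in the paper: the state barely changes over a step."""
    return np.linalg.norm(np.asarray(state) - np.asarray(prev_state)) < tol

def recover(env, optimal_policy, contingency_policies, checkpoints, m=20, horizon=500):
    """Backtrack to a checkpoint, run a contingency policy for m steps, then hand
    control back to the optimal policy; repeat over checkpoints until success.

    `checkpoints` holds earlier states of the failed rollout (e.g. every k-th state);
    `env` is an assumed interface with set_state() and step() -> (next_state, done).
    """
    for checkpoint in checkpoints:
        for policy in contingency_policies:
            env.set_state(checkpoint)
            state = checkpoint
            for t in range(horizon):
                active = policy if t < m else optimal_policy
                next_state, done = env.step(active.sample(state))
                if done:
                    return True                      # task finished successfully
                if contingency_detected(next_state, state):
                    break                            # stuck again; try the next option
                state = next_state
    return False                                     # all checkpoints exhausted
```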
We leave these points, as well as more thorough experimentation and baseline comparisons, as important future work.
2310.06799
The Relational Interpretations on soft matter as intermediate asymptotics
In this paper, it is demonstrated that there is a parallelism between the relational interpretation of Rovelli and the interpretation of soft matter based on intermediate asymptotics. The general interpretation of physics strongly assumes the duality of the observer and the world, and the uniqueness of the world, though the relational interpretation suggests different conclusions: _no properties, no interaction_, and _facts are relative_. These conclusions are seemingly counterintuitive, though this work shows that similar conclusions are found in the interpretation of soft matter based on the concept of intermediate asymptotics. The interpretation of soft matter based on intermediate asymptotics also concludes that the properties are not determined without the scale. This is due to the conclusion of intermediate asymptotics that any formalization and its interpretation are localized by the scale. It is demonstrated that the similarity between the two interpretations originates from their monism of relations. This logical structure is also compared with works in other disciplines. This work reports the insight that the relational interpretation can be a general and fundamental concept, not one applicable only to special cases.
Hirokazu Maruoka
2023-10-10T17:09:35Z
http://arxiv.org/abs/2310.06799v1
# The Relational Interpretations on soft matter ###### Abstract In this paper, it is demonstrated that there is a parallelism between the relational interpretation of Rovelli and the interpretation of soft matter based on intermediate asymptotics. The general interpretation of physics strongly assumes the duality of the observer and the world, and the uniqueness of the world though the relational interpretation suggested different conclusions: _no properties, no interaction_, and _facts are relative_. These conclusions are seemingly counterintuitive, though this work shows that similar conclusions are found in the interpretation of soft matter based on the concept of intermediate asymptotics. The interpretation of soft matter based on intermediate asymptotics also concludes that the properties are not determined without the scale. This is due to the conclusion of intermediate asymptotics that any formalization and its interpretation are localized by the scale. It is demonstrated that the similarity between the two interpretations originated from its monism of relations. This logical structure is also compared with the works in other disciplines. This work reports the insight that the relational interpretation can be a general and fundamental concept, not the one applicable to special cases. ## 1 Introduction Physics is the most fundamental scientific discipline concerned with the nature and properties of matter and energy[1; 2]. Originating from the Greek word for 'nature', it has been the study of nature, namely the world surrounding us. Physics seeks universal properties about nature, the properties that are independent of time and space. This strong motivation for universality is an outstanding character of physics, and it has become the basis of application sciences including chemistry, and engineering. The study of nature naturally leads to the objectivization of nature. Nature has been dehumanized and mechanized[3]. This process is based on the assumption of the distinguishability of subject and nature. Here a fundamental duality, the subject and the object, observers and the world can be found. If the world is distinguishable from the observer and is isolated, and if the world we investigate is unique, then the properties of that world should be unique as well. This leads to the concept of the uniqueness of the world. These assumptions of the duality and uniqueness of the world constitute the general interpretation of physics. However, a fundamentally different interpretation is proposed by Rovelli[4; 5; 6]. His theory of the Relational Interpretation suggested the interpretation that is beginning from their _relations_. Relations are interactions connecting physical objects. The relations constitute the network in which objects and subjects are equally involved. This leads to counter-intuitive conclusions that are fundamentally different from the general interpretation of physics though this interpretation provides a consistent view on quantum mechanics. Rovelli developed his idea in quantum mechanics though the essential idea with regard to relations can be general. In this paper, the author attempts to support the relational interpretation and to demonstrate that there is an interpretation similar to Rovelli's relational interpretation in soft matter physics. In the field of soft matter, one frequently observes the phenomena in which its property changes depending on the _scale_ of the physical parameters. 
Here we found that the _scale_ plays the same role as _relations_ do in the relational interpretation. This interpretation can be formalized by _intermediate asymptotics_ proposed by Barenblatt[7; 8]. The relational interpretation in the soft matter also reveals the same conclusions as the relational interpretation in quantum mechanics. The property fundamentally depends on the scale in the same way as it depends on the relations. In particular, when the parameters related to subjects are involved, it also leads to the conclusion that facts are fundamentally relative. I will demonstrate that such a similarity between the Relational interpretation on the soft matter and that of quantum mechanics is derived from the fundamental monism of relations in the relational interpretation, which is different from the dualism in the general interpretation of physics. ## 2 The general interpretation of physics Physics is a scientific discipline in which one explores the universality and laws of nature, namely the world. The study of nature has its origins from Greek philosophy though it has undergone great variations up to this day. Its methodology and its style of interpretation have been gradually established. Nature in physics became dehumanized, isolated objects that were supposed to be independent of subjects. The method of physics spontaneously became the established manner of objectivization of nature in which they consider nature as independent of humans, and its law as independent of space and time[9]. Throughout the history of the development of physics, we can find its strategy and that there are some assumptions for the general interpretation of physics. They can be characterized as follows, "the duality of the subject from the object", and "uniqueness of the world". * Assumption 1. _The duality of subject and object that are independent of each other_ : it means that physics always starts from the fundamental duality which are independent, distinguishable from each other; the subject and the object, the observer and the world. One can separate the subjective domain from the objective domain without changing their properties because two elements are independent entities in principle. Therefore, the proposition of physics can be exemplified by the influence of subjects as far as it is concerned about only objects. * Assumption 2. _The uniqueness of the world_ : the world is unique, therefore we can explore universality. Due to the first assumption, the subjective domain and the objective domain are distinguishable without losing their entities. The objective domain is free from individuality and constitutes the world surrounding other subjects. If we eliminated the subjective matters, the description of the world should converge uniquely and the laws or knowledge obtained from local experimentation can be globally applicable. Therefore, the propositions of the objective domain possess universality. The isolation and the objectivization of nature require the assumption 1. The isolation of nature corresponds to the elimination of subjective matters, the human. The exploration of the universality requires the assumption 2. Experimentation provides knowledge of the unique world surrounding us. Therefore, the knowledge of local experimentation can be the global one, which is applicable for other subjects. It makes scientists believe that the properties of matter potentially exist without human, observers. The existence of human beings or subjects is not essential for nature and physics[10]. 
These two assumptions are essential for exploring universality. It seems impossible to explore universality without them. It does not matter whether these assumptions are conscious or unconscious for scientists. These two assumptions are conditions of possibility for exploring the universality. The enterprise of physics in which they pursue the global truth has already been possible by assuming the duality and the uniqueness. It is the general style of interpretation of physics. Scientists attempt to interpret nature based on this interpretation consisting of these assumptions. ## 3 Rovelli's relational interpretation While the general interpretation of physics is based on duality and uniqueness which makes scientists have the interpretation that the world is positive, independent of human being and unique. Rovelli, however, proposed a new interpretation for physics[4; 5; 6]. This new approach had been elaborated while he was pursuing the origin of time, a consistent interpretation of quantum physics. Since Rovelli's relational interpretation was developed in quantum mechanics, it is heavily based on the formalization of quantum mechanics. However, its essential idea is simple and its key concept is _relations_. Here I call the relational interpretation the interpretation based on the relation, which was mainly summarized and described in his book[5]. Then I demonstrate that the similar structure can be found in the theory of intermediate asymptotics in soft matter even if it belongs to classical mechanics. Here I define that the essential idea of the relational interpretation is _to start with relations for interpretation of physics_. Here relations mean any interactions that connect between objects. The important point is that this idea is applied to subjects or observers as well. The relational interpretation considers subjects or observers as an involved part of nature. He emphasized that "the simple observation that scientists as well, and their measurement instruments, are all part of nature[11]." Starting from relations corresponds to starting from a perspective in which the subject is part of nature. Then the world is a network of relations and objects. This idea leads to the following radical conclusions[12]. * Conclusion 1. _There are no properties outside of relations_. It insists that any properties do not exist without interactions. Here the properties of an object are the way in which it acts upon other objects. Unrelated objects and properties do not simply mean unawareness but nonexistence. Generally, we believe that the property potentially is attributed when it does not interact. However, the relational interpretation concludes such a notion is superfluous and misleading. * Conclusion 2. _Facts are relative_. As the first conclusion said, there are no properties outside of relations. This immediately means that it is possible that a property manifests itself for a related observer but it does not for another unrelated observer. Property is relation-dependent, which means that facts for observers are also relative. Rovelli exercised his idea in quantum mechanics and he insists that quantum mechanics describes the way in which one part of nature manifests itself to any other single part of nature. He treats the state of a quantum system as being observer-dependent. The state is the relation between the observer and the system. This interpretation succeeded in eliminating the paradox of quantum theory. 
For example, the relational interpretation shows that the apparent incongruuity on _entanglement_ raised by what seemed like communication at a distance between two entangled objects was due to neglecting the existence of a third object that interacts with both systems. This paradox stems from our dogma that the properties of objects potentially exist even when they do not interact. Comparing the general interpretation of physics, it is easily found that the fundamental difference is the form of its origin. The general interpretation of physics always starts from the dualism of subject and object, observer and nature while the relational interpretation is the monism of relations though relations are the origin of the duality. Relations create elements of the dualism of subjects and objects. This monism simply means the fundamental indistinguishability of subjects and objects. Therefore, this dualism can be seen as relation-dependent and it is relative. Therefore, the second assumption is simply relativized by introducing a relation. If we say that the world is a network of relations and objects, the world is no longer unique for _any_ objects but it is unique within _their own_ network. This is not a contradiction. Note that the subject in the relational interpretation is different from the subjects in any other interpretation of physics. QBism (Quantum-Bayesianism) also introduces the subject to describe the world in the limited information given to the subject[13]. However, the subject of QBism is fundamentally different from the subject in relational interpretation in the point that QBism considers the subject as an entity independent of the objects, and the world. It also starts from the dualism of two elements independent of each other. The subject interacts with the world through the information but this interaction never changes how two elements are. However, the subjects in the relational interpretation reveal a fundamental dependence on others. How it manifests to others can vary depending on the relation. The subjects are no longer in privileged positions that are exempted from the interference of others but are fully involved in the network. This subjectivity indistinguishable from objects is essential on the subject of relational interpretation. In the next section, I will show this based on the framework of intermediate asymptotics. In the framework, the _scale_ plays the same role as the _relation_ in the relational interpretation, which is manifested by a dimensionless number De. The framework is provided by the concept of intermediate asymptotics. ## 4 Intermediate asymptotics and the scale as relation The idea of Rovelli's Relational interpretation is to start from relations. As a result, properties are relative in the sense that they can be different in different network of relations but it is unique in the sense that it is uniquely defined within the network of their relation. Therefore, the uniqueness is _localized by relation_ in the relational interpretation while in the general interpretation, the uniqueness is global, universal for any observers. The property is localized by the relation. In fact, I found that such a logical structure can be found in the concept of intermediate asymptotics as well. An _intermediate asymptotic_ is an asymptotic solution valid in a certain scale range, which has been introduced and formalized by Barenblatt[7; 8; 14; 15; 16; 17; 18; 19]. It can be considered as the formalization of a physical model and its idealization. 
Suppose an arbitrary physical function, \[y=f\left(x,z,t\right) \tag{1}\] of which \(y,x,z,t\) are physical parameters having certain dimensions. Any physical functions such as Eq. 1 can be transformed to a following self-similar solution, \[\Pi=\Phi\left(\eta,\xi\right) \tag{2}\] where \(\Pi=y/t^{\alpha}\), \(\eta=x/t^{\beta},\xi=z/t^{\gamma}\). Their power exponents \(\alpha,\beta,\gamma\) are determined by dimensional analysis or consideration of invariance[20]. According to the recipe by Barenblatt[21], nextly one considers the convergence of \(\Phi\). If \(\Phi\) converges to a finite limit as \(\xi\) goes to zero or infinity, \(\eta\) can be excluded from consideration. For example, if \(\Phi\rightarrow\text{const}\) as \(\xi\ll 1\) then we have an following intermediate asymptotics, \[\Pi=\Phi\left(\eta\right)\ \left(\xi\ll 1\right) \tag{3}\] while Eq. 3 is valid in the range for \(\xi\ll 1\) or \(0<z\ll t^{\delta}\)[22]. Eq. 3 is an intermediate asymptotics as it is valid in a certain scale range for \(\xi\ll 1\). If \(\eta\) also satisfy the same condition, \(\Phi\rightarrow\text{const}\) as \(\eta\ll 1\), then \(\Pi=\text{const}\). Thus we have a following intermediate asymptotics, \(y=\text{const}\ t^{\alpha}\ \left(\eta\ll 1,0<x\ll t^{\beta}\right)\). This is a power-law valid in a local scale. It is also an intermediate asymptotics. Functions of which parameters consist of power law monomials are called self-similar solutions as they are invariant for scale transformations such as \(x\to A^{\alpha}x\ \left(\alpha>0\right)\). Their parameters, \(\eta\), \(\xi\) are called similarity parameters. Any physical functions have physical parameters with dimensions. There is at least one self-similar form invariant for scale-change of dimensions. As has been shown, intermediate asymptotic is an asymptotic expression valid in a local scale range in which similarity variables go to \(0\) or infinity. To demonstrate this concept, the following example is helpful. Imagine that a circle is pictured on the surface of a sphere (See Fig. 1). In this problem, the involved physical parameters are the surface area of circle \(S\), radius of the circle \(r\) and the radius of sphere \(R\). Here we would like to know the scaling behavior between \(S\) and \(r\). Therefore we assume the functional relation as follows: \(S=\Phi(r,R)\). In this case, we attempt to obtain the exact scaling behavior by dimensional consideration. According to dimensional analysis, as the dimension of physical parameters \([S]=L^{2}\), \([r]=L\) and \([R]=L\), selecting \(r\) as a governing parameter of independent dimension, we have the following self-similar function, \[\Pi=\Phi\left(\theta\right) \tag{4}\] where \(\Pi=\frac{S}{r^{2}}\) and \(\theta=\frac{r}{R}\). Eq. 4 suggests that we expect the following scaling relation, \(S\sim r^{2}\), if \(\Pi\) is constant. However, we easily find that this guess depends on the behavior of \(\Phi\). By the geometrical consideration, we can calculate the exact form of \(\Phi\) as follows, \[\Phi\left(\theta\right)=2\pi\frac{1-\mathrm{cos}\theta}{\theta^{2}}. \tag{5}\] To know the behavior of \(\Phi\) in the case in which \(\theta\to 0\), which corresponds to the increase of \(R\) or the decrease of \(r\), Taylor expansion is applied to Eq. (5) then we have, \[\Pi=\Phi\left(\theta\right)\simeq\pi-\frac{\pi}{12}\theta^{2}\cdots+\underset {\theta\to 0}{\longrightarrow}\pi. \tag{6}\] As Eq. 
(6) shows, \(\Phi\) converges to a finite limit \(\pi\), then we have a following intermeidate asymptotics as \(\Pi=\frac{S}{r^{2}}\), \[S=\pi r^{2}\ \left(0<r\ll R\right) \tag{7}\] as far as the asymptotic condition \(\theta\ll 1\), corresponding to \(0<r\ll R\), is satisfied. Note that the scaling law Eq. (7) is valid in the scale range (\(0<r\ll R\)), in which the circle is significantly smaller than the sphere. Therefore, Eq. (7) is an asymptotic expression which is valid in the certain range of variable \(r\). This scaling law formalized _locally_ is an intermediate asymptotic in this problem. Figure 1: A circle of which radius is \(r\) and surface area is \(S\), is illustrated on a sphere of which radius is \(R\). There is a self-similar function \(\Pi=\Phi\left(\theta\right)\) where \(\Pi=S/r^{2}\) and \(\theta=r/R\) to connect these parameters. The plot between \(\Pi\) and \(\theta\) suggests, the dimensionless number \(\Pi\) is not constant in all scale range of parameter \(r\), but dependent on \(\theta\). However, there is a scale range in which \(\Phi\) converges to a finite limit,\(\Phi\rightarrow\pi\) as \(\theta\to 0\), namely \(\theta\ll 1\). In this scale range, there is a scaling law valid in a certain scale range as \(S=\pi r^{2}\ \left(0<r\ll R\right)\), which corresponds to an _intermediate asymptotics_. Every physical problems can be transformed to self-similar solutions as far as they have dimensions. By considering the convergence of \(\Phi\), essential similarity variables can be selected to have the idealized solution effectively and practically. The convergence of \(\Phi\) can be verified by the experimental or simulational results. The exact form of \(\Phi\) is not necessary for this procedure. The important point of this concept is that this process, in which one screens the self-similar variables of \(\Phi\) depending on their convergence, corresponds to the idealization of the problems. More or less, all the physical models involve idealizations such as ignorance of friction force, ignorance of quantum or relativity effect. All these assumptions correspond to the idealizing process of dimensionless function. For example, ideal gas equation can be considered as an intermediate asymptotic valid in the range where the volume of molecules \(b\) and the molecular interaction \(a\) are negligible on the van der Waals equation as follows, \[p=\frac{nRT}{V-nb}-\frac{an^{2}}{V^{2}}\longrightarrow\frac{nRT}{V}\ \ \left(\frac{an^{2}}{V^{2}}\ll p\ll\frac{RT}{b}\right). \tag{8}\] This idealizing scale range is satisfied as far as \(\Pi_{a}=\frac{an^{2}}{pV^{2}}\ll 1\) and \(\Pi_{b}=\frac{pb}{RT}\ll 1\). The concept of intermediate asymptotics suggests that actual problems are fundamentally _local_ for the observers describing the problems. Therefore, the description is relative in the sense that it is locally valid for a certain observer. However, it is unique if the observer belongs to the same locality. It is easy to see that this logical structure is similar to that of the relational interpretation. I will show that intermediate asymptotics reveal similarity with the relational interpretation if the similarity parameter includes the condition of the subject. If the intermediate asymptotics is the representation localized by the condition of a certain subject, the phenomenon is localized by a certain subject. 
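The convergence underlying Eq. (7) is easy to verify numerically. The short sketch below (illustrative only) evaluates \(\Phi(\theta)=2\pi(1-\cos\theta)/\theta^{2}\) and compares the exact spherical-cap area with the intermediate asymptotics \(S=\pi r^{2}\) as \(r/R\) becomes small.

```python
import numpy as np

def Phi(theta):
    """Dimensionless function of Eq. (5): Phi = S / r^2 with theta = r / R."""
    return 2.0 * np.pi * (1.0 - np.cos(theta)) / theta**2

R = 1.0                                   # sphere radius (arbitrary units)
for r in (0.5, 0.1, 0.01):
    theta = r / R
    S_exact = Phi(theta) * r**2           # exact cap area, S = Phi(theta) * r^2
    S_asym = np.pi * r**2                 # intermediate asymptotics, valid for r << R
    print(f"r/R = {theta:5.2f}:  Phi = {Phi(theta):.6f},  "
          f"S_exact = {S_exact:.6e},  pi*r^2 = {S_asym:.6e}")
# Phi(theta) -> pi as theta -> 0, so S -> pi r^2 in the range 0 < r << R.
```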
### The pitch drop experiment The terms of soft matter is introduced by Pierre-Gilles de Gennes, which denotes the materials characterized by complexity and flexibility[23; 24]. It deals with the polymer, the dispersed solution, the emulsion. In this field, one occasionally encounters the materials or phenomena which drastically change their behaviour depending on the scale of the physical parameters. The pitch drop experiment is a good example of this. I will show how the pitch drop experiment is an intermediate asymptotics localized by the time of observation. The pitch drop experiment is an experiment to observe the long time-scale behavior of pitch[25]. It is performed by pouring the heated sample of the pitch into a sealed funnel, then cutting the neck of the funnel to let it flow. The solid pitch is so heavily viscous material that any flow of the pitch is not observed for months or years. For 75 years, however, seven drops were observed. This experiment was intended to teach that any solid looking materials can behave like a fluid on a larger scale of time. One of the studies won Ig-Nobel prize. This experiment is occasionally treated as a trivia but it seems that it is posing a fundamental question of physics. In general, the physical properties, whether it is solid or fluid, are totally considered to be objective properties, which do not depend on time. However, the pitch drop experiment raises a question to this point. If one is asked a question whether the pitch is solid or fluid when months or years have passed since the pitch was prepared, one may answer that it is solid because any fluidic behavior is not objectively observed. However, if one is asked the same question after 100 years, the answer will be different. This seems contradictory to the fact that physical properties are objective. If one asks the question: Is the pitch fluid or solid? how can we answer this question? It seems that this experiment suggests that any physical materials and properties are "relative" on the time-scale, therefore we have to consider the material properties with the scale. Let us formalize this problem by intermediate asymptotics to see how the behavior of pitch varies depending on time. Assuming that the flow through stem obeys Poiseuille's law, the rate of flow of the pitch out of the funnel is described as follows, \[\frac{dV}{d\tau}=\frac{\pi d^{4}\rho g}{128\mu}\left(1+\frac{h}{l}\right) \tag{9}\] where \(V\) is the volume of the pitch, \(\tau\) is the time, \(\rho\) is the density of pitch, \(\mu\) is a viscous coefficient, \(d\), \(l\) are the diameter and the length of the stem, \(h\) is the depth of pitch in the funnel. These parameters belong to the material itself. However, as the qualitative discussion shows, the property of the material varies depending on the time scale of the observation. Now let us introduce the time scale of observation \(T\) and see how the behavior reveals depending on \(T\), not \(\tau\). It corresponds to seeing the phenomenon in different time-scales, and can be realized by the method of multiple scales[26]. Assuming a different time scale of the observation \(T_{1}=\tau\), \(T_{2}=\varepsilon\tau\) where \(\varepsilon=\frac{T_{2}}{\tau}\ll 1\). Here we have two different time scales: \(T_{1}\) is the same time scale as the actual relaxation time, and \(T_{2}\) is a much shorter time scale than its actual relaxation time. 
Assuming these time parameters, \(T_{1}\) and \(T_{2}\), we have \[\left(\frac{\partial}{\partial T_{1}}+\varepsilon\frac{\partial}{\partial T_{ 2}}\right)V=\frac{\pi d^{4}\rho g}{128\mu}\left(1+\frac{h}{l}\right). \tag{10}\] From this, in the time scale \(O\left(1\right)\) we have \[\frac{\partial V}{\partial T_{1}}=\frac{\pi d^{4}\rho g}{128\mu}\left(1+\frac {h}{l}\right) \tag{11}\] which corresponds to Eq. (9). However, in the scale of \(O\left(\varepsilon\right)\), we have \[\frac{\partial V}{\partial T_{2}}=0, \tag{12}\] which means that no flow is observed. Here we introduce the Deborah number \(\mathrm{De}=\frac{\tau}{T}\), which is defined as the ratio between the relaxation time \(\tau\) and the observation time \(T\). In the time scale in which \(T=T_{2}\) or \(\mathrm{De}\gg 1\), which corresponds to the observation of the pitch on a time scale much smaller than the actual relaxation, the behavior is described by Eq. 12, which means that no flow is observed and it can be considered as solid behavior. However, in the time scale of \(T=T_{1}\) or \(\mathrm{De}=1\), which corresponds to the observation of the pitch on the same time scale as the actual relaxation, the behavior is described by Eq. 11. It corresponds to Eq. 9 and the fluid behavior is revealed. Therefore the entire behavior can be described as follows, \[\frac{dV}{dT}=\begin{cases}0\ \left(\mathrm{De}\gg 1\right)\\ \frac{\pi d^{4}\rho g}{128\mu}\left(1+\frac{h}{l}\right)\ \left(\mathrm{De}\sim 1 \right).\end{cases} \tag{13}\] These two asymptotic expressions _locally_ defined by De are intermediate asymptotics. In this formulation, the result may seem trivial from the viewpoint of the multiple-scale method, since there is no term of \(O\left(\varepsilon\right)\). One could argue that there is no need to apply the multiple-scale method. However, we must remember that the lesson of the pitch drop experiment is how the property of the material changes depending on the time scale of the observation \(T\). To describe this lesson, it is necessary to consider the observation time \(T\), and the application of the multiple-scale method is essential to describe the different behaviors at different time scales. The consideration of \(O\left(\varepsilon\right)\) is essential to show the solid behavior; there is no flow in the solid regime, as \(\frac{dV}{dT}=0\). ### Viscoelastic behaviors on the dynamical contact Distinct properties which depend on the time scale can be seen not only in the pitch drop experiment but also in various phenomena dealt with in the area of rheology or soft matter physics. Silly Putty is a toy which entertains through its behavior depending on the time scale of the loading force. If one throws the Silly Putty against the wall, it will bounce like an elastic ball, though it will flow and spread when settled on the floor. This is exactly a similar case. If one throws the ball onto the wall, the loading time \(T\) is smaller than the relaxation time \(\tau\), so the Deborah number is \(\mathrm{De}\gg 1\). If one settles it on the ground, the force is applied over a long time \(T\), which decreases De and reveals the fluidic behavior. Recently the author described such a functional change for the dynamical impact on a dusted viscoelastic board[19]. Here I show that such a variation of response arises from the Maxwell viscoelastic model. The Maxwell model is a model for viscoelastic materials consisting of a unit in which a spring and a dashpot are serially connected. 
The behavior of a material to which a deformation \(\epsilon\) is applied can be described by the following differential equation, \(\frac{\mu}{E}\frac{d\sigma}{dt}+\sigma=\mu\frac{d\epsilon}{dt}\), where \(\mu\) is the viscous coefficient originating from the dashpot and \(E\) is the elastic modulus derived from the spring part. It was found that the variation of response arises when the material receives a constant rate of deformation, \(\frac{d\epsilon}{dt}=\text{const}\). In this case, the stress \(\sigma\) exerted on the material is described as follows: \[\sigma=\mu\frac{d\epsilon}{dt}\left[1-\exp\left(-\frac{Et_{c}}{\mu}\right)\right] \tag{14}\] where \(t_{c}\) is the contact time. Here the inverse Deborah number is \(Z=1/\text{De}=\frac{Et_{c}}{\mu}\). To visualize the transition of the character of the material, the following transformation takes place. The solid behavior is here characterized by Hooke's law, which is expressed by \(\sigma=E\epsilon\). Here one can define a dimensionless number representing the Hookean behavior, namely solid behavior, as \(\Pi=\frac{\sigma}{E\epsilon}\). Since a constant deformation rate is assumed here, the contact time is estimated as \(t_{c}=\epsilon/\frac{d\epsilon}{dt}\). Then \(Z=\frac{E\epsilon}{\mu}/\frac{d\epsilon}{dt}\). Thus, Eq. (14) is expressed as: \[\Pi=\frac{1}{Z}\left[1-\exp\left(-Z\right)\right]. \tag{15}\] Here we have a self-similar solution \(\Pi=\Phi\left(Z\right)\). To see the behavior for \(Z\ll 1\) and beyond, let us examine the asymptotic behavior of Eq. (15) by applying a Taylor expansion as follows, \[\Pi=\Phi\left(Z\right)\simeq 1-\frac{1}{2}Z\cdots\underset{Z\ll 1}{\longrightarrow}1. \tag{16}\] As the expansion of \(\Phi\) shows, \(\Phi\) converges to a finite limit \(\Phi\to 1\) as \(Z\ll 1\), thus \(\Pi=\frac{\sigma}{E\epsilon}=1\). The behavior is then totally characterized by the Hookean behavior for \(Z\ll 1\). In the case where \(Z\) is not small enough, a different behavior starts to appear, as \(\Pi=1-\frac{1}{2}Z\). Thus, depending on \(Z\), the following behaviors appear, \[\sigma=\begin{cases}E\epsilon\;\left(Z\ll 1\right)\\ E\epsilon-\frac{1}{2}E^{2}\epsilon^{2}/\mu\frac{d\epsilon}{dt}\;\left(Z\sim 1 \right).\end{cases} \tag{17}\] These are intermediate asymptotics _localized_ by \(Z\) in the contact. Such a variation of response is observed in the dynamical impact of a solid sphere onto a PDMS viscoelastic board, where the scaling behavior also varied[19]. Such a variation of the scaling law is called a crossover of scaling laws. The scaling law \(t_{c}=\epsilon/\frac{d\epsilon}{dt}\) was experimentally confirmed, and the author successfully explained this crossover of scaling laws by the following self-similar solution, \[\Psi=\frac{2}{3}\frac{Z}{[1-\exp{(-Z)}]} \tag{18}\] where \(\Psi=\frac{\delta_{m}^{3}E\phi}{R^{2}h\rho v_{i}^{2}}\) and \(Z=\frac{E\delta_{m}}{\mu v_{i}}\), and \(\delta_{m}\) is the maximum deformation, \(\phi\) is the fraction of contact, \(\rho\) is the density of the impactor, \(R\) is the radius of the impacting sphere, \(h\) is the thickness of the PDMS board, and \(v_{i}\) is the impact velocity. By applying the singular perturbation method to this self-similar solution, one has the following asymptotic solution, \[\delta_{m}=\frac{E\rho hR^{2}}{54\phi\mu^{2}}+\left(\frac{E\rho^{2}R^{4}h^{2} }{486\phi^{2}\mu^{3}}\right)^{\frac{1}{3}}v_{i}^{\frac{1}{3}}+\left(\frac{2hR ^{2}\rho}{3\phi E}\right)^{\frac{1}{3}}v_{i}^{\frac{2}{3}} \tag{19}\] as \(\varepsilon=\frac{ER}{\mu v_{i}}\to 0\). 
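The \(Z\)-dependence expressed by Eqs. (15)–(17) can be illustrated with a few lines of code (a sketch for illustration, not taken from Ref. [19]); it evaluates the dimensionless response \(\Pi=\Phi(Z)=[1-\exp(-Z)]/Z\) and compares it with its small-\(Z\) expansion.

```python
import numpy as np

def Pi_of_Z(Z):
    """Dimensionless Maxwell response of Eq. (15): Pi = sigma / (E * eps)."""
    return (1.0 - np.exp(-Z)) / Z

for Z in (1e-3, 1e-1, 1.0, 5.0):
    exact = Pi_of_Z(Z)
    first_order = 1.0 - 0.5 * Z          # truncated expansion of Eq. (16)
    print(f"Z = {Z:6.3f}:  Pi = {exact:.4f},  1 - Z/2 = {first_order:.4f}")
# For Z << 1 the response is Hookean (Pi -> 1, sigma ~ E*eps), while for Z ~ 1
# the viscous correction of Eq. (17) becomes comparable to the elastic term.
```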
As Eq. 19 shows, a high impact velocity and the impact of a smaller sphere decrease \(Z\); the impact is then elastic and \(\delta_{m}\sim v_{i}^{2/3}\) is dominant. On the other hand, a low impact velocity and the impact of a larger sphere increase \(Z\); the impact is then viscoelastic and \(\delta_{m}\sim v_{i}^{1/3}\) is dominant. Therefore, depending on \(Z=\frac{E\delta_{m}}{\mu v_{i}}\), the scaling law changes as follows, \[\delta_{m}\sim\begin{cases}v_{i}^{\frac{2}{3}}&(Z\ll 1)\text{ Elastic impact}\\ v_{i}^{\frac{1}{3}}&(Z\sim 1)\text{ Viscoelastic impact}.\end{cases} \tag{20}\] These two are intermediate asymptotics localized by \(Z\) in the dynamical contact of Maxwell viscoelastic materials. In this case, we find that even scaling laws vary depending on the scale as _relation_. This insight suggests that the property of a material cannot be determined without consideration of the scale. Again we see that the property of the material, which we tend to consider as an objective property, is localized by the scale. ## 5 The relational interpretation of soft matter as intermediate asymptotic Both cases revealed that the physical properties cannot be determined without considering the scale. This scale is expressed by the dimensionless number De. De measures the scale. In the pitch drop experiment, the subject is clearly introduced as the observation time \(T\). \(T\) is essential to consider how the property of the pitch changes depending on the scale of the observation time. Since the essence of the lesson of the pitch drop experiment was how the observation time scale changes the property, the introduction of \(T\) into the problem was essential. However, one must note that it is the ratio of the relaxation time \(\tau\) and the observation time \(T\) that forms the relation in the pitch drop experiment (see Eq. 21). De is the relation of the pitch. The reason why the pitch drop experiment seems to reveal bizarre behavior is simply that the pitch has a large, but not too large, relaxation time \(\tau\), so that its fluidity can be observed. Fundamentally, all materials should possess the same properties. Glass has a relaxation time of hundreds of thousands of years[27]. Conversely, fluid water reveals solid behavior under an extremely instantaneous load of force. Soft materials are nothing but materials whose intermediate behavior is apparent at _our_ familiar scale. De provides different intermediate asymptotics as in Eq. 13, \[\mathrm{De}=\frac{\tau}{T}=\frac{\mathrm{relaxation\ time}}{\mathrm{ observation\ time}}. \tag{21}\] The viscoelasticity of the contact differs from the pitch drop experiment in that the scale does not act in a passive way like the observation. However, in this case, the scale appears as the contact time \(t_{c}\). \(t_{c}\) is an active parameter in the sense that the contact itself is an active process (for the impactor). However, in this case, the scale of \(t_{c}\) changes the response properties of the material, as was seen in the pitch drop experiment for \(T\). When \(t_{c}\) is short, the behavior is totally elastic, which is the property of a solid. When \(t_{c}\) is larger, the viscous behavior starts to appear, as the viscous contribution is proportional to the duration; the behavior then changes to reveal the mixed properties of elasticity and viscosity. 
In this case, the subject is not directly the observer but it appears as the impactor which is one part of nature and it contacts with the viscoelastic surface which is another part of nature. Here the Deborah number (Eq.22) plays the same role of relations to give different intermediate asymptotics for Eq. 17 and Eq. 20, \[\mathrm{De}=1/Z=\frac{\tau}{t_{c}}=\frac{\mu v_{i}}{E\delta_{m}}=\frac{ \mathrm{relaxation\ time}}{\mathrm{contact\ time}}. \tag{22}\] In both cases, the interaction or relation is clearly introduced by a single dimensionless number De, then it is related to the properties. Here it is found a similar proposition of Rovelli; _No interaction, no properties_. In our case, it would be _no scale, no properties_. These cases also show that the phenomena cannot be described from the idea that the observer and nature are distinguishable. It can be described only when both are involved. Thus the description is completed in _complicit_ manner in which the subject and the object are equally involved. It does not mean that the description is originated from either the subject or the object but it is originated from _relation_ in which two are fundamentally, indistinguishably involved. This complicity is essential for the relational interpretation on the soft matter. It also concludes that the world cannot be unique in the sense that the world is unique for any subjects or observers. It is because the properties of the pitch and the viscoelastic materials cannot be determined without the interactions or relations between the subjects and the observers. This finally leads to one of his conclusions. _The world is made of events, not things[28]_. In his interpretation, the difference between _things_ and _events_ is _that things persist in time; events have a limited duration[29]_. This paper has shown how the limited duration of the observation of time changes the properties of the pitch. If the limited duration of the observation time is smaller enough than the relaxation time, \(T\ll\tau\), the pitch behaves as solid. If the limited duration of the observation time is closer to the relaxation time, \(T\sim\tau\), then the pitch starts to reveal the fluid behavior. As for the dynamical contact with viscoelastic boards, the time duration of the contact time plays the same role. If the limited duration of the contact time is much smaller than the relaxation time, \(t_{c}\ll\tau\), the material will respond by the elastic behavior. When the time duration of the contact time is closer to the relaxation time as \(t_{c}\sim\tau\), the response changes and the scaling law also changes. Considering how the time duration changes the nature of the material, we have to conclude that the world is made of events, not things. From this importance of time-duration, the relational interpretation on soft matter also concludes that _facts are relative_. If observers have their own observation time scale, the observational properties can be different from observers. For example, if there is a creature with a life span of 1 ms, water must be solid for that creature, while it is fluid for us. The fact is different depending on the time scale. Here we find that the intermediate asymptotics is another version of _the way in which one part of nature manifests itself to any other single part of nature_. If its self-similar solution involves the similarity parameters including the scale of the subjet. 
The intermediate asymptotics, which is localized by the limit of its similarity parameter, is localized by the subject as well. This means that subjectivity essentially contributes to the description and formalization of the problem, even though it had been hidden. While we have found an obvious similarity between the relational interpretation of soft matter and that of quantum mechanics, the two interpretations are expressed in different ways. Relational Quantum Mechanics is formalized within quantum mechanics, while the relational interpretation of soft matter is formalized through intermediate asymptotics. Relativity is introduced through scale-dependence in the relational interpretation of soft matter, while Relational Quantum Mechanics introduces relativity through observer-dependence. However, the fundamental similarity is their monism. The relational interpretation is a monism of relations, while the general interpretation is a dualism of subject and object. The dualism is static in the sense that its elements have already been distinguished. Monism is dynamic in the sense that its elements are in the process of being generated. Relations are the origin of the fundamental dualism of subject and object, which constitutes facts and uniqueness. Uniqueness is realized by this dualism, but this dualism is relativized by the monism of relations. Relations generate these two elements and can change them. Thus the relational interpretation includes the essence of the general interpretation of physics and extends its concept (see Fig. 2). The Relational interpretation reveals the fundamental incompleteness of the process, which relativizes facts and their uniqueness. Such a fundamental structure can be found in both Relational interpretations. Dorato also discussed the monistic aspect of relational quantum mechanics[30]. The context of that monism is different, although the monism I define here is closer to _priority monism_, which admits the existence of parts but holds at the same time that the whole is prior to its parts. In this paper, I discussed monism and dualism in terms of the _degree of participation of subjects and objects_ in the interpretation[31]. The author stressed the fundamental participation of subjects in describing the phenomenon based on intermediate asymptotics. We cannot fundamentally eliminate the participation of subjects, because the scale, which corresponds to relations, cannot be determined by the objects _alone_ but is determined by the _relation_ between subjects and objects. Here we are forced to recognize the _complicit relation_ of subjects in the interpretation. These are the major points on the monistic side of the relational interpretation of this paper. Similar instances, in which relations generate and relativize the duality of subject and object, are discussed in other disciplines. Jakob von Uexküll described the environment native to organisms as the "Umwelt", which is generally translated as "environment" in English[32]. Organisms possess different organs of perception. When organisms are considered as unique systems of organs, the environment involving the subject of the organism is fundamentally different, not unique and objective. Such an environment inherent to the organism was defined as the "Umwelt".
This concept is also quite similar to the relational interpretation in that different relations, that is, different systems of organs of perception, lead to fundamentally different environments, and it rejects a unique, objective world. Here the subject is fundamentally dependent on the "Umwelt" and the two elements are fundamentally entangled.

Figure 2: Monistic structure of the relational interpretation. Relations between subjects and objects are the origin of the subjects and objects in the dual structure.

A German physician and physiologist, Viktor von Weizsäcker, also developed a similar idea in his theory of the "Gestaltkreis". Weizsäcker also observed that the subject of a biological system is fundamentally entangled (Verschränkung) with the environment, the "Umwelt", which can be illustrated as a circular relation. The properties of objects and subjects originate from this circular relation, which corresponds to _relations_ in the Relational interpretation[33]. Bin Kimura, a Japanese psychiatrist, also emphasized _relations_, "_aida_" in Japanese, which should be translated as "_betweenness_", to question the condition of the possibility of having a 'self' naturally established and to explore mental diseases including schizophrenia[34; 35; 36]. According to his theory, "_something_" is "_aida_" or "_betweenness_", and it is not that "_betweenness_" arises between individuals; on the contrary, individuals arise on both sides of "_betweenness_"[36]. Here we can see a monism of "betweenness" for individuals, which is similar to the monism of the relational interpretation. In the relational interpretation, it is not that _relations_ arise between subjects and objects; rather, subjects and objects arise on both sides of _relations_. As these theories suggest, the relations are expressed in a circular form in which the two elements of the dualism are entangled. Heidegger also criticized the assumed explicitness of the unique, detached, objective world in which we are involved by posing a question about the nature of being. The Relational interpretation is no longer an epistemological theory but an ontological theory, in the sense that it does not simply assume any existence without any relation[37; 38]. Heidegger's questions and interests are always directed not at _being_ but at _the way of being_. The relation is the way in which one subject or object manifests itself to others. The Relational interpretation is close to Heidegger's interest and approach.

## 6 Conclusion

In this paper, the fundamental assumption of general physics was characterized, and it was demonstrated how the Relational interpretation by Rovelli relativized these assumptions and led to different conclusions. The Relational interpretation localizes uniqueness by relations. I then showed that this logical structure can be found in the concept of intermediate asymptotics and how the phenomena of soft matter reveal similar conclusions. Therefore, the phenomena of soft matter can also be relationally interpreted through the concept of intermediate asymptotics. The similarity between the two formalizations is thought to derive from their common monism of origin. The relational interpretation of quantum mechanics and the relational interpretation of soft matter both begin from the relation. The former starts from the interaction and the latter starts from the scale. Both formalizations include the monistic structure, which relativizes the interpretation under different relations. This structure is also found in other fields.
This work insists that the relational interpretation can be a more general and fundamental concept, not a concept for special cases.

Author Contribution Statement. The author confirms sole responsibility for the following: study conception, design, and manuscript preparation.

Acknowledgments. The author wishes to thank J. Aames, K. Koseki, M. Murase, A. Koide, Y. Hirai, and the participants of the Aporia seminar, K. Fukao, K. Shimizu, Y. Miwaki, Y. Maruhashi, and Y. Niwa, for their fruitful discussions. This work was motivated by an online seminar given by the author for the Time and Contingency Research Group, which was organized by R. Ohmaya and I. Motoaki ([https://www.youtube.com/watch?v=3mcycHS4xFg](https://www.youtube.com/watch?v=3mcycHS4xFg)).
2307.14752
Dynamical theory of single-photon transport through a qubit chain coupled to a one-dimensional nanophotonic waveguide
We study the dynamics of a single-photon pulse travelling through a linear qubit chain coupled to continuum modes in a one-dimensional (1D) photonic waveguide. We derive a time-dependent dynamical theory for qubit amplitudes and for transmitted and reflected spectra. We show that the requirement for the photon-qubit coupling to exist only for positive frequencies can significantly change the dynamics of the system. First, it leads to an additional photon-mediated dipole-dipole interaction between qubits which results in the violation of the phase coherence between them. Second, the spectral lines of transmitted and reflected spectra crucially depend on the shape of the incident pulse and the initial distance between the pulse center and the first qubit in the chain. We apply our theory to one-qubit and two-qubit systems. For these two cases, we obtain the explicit expressions for the qubits' amplitudes and the photon radiation spectra as time tends to infinity. For the incident Gaussian wave packet we calculate the line shapes of transmitted and reflected photons.
Ya. S. Greenberg, O. A. Chuikin, A. A. Shtygashev, A. G. Moiseev
2023-07-27T10:21:16Z
http://arxiv.org/abs/2307.14752v2
Dynamical theory of single-photon transport through a qubit chain coupled to a one-dimensional nanophotonic waveguide. Beyond the Markovian approximation ###### Abstract We study the dynamics of a single-photon pulse travelling through a linear qubit chain coupled to continuum modes in a one-dimensional (1D) photonic waveguide. We derive a time-dependent dynamical theory for qubits' amplitudes and for transmitted and reflected spectra. We show that the requirement for the photon-qubit coupling to exist only for positive frequencies can significantly change the dynamics of the system. First, it leads to the additional photon-mediated dipole-dipole interaction between qubits which results in the violation of the phase coherence between them. Second, the spectral lines of transmitted and reflected spectra crucially depend on the shape of incident pulse. We apply our theory to one-qubit and two-qubit systems. For these two cases we obtain the explicit expressions for the qubits' amplitudes and for the photon radiation spectra as time tends to infinity. For the incident Gaussian wave packet we calculate the line shapes of transmitted and reflected photons. ## I Introduction Manipulating the propagation of photons in a one-dimensional waveguide coupled to an array of two-level atoms (qubits) may have important applications in quantum devices and quantum information technologies [1; 2; 3; 4]. Quantum bits can be implemented with a variety of quantum systems, such as trapped ions [5], photons [6; 7; 8], and quantum dots [9; 10]. In particular, superconducting qubits [11; 12] have emerged as one of the leading candidate for scalable quantum processor architecture. Transmission of a single photon through an array of two-level atoms embedded in a 1D open waveguide has been extensively studied both theoretically [13; 14; 15] and experimentally [16; 17; 18]. Most of theoretical calculations of the transmitted and reflected photon amplitudes in a 1D open waveguide with the atoms placed inside have been performed within a framework of the stationary theory in a configuration space [19; 20; 21; 22] or by alternative methods such as those based on Lippmann-Schwinger scattering theory [23; 24; 25], the input-output formalism [26; 27; 14], the non-Hermitian Hamiltonian [28], and the matrix methods [29; 30]. Even though the stationary theory of the photon transport provides a useful guide to what one would expect in real experiment, it does not allow for a description of the dynamics of a qubit excitation and the evolution of the scattered photon amplitudes. In practice, the qubits are excited by the photon pulses with finite duration and finite bandwidth. Therefore, to study the real time evolution of the photon transport and atomic excitation the time-dependent dynamical theory was developed [31; 32; 33; 34; 35]. The theory relies on two assumptions. The first one is the Wigner-Weisskopf approximation in which the rate of spontaneous emission to the guided mode is much less than the qubit frequency, \(\Gamma(\omega)\ll\Omega\). Therefore, the decay rate \(\Gamma\) is assumed frequency independent and is taken at the resonant frequency \(\Omega\), \(\Gamma(\omega)=\Gamma(\Omega)\equiv\Gamma\). The second assumption is more tricky. It is assumed that the photon-mediated coupling \(g(\omega)\) may be extended to negative frequencies which allows to move the lower bound of some frequency integrals to minus infinity. 
In this case, the transmitted and reflected photon amplitudes become proportional to the spectral density of the incident pulse, \(\gamma_{0}(\omega)\)[32]. It is believed that the continuation of \(g(\omega)\) to negative frequencies is justified beyond the rotating wave approximation (RWA) by accounting for counter-rotating terms (see the Supplement in [36]). However, the extension to negative frequencies generally relies on the assumption that the non-RWA contribution is negligible, which is not always the case. Moreover, it is not justified by physical arguments: because the continuum starts at \(\omega=0\), the density of states is zero for \(\omega<0\). Therefore, \(g(\omega)=0\) for \(\omega<0\); that is, the coupling does not exist at negative frequencies [37]. Thus, its continuation to negative frequencies makes no sense. On the other hand, we may consider this continuation as a purely mathematical trick, which can be justified if the negative-frequency integral provides only a small correction. This is indeed the case if the distance between qubits \(d\) exceeds the qubit wavelength \(\lambda\)[38]. However, as is shown in this paper, if \(d<\lambda\) the discrepancy can be significant. Another consequence of "no continuation" of \(g(\omega)\) to negative frequencies is a more complicated dependence of the photon radiation on the shape of the incident pulse: the transmitted and reflected photon amplitudes are no longer proportional to \(\gamma_{0}(\omega)\). From the point of view of device applications, a control pulse generator should be placed as close as possible to the measured qubit circuitry, for example, on the same chip as a superconducting qubit at millikelvin temperatures [39]. Here we show that such an arrangement leads to significant modifications of the photon radiation spectra.
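For orientation only, the following Python sketch evaluates the simplest Markovian, stationary baseline against which such corrections are usually discussed: a Gaussian incident spectral density \(\gamma_{0}(\omega)\) filtered by the textbook single-qubit transmission and reflection amplitudes of waveguide QED, \(t(\omega)=(\omega-\Omega)/(\omega-\Omega+i\Gamma/2)\) and \(r(\omega)=t(\omega)-1\). This is not the dynamical theory developed in the paper, and all numerical values (qubit frequency, decay rate, pulse width) are hypothetical.

```python
# Markovian / stationary baseline only (not this paper's beyond-Markov theory):
# transmitted and reflected spectra of a single qubit for a Gaussian incident packet.
import numpy as np

Omega = 2 * np.pi * 5.0      # qubit transition frequency (hypothetical units)
Gamma = 2 * np.pi * 0.02     # radiative decay rate into the waveguide (hypothetical)
sigma = 2 * np.pi * 0.01     # spectral width of the incident Gaussian packet (hypothetical)
omega_c = Omega              # packet carrier frequency, taken on resonance

omega = np.linspace(Omega - 10 * Gamma, Omega + 10 * Gamma, 2001)

# normalized Gaussian spectral density of the incident single-photon packet
gamma0 = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-(omega - omega_c) ** 2 / (4 * sigma**2))

# standard stationary single-qubit amplitudes (textbook waveguide-QED result)
t_amp = (omega - Omega) / (omega - Omega + 1j * Gamma / 2)
r_amp = -1j * (Gamma / 2) / (omega - Omega + 1j * Gamma / 2)

S_trans = np.abs(t_amp * gamma0) ** 2    # transmitted spectrum in this approximation
S_refl = np.abs(r_amp * gamma0) ** 2     # reflected spectrum in this approximation

dw = omega[1] - omega[0]
print("transmitted fraction ~", np.trapz(S_trans, dx=dw))
print("reflected fraction   ~", np.trapz(S_refl, dx=dw))
```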
2306.03107
Eliminating the Second-Order Time Dependence from the Time Dependent Schrödinger Equation Using Recursive Fourier Transforms
A strategy is developed for writing the time-dependent Schr\"{o}dinger Equation (TDSE), and more generally the Dyson Series, as a convolution equation using recursive Fourier transforms, thereby decoupling the second-order integral from the first without using the time ordering operator. The energy distribution is calculated for a number of standard perturbation theory examples at first- and second-order. Possible applications include characterization of photonic spectra for bosonic sampling and four-wave mixing in quantum computation and Bardeen tunneling amplitude in quantum mechanics.
Sky Nelson-Isaacs
2023-06-04T17:47:34Z
http://arxiv.org/abs/2306.03107v5
# Decoupling recursion in the time dependent Schrodinger equation using Fourier transforms ###### Abstract A strategy is developed for writing the time-dependent Schrodinger equation (TDSE), and more generally the Dyson Series, as a convolution equation using recursive Fourier transforms, thereby decoupling the second-order integral from the first without using the time ordering operator. The energy distribution is calculated for a number of standard perturbation theory example at first- and second-order. Possible applications include characterization of photonic spectra for bosonic sampling in quantum computation and Bardeen tunneling amplitude in quantum mechanics. ## I Introduction The Time-Dependent Schrodinger Equation (TDSE), although unsolvable in exact terms, is often approached through various perturbative methodologies, such as those pioneered by Rayleigh-Schrodinger [1], Dirac [2], Dyson [3], Lippmann-Schwinger [4], WKB [5], Feynman, [6][7][8] and others [9]. Under a weak perturbation the TDSE solution can be written in terms of the known eigenvectors of the time independent Schrodinger equation (TISE), which results in a Dyson series-an infinite recursion of coupled non-linear time integrals. The standard approach for decoupling uses the time-ordering operator. Numerous methods exist to integrate the Schrodinger equation, including matrix [5], slicing [10], and predictor-corrector/finite-difference methods (e.g. Runge-Kutta) [11], the Hartree-Fock method [12][13], and the high-energy scattering approximation [14]. In some techniques, the Fourier transform is a trick used to calculate derivatives efficiently. In other techniques, such as split-step, multi-slice, or Fourier space filtration,[15][16] the Fourier transform plays a more fundamental role relating to the Fourier dual spaces. The split step method (or Fourier space filtration or multi-slice methods) is frequently applied in the paraxial regime [17]. In this study, we employ a similar Recursive Fourier Transform (RFT) technique in a novel way to decouple the second-order integral from the first, bypassing the need to invoke time ordering. Thus, we can represent TDSE, plausibly to any order, as a convolution equation by invoking the Convolution Theorem. This presentation aims to offer an efficient second-order analytical solution to the TDSE, while also emphasizing the methods utility as a first principle.[18] By efficiently calculating the spectral response of a time-limited perturbation as is done here, one may be able to improve experimental results in quantum computing (bosonic sampling [19, 20, 21, 22, 23, 24]) and quantum mechanics (quantum tunneling [25]). This method may also hold pedagogical promise in physics education by expanding the range of calculable use cases for the TDSE. A detailed breakdown of the paper is as follows: first-order RFT technique introduction to familiar use cases (Section II), new second-order RFT decoupling technique (Section III), mathematical property examination (Section IV), experimental and theoretical applications (Section V). An appendix (Section A) and supplemental sections on numerical accuracy of the method (Section S1) and interpretation of the results (Section S2) are also included. ## II Formulating the time dependent Schrodinger equation through recursive Fourier transforms (RFT) We provide a brief overview of the standard formulation of TDSE, and then introduce the new approach. 
When the Hamiltonian is constant in time, the time _independent_ Schrodinger equation (TISE), \[\frac{\partial\Psi(x,t)}{\partial t}=\hat{H}\Psi(x,t), \tag{1}\] is separable between the time and space variables and can be solved exactly. The solutions are called stationary states \(\psi(x,t)=\psi_{n}(x,t=0)\exp\left(i\omega_{n}t\right)\). To solve the time _dependent_ equation (TDSE), where \(\hat{H}=\hat{H}(t)\), Dirac[2] began with the integral form of Eq. 1, \[\left|\Psi(t^{\prime})\right\rangle=\int_{t_{0}}^{t^{\prime}}dt_{1}\hat{H}(t_{1})\left|\Psi(t_{1})\right\rangle, \tag{2}\] then proposed a recursive method to evaluate \(\left|\Psi(t^{\prime})\right\rangle\). For a small time-dependent perturbation potential \(V(t)\), Dirac described the system as a time-evolving superposition of the (known) eigenstates of the unperturbed Hamiltonian. The wavefunction given by Dirac's method [2] is: \[\begin{split}\left|\tilde{\psi}(t)\right\rangle&=\sum_{n}c_{n}(t)\left|n\right\rangle\\ &=\sum_{n}\left(c_{n}^{(0)}+c_{n}^{(1)}(t)+c_{n}^{(2)}(t)+...\right)\left|n\right\rangle\end{split} \tag{3}\] where each coefficient \(c_{n}(t)\) is expanded to higher-order approximations by recursively integrating Eq. 2. The time dependence is evaluated with the following expression: \[\begin{split}\left\langle m|\tilde{\psi}(\Delta t_{0})\right\rangle=\sum_{n}\Big{(}&\left\langle m\right|\Big{(}1-\frac{i}{\hbar}\int_{0}^{\Delta t_{0}}dt_{1}\hat{V}_{I}(t_{1})\\ &+\Big{(}\frac{-i}{\hbar}\Big{)}^{2}\int_{0}^{\Delta t_{0}}dt_{1}\int_{0}^{t_{1}}dt_{2}\hat{V}_{I}(t_{1})\hat{V}_{I}(t_{2})+...\Big{)}\left|n\right\rangle\Big{)}c_{n}(0)\end{split} \tag{4}\] where \(\Delta t_{0}\) is the measured time interval, \(t_{1}\) and \(t_{2}\) are distinct time integration parameters, and the potential \(\hat{V}_{I}\) is written in the interaction picture. The first term in the final line in Eq. 4 is the second-order term in the Dyson series. Its difficulty lies in the nested time integrals. [26]

### First-order recursive Fourier transform

We now derive a new result. The first-order term in Eq. 4 has the well-known form: \[\begin{split} c_{a}^{(1)}(\Delta t_{0})&\propto\int_{0}^{\Delta t_{0}}dt_{1}\left\langle\omega_{a}|\tilde{V}|\omega_{z}\right\rangle V(t_{1})e^{i(\omega_{a}-\omega_{z})t_{1}}c_{z}(0)\\ &\approx\tilde{V}\left(\omega_{a}-\omega_{z}\right)\end{split} \tag{5}\] where \(\sim\) indicates the Fourier transform. In the second line, a standard approximation was made by allowing the time interval to become infinite, "asymptotic time," so that Eq. 5 becomes the Fourier transform of the potential. For the alternate approach, we used the definition of the interaction picture \(V_{I}\equiv\exp{(i\hat{H}_{0}t)}V(t)\exp{(-i\hat{H}_{0}t)}\) to write the first-order term in Eq. 4 in a suggestive form, \[\begin{split} c_{a}^{(1)}(\Delta t_{0})&=\frac{1}{i\hbar}\sum_{z}\int_{0}^{\Delta t_{0}}dt_{1}\left\langle\omega_{a}\right|e^{i\hat{H}_{0}t_{1}}V(t_{1})e^{-i\hat{H}_{0}t_{1}}\left|\omega_{z}\right\rangle c_{z}(0)\\ &=\frac{1}{i\hbar}\int_{0}^{\Delta t_{0}}dt_{1}e^{i\omega_{a}t_{1}}V(t_{1})\sum_{z}e^{-i\omega_{z}t_{1}}\left\langle\omega_{a}\right|\tilde{V}\left|\omega_{z}\right\rangle c_{z}(0)\end{split} \tag{6}\] where we have strategically arranged the sum and integral operations as far right as possible.
Next, we extend the domain of integration to all time by inserting an indicator function (or mask) that is non-zero only within the specified time range, \(0\rightarrow\Delta t_{0}\), \[c_{a}^{(1)}(\Delta t_{0})=\frac{1}{i\hbar}\int_{-\infty}^{\infty}dt_{1}e^{i \omega_{a}t_{1}}rect\left(\frac{t_{1}}{\Delta t_{0}}-\frac{1}{2}\right)V(t_{1 })\sum_{z}e^{-i\omega_{z}t_{1}}V_{az}\,c_{z}(0) \tag{7}\] where \(V_{az}=\langle\omega_{a}|\tilde{V}|\omega_{z}\rangle\) is the matrix element of the potential that connects the initial and final states. Everything on the right-hand side is written in the time domain over parameter \(t_{1}\), but we perform the calculation in the frequency domain. By writing \(rect(t)\), \(V\), and \(\exp{(-i\omega_{z}t_{1})}\) as the inverse Fourier transforms of their Fourier transforms, we can write the integral as: \[\begin{split}\underset{t_{1}\rightarrow\omega}{\mathcal{F}}& \left\{\mathcal{F}_{\omega^{\prime}\to t_{1}}^{-1}\Big{\{} \Delta t_{0}\exp{(i\omega^{\prime}\Delta t_{0}/2)}sinc(\omega^{\prime}\Delta t _{0}/2)\Big{\}}\underset{\omega^{\prime}\to t_{1}}{\mathcal{F}}^{-1} \Big{\{}V(\omega^{\prime})\Big{\}}\underset{\omega^{\prime}\to t_{1}}{ \mathcal{F}}^{-1}\Big{\{}\delta(\omega^{\prime}-\omega_{z})\Big{\}}\right\} \Bigg{|}_{\omega=\omega_{a}},\end{split} \tag{8}\] where \(sinc(x)\equiv sin(x)/x\), and the symbol \(\underset{t_{1}\rightarrow\omega}{\mathcal{F}}\) is an obvious notation that makes explicit that the transform is converting from an expression in \(\Delta t_{1}\) to an expression in \(\omega\). By applying the convolution theorem to Eq. 8, we obtain a first-order transition amplitude, \[\boxed{c_{a}^{(1)}(\omega_{a};\Delta t_{0})=\frac{\Delta t_{0}}{2\pi i\hbar} \sum_{z}V_{az}\left(e^{i\omega\Delta t_{0}/2}sinc(\omega\Delta t_{0}/2)* \tilde{V}(\omega)*\delta(\omega-\omega_{z})\right)\Big{|}_{\omega=\omega_{a}}c _{z}(0)} \tag{9}\] where \(c_{z}(0)\) is the amplitude of state \(\omega_{Z}\) at \(t=0\). This is the first main result, for an incoming eigenstate of energy \(\omega_{z}\). Note that the outer operation is not technically a Fourier transform to \(\omega\) but rather a projection onto a single \(\exp{(i\omega_{a}t_{1})}\) basis state. To accomplish this, we computed a Fourier transform to switch to a continuous energy basis and then evaluated the result at a discrete energy \(\omega=\omega_{a}\). Conceptually this is important because \(\omega^{\prime}\) is a dummy variable that is replaced by the measurable energy \(\omega_{a}\). \(c(\omega_{a})\) is the amplitude of a particle with a sharply defined energy to be deflected to the energy \(\omega_{a}\). Applied to an ensemble of identical incoming particles, \(c(\omega_{a})\) provides the distribution of the energies of the outgoing particles. For ease of calculation and to be concrete, we used a Gaussian potential 13 in the simulation, as shown in Fig. 1. ### Examples Two standard cases will be considered to illustrate the validity of the result above. #### ii.2.1 Example: Fermi's Golden Rule It will first be verified that Eq. 9 reduces to the well-known Fermi Golden Rule. Consider the Hamiltonian for an oscillation perturbation potential: \[\begin{split}\hat{H}&=\hat{H}_{0}\text{ for }t\leq 0\\ \hat{H}&=\hat{H}_{0}+\hat{V}(t)\text{ for }t>0\\ \hat{V}(t)&=2\hat{V}cos(\omega_{a}t)\end{split} \tag{10}\] Figure 1: Experimental case: The Gaussian potential’s tails are truncated by the measurement. The integration windows for the TDSE are marked as vertical lines. 
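Before continuing with this example, it may help to see the boxed first-order result, Eq. 9, evaluated numerically. The following Python sketch is an illustration, not the author's code: \(\hbar=1\), the matrix element, the window length, and the kick width are arbitrary assumptions. It builds the phase-shifted \(sinc\) kernel and the Fourier transform of a Gaussian kick on a frequency grid, approximates the continuous convolutions by discrete ones, and reads off the amplitude at a chosen final frequency.

```python
# Numerical sketch of the boxed first-order result (Eq. 9); all parameters arbitrary.
import numpy as np

hbar = 1.0
dt0 = 4.0              # measurement window Delta t_0 (arbitrary)
tau = 0.5              # width of the Gaussian kick (arbitrary)
omega_z = 0.0          # initial eigenfrequency
V_az = 1.0             # matrix element, set to 1 for illustration

omega = np.linspace(-40.0, 40.0, 4001)
dw = omega[1] - omega[0]

# phase-shifted sinc kernel of Eq. 9: Delta t_0 * exp(i w dt0/2) * sinc(w dt0/2)
# (np.sinc(x) = sin(pi x)/(pi x), hence the 2*pi in the argument)
kernel = dt0 * np.exp(1j * omega * dt0 / 2) * np.sinc(omega * dt0 / (2 * np.pi))

# Fourier transform of the Gaussian kick V(t) ~ exp(-t^2/(2 tau^2)), up to constants
V_tilde = tau * np.sqrt(2 * np.pi) * np.exp(-(omega ** 2) * tau ** 2 / 2)

# grid approximation of the continuous convolution; the remaining convolution
# with delta(w - omega_z) is just a shift of the argument by omega_z
conv = dw * np.convolve(kernel, V_tilde, mode="same")

def c1(omega_a):
    """First-order amplitude of Eq. 9 at final frequency omega_a (sketch, hbar = 1)."""
    val = (np.interp(omega_a - omega_z, omega, conv.real)
           + 1j * np.interp(omega_a - omega_z, omega, conv.imag))
    return V_az * val / (2j * np.pi * hbar)

print(abs(c1(0.0)), abs(c1(2.0)))   # amplitudes at two illustrative final energies
```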
The tails of the Gaussian are excluded, leading to Gibbs ringing in the frequency domain (not shown). The standard integral in Eq. 5 results in the following coefficient for the \(z\to a\) transition: \[\begin{split} c_{a}^{(1)}(t)&=\frac{1}{i\hbar}\int_{0}^ {t}dt_{1}e^{i(\omega_{a}-\omega_{z})t_{1}}(e^{i\omega_{d}t_{1}}+e^{-i\omega_{d}t _{1}})\left\langle\omega_{a}|\hat{V}|\omega_{z}\right\rangle\\ &=\frac{V_{az}}{i\hbar}\left(\frac{e^{i(\omega_{az}+\omega_{d})t}- 1}{i(\omega_{az}+\omega_{d})}+\frac{e^{i(\omega_{az}-\omega_{d})t}-1}{i(\omega _{az}-\omega_{d})}\right)\\ &\approx\frac{V_{az}}{i\hbar}e^{i\frac{(\omega_{az}-\omega_{d})}{ 2}t}\left(\frac{sin((\omega_{az}-\omega_{d})t/2)}{(\omega_{az}-\omega_{d})} \right),\end{split} \tag{11}\] where \(\omega_{az}\equiv\omega_{a}-\omega_{z}\), and \(t\rightarrow\Delta t_{0}\) is defined as a finite-time interval. We dropped the first term, as is customary, in favor of the second term, which dominates around the resonant frequency \(\omega_{az}=\omega_{d}\). [27] Starting in a known state \(c_{z}=1\) so the sum over \(z\) can be dropped, then computing \(\mathcal{F}\{V(t)\}\approx\frac{1}{2}\left(\delta(\omega-\omega_{d})+\delta( \omega+\omega_{d})\right)\) but dropping the \(+\) term as above, Eq. 8 gives \[\begin{split} c_{a}^{(1)}(\Delta t_{0})&=\frac{1}{2 \pi i\hbar}V_{az}\left(\Delta t_{0}e^{i\omega\Delta t_{0}/2}sinc(\omega\Delta t _{0}/2)*\delta(\omega-\omega_{d})*\delta(\omega-\omega_{z})\right)\Big{|}_{ \omega=\omega_{a}}\\ &=\frac{1}{2\pi i\hbar}V_{az}\,\Delta t_{0}e^{i(\omega_{a}-\omega _{z}-\omega_{d})\Delta t_{0}/2}sinc((\omega_{a}-\omega_{z}-\omega_{d})\Delta t _{0}/2)\end{split} \tag{12}\] which is the standard result (up to constant factors). Figure 2: A Gaussian potential in \(\omega\)-space convolved with \(sinc(\omega\Delta t)\) in Eq. 9. Example: Kicked harmonic oscillator Another simple system to consider is the harmonic oscillator "kicked" by a small Gaussian pulse, \[V(t)\propto e^{-\frac{t^{2}}{2\tau^{2}}}, \tag{13}\] where \(\tau\) is the characteristic time of the gaussian. Using Eq. 5, the integral over a finite duration is typically extended to asymptotic time to become a Fourier transform, resulting in the _approximate_ expression \[c^{(1)}(t\rightarrow\infty)\propto\frac{\tau}{\sqrt{\omega_{(0)}}}e^{-\omega^ {2}\tau^{2}/2}, \tag{14}\] where \(\omega_{(0)}\) is a normalization constant owing to the fixed natural frequency of the oscillator, and \(\omega\) parameterizes the frequency response of the Gaussian perturbation. Using instead Eq. 9, assuming that the initial oscillator is in a single \(z\) energy eigenstate \(\exp\left(-i\omega_{z}\tau\right)\left|z\right>\) with certainty (\(c_{z}=1\)), an expression can be written for the amplitude associated with obtaining the \(a\)th energy eigenstate, \[c^{(1)}(\omega_{a})=\frac{\tau}{2\pi i\hbar}V_{az}\left(e^{i\omega\tau/2}sinc( \omega\tau/2)*e^{-\omega^{2}\tau^{2}/2}*\delta(\omega-\omega_{z})\right)\Bigr{|} _{\omega=\omega_{a}} \tag{15}\] Eqs. 14 and 15 convey similar information; however, Eq. 15 is _exact_. In Eq. 15, the normalization factor based on the oscillator frequency was incorporated into the definition of \(V_{az}\). Both equations include a linear factor for the time interval \(\tau\). In the latter expression, the Gaussian dependence on \(\omega\) is centered on the initial state, as expected for a kicked state, and is convolved with a phase shifted \(sinc\) function. The effect of convolution is to add a small ripple to the Gaussian. (See Fig. 
2) This ripple was ignored in the standard approach (Eq. 14), when the limits of integration are arbitrarily set to infinity. ## III Decoupling Tdse through second-order Rft Having examined the familiar first-order result using recursive Fourier transform methods, we now derive our second main result: an expression for the second-order term in the TDSE expansion. The integrals for the second-order amplitudes are more complicated because the upper limit of integration for the nested integral is the integration parameter for the outer integral \(t_{1}\). Starting from Eq. 4, by inserting a discrete basis of equally spaced states, the second-order transition amplitude is: \[\begin{split} c_{a}^{(2)}(\omega_{a};\Delta t_{0})=&\frac {1}{(i\hbar)^{2}}\sum_{bz}\int_{0}^{\Delta t_{0}}dt_{1}\int_{0}^{t_{1}}dt_{2} \bra{\omega_{a}}\hat{V}_{I}(t_{1})\ket{\omega_{b}}\bra{\omega_{b}}\hat{V}_{I}(t _{2})\ket{\omega_{z}}c_{z}(0)\\ =&\frac{1}{(i\hbar)^{2}}\int_{0}^{\Delta t_{0}}dt_{1} \Big{\{}e^{i\omega_{a}t_{1}}V(t_{1})\sum_{b}e^{-i\omega_{b}t_{1}}V_{ab}\\ &\qquad\qquad\qquad\int_{0}^{t_{1}}dt_{2}e^{i\omega_{b}t_{2}}V(t _{2})\sum_{z}e^{-i\omega_{z}t_{2}}V_{bz}\,c_{z}(0)\Big{\}}\end{split} \tag{16}\] The integrals are therefore coupled, and the method in sec. II.1 must be modified. This is a Dyson series, and was decoupled by Dyson by introducing the time ordering operator. This is used widely in quantum field theory.[3] Here the integrals will be decoupled in a new way in the following four steps. **Step 1**.: _Apply the convolution theorem to the nested integral_ The limits of integration of the nested integral are extended to infinity, using a rectangular mask, as in Eq. 7, \[\begin{split} c_{a}^{(2)}(\omega_{a};\Delta t_{0})=& \frac{1}{(i\hbar)^{2}}\int_{0}^{\Delta t_{0}}dt_{1}e^{i\omega_{a}t_{1}}V(t_{1}) \sum_{b}e^{-i\omega_{b}t_{1}}V_{ab}\\ &\Big{\{}\int_{-\infty}^{\infty}dt_{2}e^{i\omega_{b}t_{2}}rect \left(\frac{t_{2}}{\Delta t_{1}}-\frac{1}{2}\right)V(t_{2})\sum_{z}e^{-i\omega_ {z}t_{2}}V_{bz}\,c_{z}(0)\Big{\}}\end{split} \tag{17}\] Because we truncated the signal using a \(rect(t)\) mask, this step was exact. The integrals are still coupled via \(\Delta t_{1}\), but _the coupling now parameterizes the width of the mask rather than the integration domain_. By following the steps in Eq. 7, we can write each factor in the integrand of the second line of Eq. 17 in the frequency domain. \[\begin{split} c_{a}^{(2)}=&\frac{1}{2\pi(i\hbar)^{ 2}}\int_{0}^{\Delta t_{0}}dt_{1}e^{i\omega_{a}t_{1}}V(t_{1})\sum_{bz}e^{-i \omega_{b}t_{1}}V_{ab}V_{bz}c_{z}(0)\\ &\underbrace{\mathcal{F}}_{t_{2}\rightarrow\omega_{b}}\Big{\{} \mathcal{F}^{-1}_{\omega\to t_{2}}\{\Delta t_{1}\exp{(i\omega\Delta t _{1}/2)}sinc(\omega\Delta t_{1}/2)\cdot\mathcal{F}^{-1}_{\omega\to t _{2}}\{\tilde{V}(\omega)\}\mathcal{F}^{-1}_{\omega\to t_{2}}\{ \delta(\omega-\omega_{z})\}\Big{\}}}\end{split} \tag{18}\] and apply the convolution theorem, \[\begin{split} c_{a}^{(2)}=&\frac{1}{2\pi(i\hbar)^{ 2}}\int_{0}^{\Delta t_{0}}dt_{1}e^{i\omega_{a}t_{1}}V(t_{1})\sum_{bz}e^{-i \omega_{b}t_{1}}V_{ab}V_{bz}c_{z}(0)\\ &\qquad\left(\Delta t_{1}e^{i(\omega-\omega_{z})\Delta t_{1}/2}sinc ((\omega-\omega_{z})\Delta t_{1}/2)*\tilde{V}(\omega)\right)\Big{|}_{\omega= \omega_{b}}\end{split} \tag{19}\] In evaluating the Fourier transforms, we have transformed bases from the original parameter of integration, \(\omega\), to \(t_{2}\) and then to \(\omega_{b}\), an intermediate basis of energy states. 
Note that the expression inside the parenthesis on the last line of Eq. 19_is a continuous distribution in a dummy parameter \(\omega\), evaluated at a specific value \(\omega=\omega_{b}\) after performing the convolution_. The nested integral is now a convolution in \(\omega\)-space, but the \(sinc\) function's width depends on \(t_{1}\), which is coupled to the outer integral. How do we compute a convolution of a signal whose shape is changing as \(t_{1}\) is integrated over? **Step 2.**_Discretize the integral over \(\Delta t_{1}\) as a Riemann sum and move it inside the sum over \(b\) and \(z\)_ It is easier to handle Eq. 19 by writing the integral over \(\Delta t_{1}\) as a Riemann sum of step size \(T\), and rearranging the sums:[28] \[\begin{split} c_{a}^{(2)}=&\frac{1}{2\pi(i\hbar)^{2} }\sum_{bz}V_{ab}V_{bz}c_{z}(0)\,\sum_{n=n_{i}}^{n_{f}}T\,e^{i\omega_{a}nT}V(nT) e^{-i\omega_{b}nT}\\ &\Big{(}nT\exp{(i(\omega-\omega_{z})nT/2)}sinc((\omega-\omega_{z })nT/2)*\tilde{V}(\omega)\Big{)}\Big{|}_{\omega=\omega_{b}}\end{split} \tag{20}\] Because the second line is a distribution in \(\omega\) evaluated at a specific point, it is simply a c-number for each term in the Riemann sum. **Step 3.**_Allow the variation over time to vary the width of the distributions \(sinc(\omega\Delta t_{1})\)_ Here is the central insight to decouple the integrals. For each step in the Riemann sum over \(n\) (coupling variable), the impulse response is defined as: \[\boxed{h_{bz}[\Delta t_{1}]\equiv\Big{(}\exp{(i(\omega-\omega_{z})\Delta t_{1 }/2)}\frac{sin((\omega-\omega_{z})\Delta t_{1}/2)}{(\omega-\omega_{z})/2}* \tilde{V}(\omega_{a})\Big{)}\Big{|}_{\omega=\omega_{b}}} \tag{21}\] where \(\Delta t_{1}=nT\). Eq. 21 can be seen as an impulse response of the system to a perturbation of duration \(\Delta t_{1}\). Eq. 21 is the second line of Eq. 20 (see fig. 4). **Step 4.**_Apply the convolution theorem to the outer integral_ Figure 3: The width of the \(sinc\) function depends on \(\Delta t_{1}\). In the figure, \(\Delta t_{1}\) increase from top to bottom. While varying the \(sinc\) over \(\Delta t_{1}\) we take samples at \(\omega_{b}\) (diamond on vertical line). The sample values trace out another \(sinc\) function. Now we can change the Riemann sum back to an integral over \(\Delta t_{1}\). _Crucially, \(h_{bz}\), which is an explicit distribution in \(\omega\)-space, appears inside an integral over time \(\Delta t_{1}\), we interpret it as a function of time rather than frequency_. We can now repeat the earlier technique of extending the integration domain in the first-order to \(\pm\infty\) and inserting a rectangular function of width \(\Delta t_{0}\), \[c_{a}^{(2)}= \frac{1}{2\pi(i\hbar)^{2}}\sum_{bz}V_{ab}V_{bz}c_{z}(0)\int_{- \infty}^{\infty}dt_{1}e^{i\omega_{a}\Delta t_{1}}rect\left(\frac{t_{1}}{ \Delta t_{0}}-\frac{1}{2}\right)V(t_{1})e^{-i\omega_{b}\Delta t_{1}}h_{bz}[t_ {1}]. \tag{22}\] The effect of the outer integral over \(t_{1}\) on the nested integral is to _vary the width \(t_{1}\) (or more precisely \(\Delta t_{1}\)) of the impulse response across the duration of the measurement window from \(0\rightarrow\Delta t_{0}\), and sample it at \(\omega_{b}\) to generate \(h_{bz}[\Delta t_{1}]\)_ (see fig. 3). Because of Step 3, everything inside the \(t_{1}\) integral in Eq. 22 can be treated as a distribution in \(t_{1}\), and the convolution theorem can be used again. 
The \(t_{1}\) integral becomes a Fourier transform by explicitly writing each factor in the \(\omega\)-domain, \[\underset{t_{1}\rightarrow\omega_{a}}{\mathcal{F}}\Bigg{\{}\underset{\omega \to t_{1}}{\mathcal{F}}^{-1}\Big{\{}\Delta t_{0}e^{i\omega\Delta t_{0}/2} since(\omega\Delta t_{0}/2)\Big{\}}\cdot\underset{\omega\to t_{1}}{ \mathcal{F}}^{-1}\Big{\{}\tilde{V}(\omega)\Big{\}}\underset{\omega\to t _{1}}{\mathcal{F}}^{-1}\Big{\{}\delta(\omega-\omega_{b})\Big{\}}\cdot \underset{\omega\to t_{1}}{\mathcal{F}}^{-1}\Big{\{}\tilde{H}_{bz}( \omega)\Big{\}}\Bigg{\}}, \tag{23}\] where \(\tilde{H}_{bz}(\omega_{b},\omega_{z})\equiv\mathcal{F}\{h_{bz}[t_{1}]\}\) is the Fourier transform of the impulse response, which is also called the amplitude transfer function. Figure 4: (Left) Nested \(sinc\) distribution (second line in Eq. 19), varied in width over \(\Delta t_{1}\), while repeatedly sampled at \(\omega_{b}=-2\omega_{(0)}\) (vertical line). As the width of the \(sinc\) decreases its height grows due to the linear factor of \(\Delta t_{1}\). (Middle) The samples oscillate (\(h_{bz}[\Delta t]\)) as \(\Delta t\) is varied. (Right) The Fourier transform \(\tilde{H}_{bz}(\omega)=\mathcal{F}\{h_{bz}[\Delta t]\}\) is a series of spikes representing second-order impulses. These \(\delta\)-functions are convolved in Eq. 24 to place a copy of the outer \(sinc\) at each impulse. The final \(\omega\)-domain expression for second-order transition amplitude from \(z\to a\) is \[\boxed{c_{a}^{(2)}(\omega_{a};\Delta t_{0})=\sum_{bz}c_{bz}\Bigg{\{}\Delta t_{0}e^ {i\omega_{a}\Delta t_{0}/2}sinc\left(\frac{\omega_{a}\Delta t_{0}}{2}\right)* \tilde{V}(\omega_{a})*\delta(\omega_{a}-\omega_{\text{b}})*\tilde{H}_{bz}( \omega_{a})\Bigg{\}}} \tag{24}\] where \[\boxed{\tilde{H}_{bz}(\omega)=\frac{\mathcal{F}}{\Delta t_{1}\to\omega} \Big{\{}\Big{(}\exp{(i(\omega^{\prime}-\omega_{z})\Delta t_{1}/2)}\frac{sin(( \omega^{\prime}-\omega_{z})\Delta t_{1}/2)}{(\omega^{\prime}-\omega_{z})/2}* \tilde{V}(\omega^{\prime})\Big{)}\Big{|}_{\omega^{\prime}=\omega_{b}}\Big{\}}} \tag{25}\] For notational simplicity we have defined \(c_{bz}\equiv c_{z}V_{ab}V_{bz}/(2\pi i\hbar)^{2}\). This is the second main result, expressing the second-order contribution to the transition amplitude owing to the time-limited perturbation of an arbitrary system with energy eigenfunctions \(\psi_{z}\). We assume that the spectra of the energy eigenstates \(\omega_{b}\) and \(\omega_{z}\), are discrete. For simplicity, we consider only the case in which they are equally spaced, that is, a simple harmonic oscillator. Then we can write \(\omega_{bz}=b\omega_{(0)}\). See fig. 5. ## IV Functional analysis We will now examine Eqs. 24 and 25 in more detail. Figure 5: Comparison of first and second-order convolution for a system in an initial state \(\omega_{z}=0\). The second-order calculation computes paths through intermediate energies at \(\pm 20\omega_{(0)}\). For the second-order calculation, the central peak is reduced, the wings are amplified, and the minima are increased. Only the range \(\pm 10\omega_{(0)}\) is shown, but the contributions from terms outside this range have a significant effect on the accuracy of the result. ### The transfer function \(\tilde{H}_{bz}\) Understanding the form of \(\tilde{H}_{bz}\) can be performed first for the case of a negligible potential, \(\tilde{V}(\omega_{a})=\delta(\omega)\) (infinitely wide in the time domain). 
Because convolution with a \(\delta\)-function is the identity operation, we can then evaluate \(h_{bz}[t]\) explicitly and take its Fourier transform, \[\begin{split}\tilde{H}_{bz}&=\underset{\Delta t_{1} \rightarrow\omega}{\mathcal{F}}\Big{\{}\exp\left(i\omega_{bz}\Delta t_{1}/2 \right)\frac{sin(\omega_{bz}\Delta t_{1}/2)}{\omega_{bz}/2}\Big{\}}\\ &=\frac{2\pi i}{\omega_{bz}}\left(\delta(\omega+\omega_{bz}/2+ \omega_{bz}/2)-\delta(\omega-\omega_{bz}/2+\omega_{bz}/2)\right)\\ &=\frac{2\pi i}{\omega_{bz}}\left(\delta(\omega+\omega_{bz})- \delta(\omega)\right).\end{split} \tag{26}\] In other words, the transfer function is composed of a series of discrete impulses spaced at integer multiples of \(\omega_{(0)}\) (because \(\omega_{bz}=b\omega_{(0)}\)). See fig. 4. Next \(\tilde{H}_{bz}\) is convolved with the other factors in Eq. 25, resulting in \[\begin{split}\delta(\omega-\omega_{b})*\tilde{H}_{bz}& =\frac{2\pi i}{\omega_{bz}}\delta(\omega-\omega_{b})*\left( \delta(\omega+\omega_{bz})-\delta(\omega)\right)\\ &=\frac{2\pi i}{\omega_{bz}}\left(\delta(\omega-\omega_{z})- \delta(\omega-\omega_{b})\right)\end{split} \tag{27}\] As shown in figs. 6 and 7, each term in the sum over \(b\) contributes complex impulses at \(\omega_{b}\) and \(\omega_{z}\). From top to bottom the cases \(b=-1,0,1\) are graphed consecutively, with \(z=0\). (imaginary portion is solid, real portion is dashed.) The top and bottom graphs correspond to Eq. 26. The special case \(b=z\) must be handled separately. Here, the desirable properties of the \(sinc\) function at the origin are required, and Eq. 26 is the Fourier transform of a constant: \[\tilde{H}_{00}=\mathcal{F}\Big{\{}\text{constant}\Big{\}}=2\pi\delta(\omega). \tag{28}\] This contributed to a purely real amplitude at the origin, as shown in the middle graph of fig. 6. Summing over all \(\tilde{H}_{bz}\) contributions over the range \(b=\pm 25\), it can be seen in fig. 8 that for \(b\neq 0\), \(\tilde{H}_{bz}\) contributes a real portion at \(\omega_{b}=0\) which is amplified as more frequencies are included (\(b\rightarrow\pm\infty\)) whereas the real portion remains small and finite at every other \(\omega_{b}\), vanishing when the distribution is normalized (top). Conversely, the imaginary portions cancel at \(\omega_{b}=0\) but are significant everywhere else, decaying inversely with respect to \(|b|\) (middle). An analogy can be drawn to the frequency-domain decomposition of sound signals. In the second-order calculation, the probability amplitude signal was deflected into a series of higher harmonics. Similarly, an acoustic musical instrument generates sound through the combination of a pluck (impulse) and resonant cavity that amplifies higher harmonics (impulse response). This is similar to the relationship between \(\tilde{H}_{bz}\) in Eq. 24 and the rest of that equation. Figure 6: Contributions to the second-order complex valued “transfer function” \(\tilde{H}_{bz}\) from the frequencies \(b=-1,0,1\). The cases \(b=\pm 1\) contribute a real portion (solid) and an imaginary portion (dashed) at \(\omega_{b}=\pm 1\). The case \(b=0\) contributes only a real portion (solid). See Eq. 26 and 28. Figure 7: The frequency-distributions for \(b=2,3\) contributions. (Top) The real (solid) and imaginary (dashed) parts of the complex \(sinc\) function in Eq. 24, corresponding to the outer integral over \(t_{1}\) in Eq. 19. (2nd/3rd row) The transfer function \(\tilde{H}_{bz}(\omega)\) in Eq. 
25 captures information from the nested \(t_{2}\) integral as a series of spikes, \(\delta(\omega-2\omega_{(0)})+\delta(\omega)\) (left), \(\delta(\omega-3\omega_{(0)})+\delta(\omega)\) (right). See Eq. 26. (4th row) The real (solid) and imaginary (dashed) parts of the convolution in Eq. 24. (Bottom) The combined result from \(b=2\) and \(b=3\).

### Stepping through the algorithm for transfer function

A comparison was made between the direct integration of Eq. 4 and the convolution approach in Eqs. 24 and 25 using MATLAB. The program begins by generating a nested impulse response, Eq. 21. This has the form \(sinc*V\) and width \(\Delta t_{1}\), as shown in Fig. 4 (far left). This impulse response (in \(\omega\)) is then _varied over time_ across the integration limits \(0<\Delta t_{1}<\Delta t_{0}\), generating a sequence of \(sinc\) graphs of varying widths. The sample value at the vertical line \(\omega_{b}\) for each graph was stored as a new array \(h_{bz}[\Delta t]\) (Fig. 4, middle). For a given \(\omega_{b}\), these samples oscillate with a frequency profile that depends on the physics of the experiment (such as the properties of the potential and the duration of the measurement window). The Fourier transform of \(h_{bz}[\Delta t]\) is \(\tilde{H}_{bz}(\omega)\) (Fig. 4, right panel). It is a series of impulses representing each intermediate contribution to the second-order amplitude (see Figs. 6 and 8). If the perturbation is negligible, we can write \(\tilde{V}(\omega)=\delta(\omega)\). In this case \(\tilde{H}_{bz}\) is composed of \(\delta\)-function impulses at \(\omega_{b}\) and \(\omega_{z}\). If the potential is strong, other harmonics appear in this graph (see Fig. 9, bottom right). Finally, in Eq. 24, \(\tilde{H}_{bz}\) is convolved with a phase-shifted \(sinc\) impulse response so that a copy of the impulse response is placed wherever \(\tilde{H}_{bz}\) has a spike, as shown in Fig. 7. This is performed for every possible intermediate state \(\omega_{b}\), and the amplitude plots for each are summed. Each code loop over \(\omega_{b}\) contributes an impulse response centered at \(\omega_{b}\) and another centered at \(\omega_{z}\) (Fig. 6). After looping over all \(2b+1\) intermediate distributions, the impulses centered on \(\omega_{z}\) reinforce \(2b+1\) times, whereas the second-order signal at each \(\omega_{b}\) appears only once. The result is a strong central peak and decaying wings (Fig. 5).

Figure 8: \(\tilde{H}_{bz}(\omega)\) for \(-25<b<25\). The contribution to \(\tilde{H}_{bz}(\omega)\) at \(b=0\) is non-zero for every \(b\neq 0\), growing without bound as we include more momenta (\(b\rightarrow\infty\)); see Eq. 28. There is also a small, constant, real contribution at each \(\omega_{b}\), which vanishes when the distribution is normalized (top). The imaginary portions cancel at \(\omega_{b}=0\) and build up everywhere else, decaying inversely with \(|b|\) (middle); see Eq. 26. The absolute magnitude of \(\tilde{H}_{bz}\) is a harmonic series of impulses (bottom). These impulses are convolved with the first-order impulse response, Eq. 9.

### Domain and resolution of transfer function

In the code implementation of Eq. 21, the length of \(h_{bz}[\Delta t_{1}]\) is not equal to the length of the original signal. This is because \(h_{bz}[\Delta t_{1}]\) is generated by scanning \(\Delta t_{2}\) over the variable range \(0<\Delta t_{2}<t_{1}\). Thus, the corresponding resolution of its Fourier transform, \(\tilde{H}_{bz}\), is reduced (fig. 9, top left).
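A compact Python analogue of the MATLAB procedure just described (an assumed re-implementation, not the author's code) is sketched below: the nested \(sinc\) of Eq. 21 is rebuilt for each width \(\Delta t_{1}\), convolved with \(\tilde{V}(\omega)\), sampled at \(\omega_{b}\) to form \(h_{bz}[\Delta t_{1}]\), and finally Fourier transformed to obtain \(\tilde{H}_{bz}(\omega)\). The grids, the weak Gaussian potential, and the choice \(\omega_{b}-\omega_{z}=2\omega_{(0)}\) are illustrative assumptions, and the resolution limitation noted above and the padding compensation discussed next are ignored for simplicity.

```python
# Sketch of the transfer-function construction of Eqs. 21 and 25 (illustrative only).
import numpy as np

omega = np.linspace(-60.0, 60.0, 2001)       # frequency grid (arbitrary)
dw = omega[1] - omega[0]
dt0 = 2 * np.pi                               # measurement window (arbitrary)
t1_grid = np.linspace(1e-6, dt0, 256)         # scanned widths Delta t_1
omega_z, omega_b = 0.0, 2.0                   # initial and intermediate frequencies (hypothetical)
tau = 0.3
V_tilde = np.exp(-(omega ** 2) * tau ** 2 / 2)  # weak Gaussian potential in omega-space

def nested_sinc(dt1):
    """exp(i(w-w_z)dt1/2) * sin((w-w_z)dt1/2) / ((w-w_z)/2), the kernel of Eq. 21."""
    x = (omega - omega_z) * dt1 / 2
    core = np.where(np.abs(omega - omega_z) > 1e-12,
                    np.sin(x) / ((omega - omega_z) / 2),
                    dt1)                       # finite limit dt1 as w -> w_z
    return np.exp(1j * x) * core

# build h_bz[Delta t_1] by convolving with V~ and sampling at omega = omega_b
h_bz = np.empty(len(t1_grid), dtype=complex)
for k, dt1 in enumerate(t1_grid):
    conv = dw * np.convolve(nested_sinc(dt1), V_tilde, mode="same")
    h_bz[k] = (np.interp(omega_b, omega, conv.real)
               + 1j * np.interp(omega_b, omega, conv.imag))

# transfer function H~_bz(omega): Fourier transform over Delta t_1
H_bz = np.fft.fftshift(np.fft.fft(h_bz))
freqs = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t1_grid), d=t1_grid[1] - t1_grid[0]))
print("largest spike of |H_bz| near omega =", freqs[np.argmax(np.abs(H_bz))])
```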
To compensate for the band-limited spectrum, \(\tilde{H}_{bz}\) was padded with copies of itself. This is necessary for the convolution operation to be well-defined. The time window \(\Delta t_{0}\) was chosen to be an integer fraction of the duration of the original signal so that the padding fits evenly (this is necessary to avoid artifacts in the Fourier transform). This defines a fundamental harmonic frequency associated with the measurement, \[\omega_{(1)} \equiv\text{total duration of original signal/duration of time integration} \tag{29}\] \[=\frac{1}{\Delta t_{0}},\] (see the harmonic spacing in Fig. 8). When the Gaussian potential is weak, the tiled instances of \(h_{bz}[\Delta t_{1}]\) line up smoothly, and \(\tilde{H}_{bz}\) only contains two spikes, as in Eq. 26 and fig. 6. When the potential is stronger, \(h_{bz}[\Delta t_{1}]\) does not line up on its endpoints, and spectral artifacts occur at integer multiples of \(\omega_{(1)}\). For reasons that are not fully understood by the author, the interpretation of the second-order results is clear only when \(\omega_{(1)}=\omega_{(0)}\), which is known as cyclotron resonance. This appears to be related to the interpretation of Eqs. 24 and 25 as a signal reconstruction problem using sinc-interpolation: This is the only case considered in this study. ### Effect of time window shift on \(\tilde{H}_{bz}\) In Fig. 8, the imaginary part of \(\tilde{H}_{bz}\) decays inversely with \(|b|\). The time measurement window was shown to extend from the origin to \(\Delta t\), leading to a translational factor of \(\frac{1}{2}\) in the \(rect(t)\) function. In the general case, the measurement window can be translated \(r\) units by shifting the \(rect(t)\) function again, \(rect(\frac{t}{\Delta t}-\frac{1}{2}-\frac{r}{\Delta t})\), leading to an overall phase shift in the frequency domain, \(\exp\left(i\omega_{b}(r/\Delta t)\right)\). This leads to an oscillating envelope for the impulses, as shown in Fig. 8 (not shown). In the MATLAB simulation a phase shift of this sort was introduced to compensate for coding artifacts (the base index for the time window started at 1 instead of 0). ### Normalizability The appearance of \(\omega_{bz}\) in the denominator of Eq. 21 inside summation over both \(z\) and \(b\) is the cause for questioning whether this expression can be normalized. However, owing to the good properties of the \(sinc\) function, \(h_{bz}\) and \(\tilde{H}_{bz}\) are non-singular. To observe this, note that when \(1/\omega_{bz}\) becomes singular, we use Eq. 28 (which is well-defined) instead of Eq. 26. In general, \(\tilde{H}_{bz}\) is a series of harmonics of spacing \(\omega_{(0)}\), as shown in Fig. 8. The middle plot shows imaginary impulses at every non-zero integer \(\omega_{b}\) that form a harmonic series, which is well-known to not converge: therefore it is not clear whether the final expression eqn, 24 is convergent. The upper plot shows an impulse at \(w_{z}=0\) resulting from each term in the sum over \(b\). The height of this impulse increases without bounds for \(b\to\pm\infty\). This is ultraviolet divergence. This can easily be resolved from a practical perspective. Because the height of the impulse at the origin is proportional to the size of the domain, \(|b_{max}|\), in the code implementation, this expression can be normalized by dividing by the maximum value of \(b\), where only a finite number of terms are included. 
From a theoretical perspective regarding the convergence of the second-order, the issue is whether the _sinc_ functions, each of which are normalizable and arranged in a harmonic series (which does not converge) are normalizable. This was not addressed in this study. ### Example: Second-order harmonic perturbation Golden Rule To illustrate the results of Section III we compare the recursive Fourier transform method with the standard approach in the second-order. Figure 9: Bandlimited signal: Because the duration of the time integration of the TDSE is less than the full signal (5% shown here, top left), \(h_{bz}[t]\) is smaller than \(\tilde{V}\) by that factor, and the resolution is decreased (lower left, the distinct spikes are only resolvable because horizontal scale is expanded). In the upper right, \(h_{bz}[t]\) was padded with copies of itself (upper right) to compensate for the limited bandwidth of the signal. This ensures its Fourier transform has the desired high resolution (lower right). Shown here is the \(b=+2\) term of Eq. 25. A moderate strength potential was used, resulting in harmonics at nearby states (see small spikes at \(b=+1,+3\) and other integers in bottom right). Consider the ramped up oscillating potential, \[V(t)=e^{\epsilon t}e^{-i\omega_{d}t}, \tag{30}\] where \(\epsilon\) is a small constant which ensures ramp up of the potential from \(t=-\infty\), and \(\omega_{d}\) is the driving frequency of the potential. Via traditional application of the TDSE to second-order, we integrate twice to obtain \[c(t)=\left(-\frac{i}{\hbar}\right)^{2}\frac{e^{i(\omega_{fi}-2\omega_{d})t}e^{ 2\epsilon t}}{\omega_{fi}-2\omega_{d}-2i\epsilon}\sum_{n}\frac{V_{fn}V_{ni}}{ \omega_{n}-\omega_{i}-\omega_{d}-i\epsilon}. \tag{31}\] Taking the rate of the squared amplitude in the small \(\epsilon\) limit results in \[\lim_{\epsilon\to 0}\frac{d}{dt}|c_{f}(t)|^{2}\propto\Big{|}\sum_{n}\frac{V_{ fn}V_{ni}}{\omega_{n}-\omega_{i}-\omega_{d}-i\epsilon}\Big{|}^{2}\delta( \omega_{f}-\omega_{i}-2\omega_{d}) \tag{32}\] where the \(\delta\)-function comes from the small \(\epsilon\) limit of \[\lim_{\epsilon\to 0}\frac{2\epsilon}{(\omega_{fi}-2\omega_{d})^{2}+\epsilon^{2}} \Rightarrow\delta(\omega_{fi}-2\omega_{d}) \tag{33}\] Alternatively, using Eqs. 24 and 25 (to find the probability amplitude rather than the probability), \[c_{f}(\omega_{f},t)\propto\sum_{n}V_{fn}V_{ni}te^{i(\omega_{f}-\omega_{n})t/2 }sinc((\omega_{f}-\omega_{n})t/2)*\tilde{V}(\omega_{f})*\delta(\omega_{f}- \omega_{n})*\tilde{H}_{ni}(\omega_{f}), \tag{34}\] where \[\tilde{H}_{ni}(\omega_{f})=\underset{t_{1}\rightarrow\omega_{f}}{\mathcal{F} }\Big{\{}\frac{e^{i(\omega^{\prime}-\omega_{i})\frac{t_{1}}{2}}sin((\omega^{ \prime}-\omega_{i})\frac{t_{1}}{2})}{(\omega^{\prime}-\omega_{i})/2}*\tilde{V }(\omega^{\prime})\Big{|}_{\omega^{\prime}\rightarrow\omega_{n}}\Big{\}} \tag{35}\] Again this simplifies. At the limit of small \(\epsilon\), the Fourier transform of the potential (Eq. 30) is the same as that in Eq., in which case Eq. 26 and 28 can be used to obtain the so-called transfer function, \[\tilde{H}_{ni}(\omega_{f})\propto\frac{1}{(\omega_{n}-\omega_{i}-\omega_{d})/ 2}\left(\delta(\omega_{f}+(\omega_{n}-\omega_{i}-\omega_{d}))-\delta(\omega_{f })\right). \tag{36}\] Plugging into Eq. 
34 obtains \[\begin{split} c_{f}(\omega,t)\propto t\sum_{n}\frac{i}{\omega_{ n}-\omega_{i}-\omega_{d}}\Big{(}& e^{i(\omega_{fi}-2\omega_{d})\frac{t}{2}}sinc((\omega_{fi}-2\omega_{d})\frac{t}{2} )\\ &-e^{i(\omega_{f}-\omega_{n}-\omega_{d})\frac{t}{2}}sinc((\omega_ {f}-\omega_{n}-\omega_{d})\frac{t}{2})\Big{)}.\end{split} \tag{37}\] In the limit of small \(\epsilon\) and small \(t\), the prefactor denominator matches Eq. 32, and the first term inside the parentheses reduces to \(\delta(\omega_{fi}-2\omega_{d})\), which is again consistent with Eq. 32. The second term inside the parenthesis peaks at \(\omega_{f}-\omega_{n}=\omega_{d}\) whereas the prefactor peaks at \(\omega_{n}-\omega_{i}=\omega_{d}\), so that the only non-zero contribution occurs when \(\omega_{fi}=2\omega_{n}\). Summing over \(n\) gives \(n\) peaks located at \(\omega_{fi}=2\omega_{d}\) (combining to form one very high peak), plus one peak for each \(n\) at \(\omega_{fn}=\omega_{d}\) (a discrete series of small peaks, decaying in height as \(\omega_{n}-\omega_{i}-\omega_{d}\) increases), which justifies the approximation that the first term dominates, resulting in the standard result, Eq. 32. At this limit, the time dependence of Eq. 37 is linear; thus the time derivative (of the amplitude, in this case) is constant, as in Eq. 32. ## V Discussion The recursive Fourier transform method for decoupling the Dyson series has application in both experiment and theory. A few possibilities are discussed. ### Bosonic sampling and quantum computation Bosonic sampling [19] with indistinguishable photons represents a computational challenge that can only be tackled by quantum computers, thus would demonstrate so-called quantum supremacy. Tamma and Laibacher explain, "for a given interferometric network, the interference of all the possible multi-photon detection amplitudes...depends only on the pairwise overlap of the spectral distributions of the single photons". [29] They emphasize extracting quantum information from the "spectral correlation landscapes" of photons. [20] So characterizing the frequency spectrum of a single photon is an essential task. Various physical properties are related to the spectra of the photon. Further elaborating, Tamma and Laibacher assert that their results reveal the "ability to zoom into the structure of the frequency correlations to directly probe the spectral properties of the full N-photon input state as well as the symmetries of the linear network which become manifest on the level of the multi-photon amplitudes" [20], where "single-photon states" \[\left|\psi\right\rangle:=\int_{0}^{\infty}d\omega\,c(\omega)e^{i\omega\Delta t _{0}}\hat{a}^{\dagger}(\omega)\left|0\right\rangle \tag{38}\] are characterized by a "frequency distribution \(c(\omega)\in\mathbb{R}(\int d\omega\,c^{2}(\omega)=1)\) and the time of emission." [20] The definition used by the authors for measuring indistinguishability of pairs of photons is \[g(s,s^{\prime})=\int_{-\infty}^{\infty}d\omega\,c_{s}(\omega)\cdot c_{s^{ \prime}}(\omega), \tag{39}\] where the spectra are normalized, \(\int d\omega\,c_{a}^{(1)}(\omega_{a};\Delta t_{0})^{2}(\omega)=1\). [19] Finally, the probability of a given detection in a multi-boson interference set-up is proportional to the spectra, \[P_{a\gets z}\propto\Big{|}\sum_{b}\prod_{a}\mathcal{U}_{ab}c_{\sigma}( \omega_{a})e^{i\omega_{a}\Delta t_{b}}\Big{|}^{2}. \tag{40}\] If the experimental goal is to measure time delays between photons input into the interferometer, Eq. 
40 predicts a beating phenomenon that depends on the time difference. Knowing the spectrum precisely also enables the generation of extremely short photons without the need for a finely-tuned experimental apparatus [24]. If we know \(c(\omega)\) we can know precisely the time of emission of a photon, or the duration of a process creating the photon. The approach we explored in this study aims to calculate the spectral distributions \(c(\omega)\) of photons with greater precision and efficiency, potentially improving the measurements above. ### Quantum field theory Dyson decoupled the nested integrals in higher-order TDSE (Eq. 4) by introducing a time-ordering operator that places all operators in order of increasing time from right to left. Then, TDSE can be written as a complex exponential, \[\begin{split} U_{fi}&=1-i\int_{i}^{f}dtV_{I}(t)+ \frac{(-i)^{2}}{2}\int_{i}^{f}dt_{1}\int_{i}^{t_{1}}dt_{2}V_{I}(t_{1})V_{I}(t_{ 2})+\mathcal{O}(V_{I}^{3})...\\ &=1-i\int_{i}^{f}dtV_{I}(t)+\frac{(-i)^{2}}{2}\int_{i}^{f}dt_{1} \int_{i}^{f}dt_{2}T\{V_{I}(t_{1})V_{I}(t_{2})\}+\mathcal{O}(V_{I}^{3})...\\ &=T\Big{\{}\exp\Big{[}-i\int_{i}^{f}dtV_{I}(t)\Big{]}\Big{\}}\end{split} \tag{41}\] where \(V_{I}\) is expressed in the Interaction Picture of Dirac. From this method the usual field theory methods for calculating field correlation functions are typically derived. In this study we accomplished decoupling in a novel way, with no appeal to time ordering. This may be a more efficient method for directly calculating higher-order correlation functions or Feynman amplitudes by using convolution. It also removes the asymptotic time assumption, because the limited time intervals are fully accounted for by the Fourier transforms. A further point is that perturbation theory applies only for small perturbations, so that the higher order terms in the Taylor expansion of the integrand can be ignored. If the recursive Fourier transform method allows for efficient calculation of higher order terms, one may be able to relax the constraint for small perturbations and allow for a broader range of potential strengths, moving out of the regime of weak coupling forces. ### Bardeen tunneling Bardeen investigated electron tunneling at a biased junction between the conductive components. He portrayed the potential on one junction side as a disturbance for electrons transitioning to the junction, writing the TDSE as: \[i\hbar\frac{d}{dt}c_{f}(t)=e^{-i\omega_{i}t}M_{if}+\omega_{f}c_{f}(t)+\sum_{k }c_{k}(t)M_{fk} \tag{42}\] \(M_{ab}\) is the matrix element of the Hamiltonian perturbation. With small tunneling current assumption, the \(c(t)\) dependent term is omitted, yielding a first-order tunneling amplitude for a certain outgoing frequency \(\omega_{f}\): [25] \[c_{f}(t)=\hbar M_{if}\frac{e^{-iE_{i}t/\hbar}-e^{-iE_{f}t/\hbar}}{(E_{f}-E_{i})}, \tag{43}\] \(E_{i}\) and \(E_{f}\) represent the initial and final electron energies or the sample and tip of an electron microscope, respectively. Electrons come in with energy \(E_{i}\), and in a junction biased at voltage \(V_{0}\), \(E_{i}\to E_{i}+eV_{0}\). Bardeen's formulation is valid under certain conditions [25]. This result is useful because it describes tunneling in terms of time rather than space. The same calculation can be executed using Eq. 
9: \[c_{f}(t)=i\hbar M_{fi}\frac{t}{2}\left(\frac{\exp{(iE\frac{t}{2\hbar})}sinc(E \frac{t}{2\hbar})}{E}*\delta(E-E_{i}-eV_{0})*\delta(E)\right)\Big{|}_{E=E_{f}} \tag{44}\] The potential (in the frequency representation) is identified as \(\tilde{V}=\delta(E)\), and the initial state is \(\hbar\omega_{z}=E_{i}+eV_{0}\). Eq. 44 is typically an intermediary step for calculating tunneling current. With increasing time, Eq. 9 converges to a \(\delta\)-function, and the contribution of the central lobe becomes predominant. Utilizing Fermi's Golden Rule, one integrates over outgoing energy modes to derive an expression for the total electron flux crossing the barrier. The recursive Fourier transform approach efficiently determines the probability amplitude for each outgoing energy mode. This method could permit a more detailed description of the energy transitions across the tunneling barrier. For instance, in tunneling across a voltage-biased barrier, the energy profile Eq. 9 correlates with the excess kinetic energy profile of an electron ensemble post-barrier-crossing. The ensemble's velocity profile could be measured. The Bardeen approach applies only to short-time tunneling and therefore calculates a transient diffractive effect. #### iv.2.1 Example: 2nd order Bardeen tunneling Although a second-order expression is not typically attempted with Bardeen's approach, in the recursive Fourier transform approach to the general TDSE we have derived a second-order result, Eqs. 24 and 25. Here we use these to extend Bardeen's result to second-order. First we determine the transfer function for second order Bardeen tunneling. Identifying the potential as constant in time, and offsetting the initial energy state by the amount of the voltage bias across the junction, \(\omega_{z}=eV_{0}/\hbar\), using Eq. 25 we obtain \[\begin{split}\tilde{H}_{bz}(\omega_{a})&=\underset{ t_{1}\to\omega_{a}}{\mathcal{F}}\Big{\{}\left(\frac{\exp{(i(\omega^{\prime}-eV_{0}/ \hbar)\frac{t_{1}}{2})}sin((\omega^{\prime}-eV_{0}/\hbar)\frac{t_{1}}{2})}{( \omega^{\prime}-eV_{0}/\hbar)}*\delta(\omega^{\prime})\right)\Big{|}_{\omega^ {\prime}=\omega_{b}}\Big{\}}\\ &=\underset{t_{1}\to\omega_{a}}{\mathcal{F}}\Big{\{}\frac{\exp{ (i(\omega_{b}-eV_{0}/\hbar)\frac{t_{1}}{2})}sin((\omega_{b}-eV_{0}/\hbar)\frac {t_{1}}{2})}{(\omega_{b}-eV_{0}/\hbar)}\Big{\}}\end{split} \tag{45}\] Note that \(\omega^{\prime}\) is the convolution parameter. Performing the Fourier transform results in \[\tilde{H}_{bz}(\omega_{a})=\frac{1}{2\pi i(\omega_{b}-eV_{0}/\hbar)}\Big{(}\delta (\omega_{a}+(\omega_{b}-eV_{0}/\hbar))-\delta(\omega_{a})\Big{)}. \tag{46}\] Following the steps in sec. IV.1, we write \[\delta(\omega_{a}-\omega_{b})*\tilde{H}_{bz}(\omega_{a})=\frac{1}{2\pi i( \omega_{b}-eV_{0}/\hbar)}\Big{(}\delta(\omega_{a}-eV_{0}/\hbar)-\delta(\omega_ {a}-\omega_{b})\Big{)} \tag{47}\] so using Eq. 24 the 2nd order Bardeen tunneling amplitude is \[\begin{split} c^{(2)}(\omega_{a};\Delta t_{0})=2\pi i\Delta t_{0 }V_{ab}V_{bz}&\,e^{i(\omega_{a}-\frac{eV_{0}}{\hbar})\frac{ \Delta t_{0}}{2}}sinc\left(\omega_{a}\frac{\Delta t_{0}}{2}\right)\\ &*\sum_{b}\frac{1}{(\omega_{b}-\frac{eV_{0}}{\hbar})}\Bigg{(} \delta(\omega_{a}-\frac{eV_{0}}{\hbar})-\delta(\omega_{a}-\omega_{b})\Bigg{)},\end{split} \tag{48}\] where \(V_{ij}\) denotes the matrix elements of the potential operator. Eq. 48 is a complex \(sinc\) function depending on the final energy, \(\omega_{a}\), convolved with a sum of \(\delta\)-functions over the index \(b\). 
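The structure of Eq. 48 is easy to visualize numerically. The following minimal sketch (illustrative only, not code from this work) builds the complex \(sinc\) envelope on a frequency grid and convolves it with a discrete spike train standing in for the \(\delta\)-functions; the grid, the bias \(eV_{0}/\hbar\), the intermediate levels \(\omega_{b}\), and the dropped prefactor \(2\pi i\Delta t_{0}V_{ab}V_{bz}\) are all assumed values chosen for the illustration.

```python
import numpy as np

# Illustrative sketch of the shape of Eq. 48 (assumed parameters, prefactor dropped).
dt0 = 4.0                                  # elapsed time Delta t_0 (arbitrary units)
eV0_hbar = 2.0                             # bias eV_0 / hbar (arbitrary)
omega = np.linspace(-20.0, 20.0, 4001)     # grid for the final frequency omega_a

def csinc(x):
    """sinc(x) = sin(x)/x with the x = 0 limit handled explicitly."""
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

# complex sinc envelope  e^{i(omega - eV0/hbar) dt0/2} sinc(omega dt0/2)
envelope = np.exp(1j * (omega - eV0_hbar) * dt0 / 2) * csinc(omega * dt0 / 2)

# spike train standing in for sum_b (1/(omega_b - eV0/hbar)) [delta(omega - eV0/hbar) - delta(omega - omega_b)]
omega_b = eV0_hbar + np.arange(1, 6) * 1.5     # a few fictitious intermediate levels
spikes = np.zeros_like(omega)
for wb in omega_b:
    w = 1.0 / (wb - eV0_hbar)
    spikes[np.argmin(np.abs(omega - eV0_hbar))] += w
    spikes[np.argmin(np.abs(omega - wb))] -= w

# discrete convolution: a sinc replica sits at each spike location
amplitude = np.convolve(envelope, spikes, mode="same")
print("largest |c^(2)| found near omega_a =", omega[np.argmax(np.abs(amplitude))])
```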
The \(\delta\)-functions are centered on \(\frac{eV_{0}}{\hbar}\) and equally spaced in those units, descending in amplitude as an inverse function of the index \(b\). The result of second-order Bardeen tunneling via this method is a series of \(sinc\) functions placed at regular intervals of descending amplitude from a point \(\frac{eV_{0}}{\hbar}\) in \(\omega\)-space. The second-order tunneling amplitude leads to electrons being deflected into a distribution of energies. This result is significant as a transient effect for small times only. It illustrates the usefulness of this method in extending existing methods of calculation. ### Other applications The method proposed here, based upon Fourier transforms, has a quite general form and might be used in other scenarios. The TDSE describes the diffraction of the wave function around a small temporal perturbation. One might consider application in the spatial domain, rederiving the usual single-slit diffraction formula, and then extending this to a second-order calculation. Also in the spatial domain, one might apply the RFT technique to a tunneling barrier, for instance in a scanning tunneling microscope or in alpha decay. ## VI Summary In this work, we devised a novel technique that decouples nested integrals in the Dyson series for the time-dependent Schrödinger equation (TDSE) using recursive Fourier transforms (RFT). This provides an approach which is particularly suited for computation on both classical and quantum computers. This method shares similarities with existing multi-slice or split-operator techniques, but is used to refine the accuracy of wavefunction spectra rather than propagate a wavefunction over time. The RFT approach computes the temporal diffraction of a wavefunction under a perturbing force of finite duration. It can be used, for instance, in the characterization of single photons in cases where indistinguishability is important. The decoupling of the integrals at second order is achieved by shifting to the frequency domain to obtain a nested \(sinc\) function, then _interpreting_ the nested \(sinc\) as a function of _time_ while also swapping the order of operators to perform the outer time integral before the sum over energy. This varies the width of the \(sinc\) function in the frequency domain, which can be _sampled_ at a given frequency to extract an amplitude in the frequency domain. This allows the TDSE to be expressed as a sum of (non-nested) convolutions. We anticipate that this procedure can be iterated to higher orders. ## Funding Statement This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. ## Acknowledgements The author is grateful to Jeff Butler, Richard Pham Vo, Marcin Nowakowski, Paul Borrill, Andrei Vazhnov, Stefano Gottardi, Daniel Sheehan, Joe Schindler, and Justin Kader for helpful comments and feedback. ## Conflict of Interest Statement None declared. ### Using the appropriate dual domain In systems linked by Fourier transforms, a proper domain emphasis can sometimes be overlooked. For example, the TDSE coefficients \(c(t)\) are written as functions of time to establish the time dependence of the wavefunction. However, Eq. 8 (fig. 2) represents an \(\omega\)-space distribution featuring an \(\omega\) convolution. Time does not appear directly in this expression. Instead, \(\Delta t_{0}\) is a constant that shapes the oscillatory pattern of the distribution at a given moment.
By varying \(\Delta t_{0}\), we must recompute the convolution at each time step and then sample the distribution at the point \(\omega_{a}\) to yield a meaningful transition amplitude. This requires distinguishing "integration parameters" from "coordinates", in the sense of [18]. Consider Fermi's Golden Rule: a bound state \(\omega_{i}\) transitions to a continuum state \(\omega_{f}\) under a driving frequency \(\omega_{d}\), producing the transition amplitude expressed in Eq. 11. When \(\omega_{d}\), \(\omega_{i}\), or \(\omega_{f}\) is varied, fig. 10 is useful for identifying the relevant dependencies. However, fig. 10 can also be interpreted time-wise because the \(sinc\) function depends symmetrically on time and energy. Over time, the \(sinc\) function (as a function of \(\omega_{fi}-\omega_{d}\)) becomes more peaked, and the image in fig. 10 is considered to be a snapshot in time. Thus, we interpret the amplitude as time-dependent, \(c(t)\). However, this interpretation misreads the proper domain. The amplitude \(c_{fi}\) is a frequency distribution and not a time distribution. The time dependence is implicit; evolving time means updating the entire distribution, after which we can derive the frequency-dependent amplitude at that time. These processes are distinct. The former involves only reading a number off the graph, while the latter requires repeated graphing and sampling. The former is a function, whereas the latter is a functional. The same reasoning applies to the usual kicked harmonic oscillator treatment (perturbed by a small Gaussian pulse, Section \(II\,A\)). The standard methods lead to the coefficient in Eq. 14, which is implicitly defined by the elapsed time \(\Delta t_{0}\) but explicitly a function of \(\omega\). This can help determine the best pulse duration \(\Delta t_{0}\) to match the natural oscillator frequency \(\omega\), but it overshadows the more natural \(c=c(\omega)\) dependence. Each \(\Delta t\) value is a unique experiment leading to a different distribution. When \(\Delta t\) is varied, Eq. 14 becomes a functional by creating a configuration space for each \(\Delta t\) value.
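As a schematic illustration of this distinction, the sketch below (not code from this work) uses a generic first-order amplitude of \(sinc\) form, as in Eq. 11: reading the curve at a fixed \(\Delta t_{0}\) is a function of frequency, while stepping the elapsed time forward means rebuilding the whole distribution and sampling it again at the probe frequency. All frequencies and times are arbitrary illustrative values.

```python
import numpy as np

def csinc(x):
    """sinc(x) = sin(x)/x with the x = 0 limit handled explicitly."""
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

def amplitude_distribution(omega_fi, omega_d, t):
    """Whole omega-space distribution at a fixed elapsed time t (sinc-form first-order amplitude)."""
    x = (omega_fi - omega_d) * t / 2
    return np.exp(1j * x) * csinc(x)

omega_fi = np.linspace(-10.0, 10.0, 2001)   # frequency coordinate
omega_d = 3.0                               # driving frequency (arbitrary)
omega_probe = 3.5                           # fixed frequency at which we sample

# A function of frequency: one distribution at a fixed elapsed time, read off directly.
snapshot = amplitude_distribution(omega_fi, omega_d, t=2.0)
print("peak of the fixed-time snapshot at omega_fi =", omega_fi[np.argmax(np.abs(snapshot))])

# A functional of the distribution: each new time rebuilds the whole curve,
# which is then sampled at the probe frequency.
idx = np.argmin(np.abs(omega_fi - omega_probe))
for t in (0.5, 2.0, 8.0):
    dist = amplitude_distribution(omega_fi, omega_d, t)
    print(f"t = {t:4.1f}:  |c(omega_probe)| = {abs(dist[idx]):.4f}")
```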
2306.13314
Reduced Power Graphs of $\mathrm{PGL}_3(\mathbb{F}_q)$
Given a group $G$, let us connect two non-identity elements by an edge if and only if one is a power of another. This gives a graph structure on $G$ minus identity, called the reduced power graph. In this paper, we shall find the exact number of connected components and the exact diameter of each component for the reduced power graphs of $\mathrm{PGL}_3(\mathbb{F}_q)$ for all prime power $q$.
Yilong Yang
2023-06-23T06:29:58Z
http://arxiv.org/abs/2306.13314v1
# Reduced Power Graphs of \(PGL_{3}(\mathbb{F}_{q})\) ###### Abstract Given a group \(G\), let us connect two non-identity elements by an edge if and only if one is a power of another. This gives a graph structure on \(G\) minus identity, called the reduced power graph. In this paper, we shall find the exact number of connected components and the exact diameter of each component for the reduced power graphs of \(PGL_{3}(\mathbb{F}_{q})\) for all prime power \(q\). ## 1 Introduction Given a group \(G\), the _reduced power graph_ of \(G\) has all non-identity elements of \(G\) as vertices, and two distinct vertices are connected by an edge if and only if one is a power of the other. One interesting aspect of the reduced power graph is its connectivity. In particular, Akbari and Ashrafi proposed the following conjecture in 2015. **Conjecture 1.1** (Akbari and Ashrafi [1]).: _The reduced power graph of a non-abelian simple group \(G\) is connected only if \(G\) is isomorphic to some alternating group \(A_{n}\)._ In a previous work [Yan], we showed that the conjecture above is false, and provided a description of the connected components of the reduced power graphs of \(\mathrm{PGL}_{n}(\mathbb{F}_{q})\), together with an upper bound on the diameters of these components. In this paper, we shall find the precise diameters when \(n=3\). In particular, we have the following result: **Theorem 1.2**.: _Let \(p\) be the characteristic of a finite field \(\mathbb{F}_{q}\). The reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) must fall into one of the following cases._ 1. _The reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{2})\) _has_ \(51\) _components, each with diameter_ \(1\)_._ 2. _The reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{3})\) _has_ \(321\) _components. One has diameter_ \(11\)_,_ \(312\) _components have diameter_ \(1\)_, and_ \(8\) _components have diameter_ \(2\)_._ 3. _If_ \(q\neq 2\) _is even and_ \(q-1\) _is prime, then the reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) _has_ \(1+q^{3}(q^{3}-1)(q+1)(q-3)+\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components. One has diameter_ \(13\)_,_ \(q^{3}(q^{3}-1)(q+1)(q-3)\) _components have diameter_ \(1\)_, and the other_ \(\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components have diameter_ \(1\) _if_ \(q=8\)_, and diameter_ \(2\) _otherwise._ 4. _If_ \(q\neq 2\) _is even and_ \(q-1\) _is not prime, then the reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) _has_ \(1+\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components. One has diameter_ \(10\)_, and the other_ \(\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components have diameter_ \(2\)_._ 5. _If_ \(q\neq 3\) _is odd and_ \(q-1\) _is a prime power (i.e., when_ \(q=9\) _or_ \(q\) _is a Fermat prime), then the reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) _has_ \(1+\frac{1}{p-1}(q^{3}-1)(q^{3}-q)+\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components. One has diameter_ \(12\)_,_ \(\frac{1}{p-1}(q^{3}-1)(q^{3}-q)\) _components have diameter_ \(1\)_, and the other_ \(\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components have diameter_ \(1\) _if_ \(q^{2}+q+1\) _is a prime, and diameter_ \(2\) _otherwise._ 6. _If_ \(q\neq 3\) _is odd and_ \(q-1\) _is not a prime power, then the reduced power graph of_ \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) _has_ \(1+\frac{1}{p-1}(q^{3}-1)(q^{3}-q)+\frac{1}{3}q^{3}(q^{2}-1)(q-1)\) _components. 
One has diameter_ \(8\)_, and all other components have diameter_ \(2\)_._ ## 2 Preliminary ### Projectively Reduced Power Graph and Pivot Component In this paper, we shall study a special graph on \(\mathrm{GL}_{n}(q)\), which has the reduced power graph of \(\mathrm{PGL}_{n}(q)\) as a quotient graph. **Definition 2.1**.: _Given a group \(G\), let \(Z\) be its center. Then the projectively reduced power graph of \(G\) is obtained from the reduced power graph of \(G\) by deleting vertices in \(Z\)._ **Proposition 2.2**.: _Given a group \(G\), let \(Z\) be its center. Let \(\Gamma\) be the projectively reduced power graph of \(G\), and let \(\Gamma^{\prime}\) be the reduced power graph of \(G/Z\). If \(\Gamma\) is connected, then \(\Gamma^{\prime}\) is connected with the same or smaller diameter._ Proof.: \(\Gamma^{\prime}\) is a quotient graph of \(\Gamma\). When \(q>2\) is a prime power, we say a matrix \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})\) is a _pivot matrix_ if it is diagonalizable over \(\mathbb{F}_{q}\) with exactly two distinct eigenvalues. In the projectively reduced power graph of \(\mathrm{GL}_{3}(\mathbb{F}_{q})\), the connected component containing a pivot matrix is called a _pivot component_. We call the corresponding connected component in \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) a _pivot component_ as well. In the previous work [Yan], we have shown that for \(q\neq 2\), all pivot matrices are connected in the projectively reduced power graph of \(\mathrm{GL}_{3}(\mathbb{F}_{q})\). So in particular, when \(q\neq 2\), the pivot component of \(\mathrm{GL}_{3}(\mathbb{F}_{q})\) or \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) is unique. Furthermore, the precise diameter of all non-pivot components of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) was already obtained in [Yan]. So we only need to figure out the precise diameter of the pivot component. ### Obstructions and Connections We now list some important results from [Yan] about obstructions to the connectivity of the reduced power graph, and about certain connection results. We say a matrix \(A\in\mathrm{GL}_{n}(\mathbb{F}_{q})\) is a _Jordan pivot matrix_ if \(A-I\) has rank one and \((A-I)^{2}=0\). **Proposition 2.3** (Diagonalizable obstruction).: _Suppose \(q>2\) is a power of \(2\) with \(q-1\) prime, and \(2\leq n<q\). Let \(A\) be an \(n\times n\) diagonalizable matrix with \(n\) distinct non-zero eigenvalues over \(\mathbb{F}_{q}\). Then the image of \(A\) in the reduced power graph of \(\mathrm{PGL}_{n}(q)\) is trapped in a connected component with \(q-2\) vertices and diameter \(1\), made of images of non-identity powers of \(A\)._ **Proposition 2.4** (Irreducible obstruction).: _Let \(n\) be prime and \(A\) be a matrix over \(\mathbb{F}_{q}\) whose characteristic polynomial is irreducible. Then the image of \(A\) in the reduced power graph of \(\mathrm{PGL}_{n}(q)\) is trapped in a connected component with \((q^{n}-q)/(q-1)\) vertices, made of images of polynomials of \(A\) that are not multiples of identity. The diameter of this component is at most \(2\)._ **Proposition 2.5** (Jordan-type obstruction).: _Consider a finite field \(\mathbb{F}_{q}\) with characteristic \(p\). If \(2\leq n\leq p\), let \(A\) be any matrix over \(\mathbb{F}_{q}\) similar to the \(n\times n\) matrix \(\left[\begin{matrix}\lambda&1&&\\ &\ddots&\ddots&\\ &&\ddots&1\\ &&&\lambda\end{matrix}\right]\) for some \(\lambda\in\mathbb{F}_{q}^{*}\). 
Then the image of \(A\) in the reduced power graph of \(\mathrm{PGL}_{n}(q)\) is trapped in a connected component with \(p-1\) vertices and diameter \(1\), made of images of powers of \(A\) that are not multiples of identity._ **Proposition 2.6** (Quasi-diagonalizable obstruction when \(q=2\)).: _Suppose we have a prime \(p_{0}=2^{p_{1}}-1\) (i.e., Mersenne prime) for some positive integer \(p_{1}\). Suppose \(n\leq p_{0}\). Let \(A\) be an \(n\times n\) non-identity matrix such that \(A^{p_{0}}=I\), and its minimal polynomial equal to its characteristic polynomial. Then the \(A\) in the reduced power graph of \(\mathrm{GL}_{n}(2)\) is trapped in a connected component with \(p_{0}-1\) vertices and diameter \(1\), made of non-identity powers of \(A\)._ _Note that in this case, \(p_{1}\) must be a prime factor of \(n\) or \(n-1\)._ **Proposition 2.7** (Extra irredicible obstruction when \(q=2\)).: _Let \(n-1\) be prime and \(A\) be an \(n\times n\) matrix over \(\mathbb{F}_{2}\) whose characteristic polynomial has an irreducible factor of degree \(n-1\). Then the image of \(A\) in the reduced power graph of \(\mathrm{PGL}_{n}(2)\) is trapped in a connected component with \(2^{n-1}-2\) vertices, made of non-identity powers of a matrix \(C\in\mathrm{PGL}_{n}(2)\). The diameter of this component is at most \(2\)._ **Proposition 2.8**.: _For \(q\neq 2\) and \(n=3\), any two pivot matrices in the projectively reduced power graph of \(\operatorname{GL}_{n}(\mathbb{F}_{q})\) will have distance at most \(8\)._ **Proposition 2.9**.: _For \(q\neq 2\) and \(n\geq 3\) or for \(q=2\) and \(n\geq 4\), any two Jordan pivot matrices in the projectively reduced power graph of \(\operatorname{GL}_{n}(\mathbb{F}_{q})\) will have distance at most \(8\)._ ### Other useful results We now list some important results that will be useful here. Proofs and references will be found in [Yan]. **Lemma 2.10**.: _If \(q,q-1\) are both prime powers, then either \(q\) is Fermat prime, or \(q-1\) is a Mersenne prime, or \(q=9\)._ **Corollary 2.11**.: _Let \(q\) be a power of a prime \(p\), and let \(n\geq 1\). Let \(p_{0}\) be a prime factor of \(q-1\). If any \(n\times n\) matrix \(A\) over \(\mathbb{F}_{q}\) has multiplicative order coprime to \(p_{0}\), then \(A\) has a \(p_{0}\)-th root whose multiplicative order is a multiple of \(p_{0}\)._ **Lemma 2.12**.: _Let \(q\) be a power of a prime \(p\), and let \(n\geq 2\). If a matrix \(A\in\operatorname{GL}_{n}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_ {q}))\) has projective order coprime to \(p\), then its generalized Jordan canonical form is a block diagonal matrix where each diagonal block is either \(1\times 1\) or some companion matrix of an irreducible polynomial._ **Lemma 2.13**.: _Suppose the characteristic polynomial of a matrix \(C\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) is irreducible, and the multiplicative order of \(C\) is \(k\). For any positive integer \(t\), the minimal polynomial of \(C^{t}\) is irreducible. For any factor \(t\) of \(k\), if \(\frac{k}{t}\) is a factor of \(q^{m}-1\), then the minimal polynomial of \(C^{t}\) is irreducible with degree at most \(m\)._ **Corollary 2.14**.: _Suppose the characteristic polynomial of a matrix \(C\in\operatorname{GL}_{n}(\mathbb{F}_{q})\) is irreducible, and the multiplicative order of \(C\) is \(k\). 
For any factor \(t\) of \(k\), \(\frac{k}{t}\) is a factor of \(q-1\) if and only if \(C^{t}\) is a multiple of identity._ ## 3 Classification and Connections By standard enumeration, here is a list of possible types of generalized Jordan canonical forms for matrices in \(\operatorname{GL}_{3}(\mathbb{F}_{q})\). 1. (Multiples of identity) \(A\) is a multiple of identity. These are excluded in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\). 2. (Pivot matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&&\\ &a&\\ &&b\end{bmatrix}\) with \(a\neq b\). (Only possible if \(q\neq 2\).) 3. (LLL matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&&\\ &b&\\ &&c\end{bmatrix}\) where all eigenvalues are distinct. (Only possible if \(q\neq 2,3\).) We call them LLL matrices because the most important invariant subspaces for this kind of matrix are the three eigenlines. By Proposition 2.3, when \(q-1\) is an odd prime, images of these matrices form isolated connected components in the reduced power graphs of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), and each component has diameter at most \(1\). 4. (Irreducible Matrices) The canonical form of \(A\) is the companion matrix to an irreducible polynomial of degree three over \(\mathbb{F}_{q}\). By Proposition 2.4, images of these matrices form isolated connected components in the reduced power graphs of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), and each component has diameter at most \(2\). 5. (LP matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&&\\ &C\end{bmatrix}\) where \(C\) is the companion matrix to an irreducible polynomial of degree two over \(\mathbb{F}_{q}\). We call them LP matrices because the most important invariant subspaces for this matrix are the eigenline and an invariant plane. By Proposition 2.6 or Proposition 2.7, when \(q=2\), images of these matrices form isolated connected components in the reduced power graphs of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), and each component has diameter at most \(1\). 6. (Jordan pivot matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&&1\\ &a&\\ &&a\end{bmatrix}\). 7. (LLP matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&&\\ &b&1\\ &&b\end{bmatrix}\). (Only possible if \(q\neq 2\).) We call them LLP matrices because the most important invariant subspaces for this matrix are the two eigenlines and an invariant plane. 8. (NPJ matrices) The canonical form of \(A\) is \(\begin{bmatrix}a&1\\ &a&1\\ &&a\end{bmatrix}\). We call them NPJ matrices because they are non-pivot Jordan matrices. When \(q\) is odd, by Proposition 2.5, images of these matrices form isolated connected components in the reduced power graphs of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), and each component has diameter at most \(1\). When \(q\) is even, the square of this matrix is a multiple of a Jordan pivot matrix. **Definition 3.1**.: _For any prime power \(q\), we say the Jordan type of a matrix \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})\) is identity, pivot, LLL, irreducible, LP, Jordan pivot, LLP, or NPJ if its generalized Jordan normal form has the corresponding form classified as above._ We want to analyze the pivot components, which contain pivot matrices, Jordan pivot matrices, LLL matrices, LP matrices, and (when \(q\) is even) NPJ matrices. Now, by calculating potential powers, we can obtain the connection diagrams below. We use solid arrows to indicate that some matrices of a certain type have a power in another type. 
If the arrow has the word "char 2", then the connection is only possible when \(q\) is even. Figure 1: When \(q-1\) is not a prime power. Figure 2: When \(q=9\) or \(q\) is a Fermat prime, \(q\neq 3\). Figure 3: When \(q\) is even and \(q-1\) is prime. Figure 4: When \(q=3\). Figure 5: When \(q=2\). Proof.: Check the diagrams above one by one. For this purpose, we make the following definition. **Definition 3.3**.: _For any prime power \(q\), if \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})\) is similar to \(\begin{bmatrix}a&\\ &C\end{bmatrix}\) for some \(a\in\mathbb{F}_{q}^{*}\) and invertible \(2\times 2\) matrix \(C\), then we say \(A\) is decomposable._ Note that decomposable matrices are exactly the LLL matrices, pivot matrices, LP matrices, LLP matrices, and Jordan pivot matrices. ## 4 Decomposable Matrices to Pivot Matrices **Lemma 4.1**.: _Let \(q\neq 2\) be a power of a prime \(p\), and pick any factor \(p_{0}\) of \(q-1\). Suppose a decomposable matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_{q}))\) has projective order coprime to \(p_{0}\). Then a power of a root of a scalar multiple of \(A\) is a pivot matrix with multiplicative order \(p_{0}\)._ Proof.: Suppose \(A\) has projective order \(k\). If \(A\) is similar to \(\begin{bmatrix}a&\\ &C\end{bmatrix}\) for some \(a\in\mathbb{F}_{q}^{*}\) and invertible \(2\times 2\) matrix \(C\), consider \(a^{-1}A\), which is similar to \(\begin{bmatrix}1&\\ &a^{-1}C\end{bmatrix}\). Its multiplicative order must be \(k\). So \(a^{-1}C\) has multiplicative order \(k\). Since \(k\) is coprime to \(p_{0}\), by Corollary 2.11, \(a^{-1}C\) has a \(p_{0}\)-th root \(C^{\prime}\) whose multiplicative order is a multiple of \(p_{0}\). Let \(x\in\mathbb{F}_{q}^{*}\) be any element with multiplicative order \(p_{0}\). Then since \((C^{\prime})^{kp_{0}}\) must be identity and \(p_{0}\) is coprime to \(p\), therefore \((C^{\prime})^{k}\) must be diagonalizable with eigenvalues \(x^{s_{1}},x^{s_{2}}\) for some positive integers \(s_{1},s_{2}\). Note that \(k\) is coprime to \(p_{0}\). So we can find a positive integer \(k^{\prime}\) such that \(kk^{\prime}\) is \(1\) modulo \(p_{0}\). Suppose \(s_{1}-s_{2}\) is a multiple of \(p_{0}\). So since \(a^{-1}A\) is similar to \(\begin{bmatrix}1&\\ &a^{-1}C\end{bmatrix}\), therefore \(a^{-1}A\) has a \(p_{0}\)-th root similar to \(\begin{bmatrix}1&\\ &x^{k^{\prime}(1-s_{1})}C^{\prime}\end{bmatrix}\), whose \(k\)-th power is similar to \(\begin{bmatrix}1&&\\ &x&\\ &&x\end{bmatrix}\), a pivot matrix with multiplicative order \(p_{0}\). Suppose \(s_{1}-s_{2}\) is not a multiple of \(p_{0}\). So since \(a^{-1}A\) is similar to \(\begin{bmatrix}1&\\ &a^{-1}C\end{bmatrix}\), therefore \(a^{-1}A\) has a \(p_{0}\)-th root similar to \(\begin{bmatrix}1&\\ &x^{-k^{\prime}s_{1}}C^{\prime}\end{bmatrix}\), whose \(k\)-th power is similar to \(\begin{bmatrix}1&&\\ &1&\\ &&x^{s_{2}-s_{1}}\end{bmatrix}\), a pivot matrix with multiplicative order \(p_{0}\). **Lemma 4.2**.: _If \(q\) is a power of an odd prime \(p\). 
Suppose a decomposable matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_ {q}))\) has even projective order. Then a power of a scalar multiple of \(A\) is a pivot matrix with multiplicative order \(2\)._ Proof.: Suppose \(A\) has projective order \(k\). If \(A\) is similar to \(\begin{bmatrix}a\\ &C\end{bmatrix}\) for some \(a\in\mathbb{F}_{q}^{*}\) and invertible \(2\times 2\) matrix \(C\), consider \(a^{-1}A\), which is similar to \(\begin{bmatrix}1\\ &a^{-1}C\end{bmatrix}\). Its multiplicative order must be \(k\). So \(a^{-1}C\) has multiplicative order \(k\). Now \(k\) is even. Therefore, consider \((a^{-1}A)^{\frac{k}{2}}\), which has multiplicative order \(2\), and hence it must be diagonalizable over \(\mathbb{F}_{q}\), and it must have eigenvalues \(1,\pm 1,\pm 1\). Since it cannot be identity, \(A^{\frac{k}{2}}\) is similar to \(\begin{bmatrix}1\\ &-1\end{bmatrix}\) or \(\begin{bmatrix}1\\ &1\\ &-1\end{bmatrix}\). So we are done. **Corollary 4.3**.: _If \(q\) is a power of an odd prime \(p\). Then for any matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_ {q}))\), in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), either the image of \(A\) has distance at most \(2\) to the image of a pivot matrix with multiplicative order \(2\), or the image of \(A\) has no path to the image of any pivot matrix._ Proof.: Note that the image of \(A\) has a path to the image of any pivot matrix if and only if \(A\) is not as described by Proposition 2.5 or Proposition 2.4. By going through all possible generalized Jordan canonical form, \(A\) is not as described by Proposition 2.5 or Proposition 2.4 if and only if \(A\) is decomposable. If \(A\) has even projective order, then this distance is at most \(1\) by Lemma 4.2. If \(A\) has odd projective order, then this distance is at most \(2\) by Lemma 4.1. **Lemma 4.4**.: _If \(q\neq 2\) is a power of \(2\). Fix any prime factor \(p_{0}\) of \(q-1\). Suppose the projective order of a decomposable matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_ {q}))\) is not a power of \(p_{0}\), then in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has distance at most \(3\) to the image of a pivot matrix with multiplicative order \(p_{0}\)._ Proof.: Suppose \(A\) has projective order \(k\). Since \(k\) is not a power of \(p_{0}\), we can find a prime factor \(p_{1}\) of \(k\) distinct from \(p_{0}\). Then \(A^{\prime}=A^{\frac{k}{p_{1}}}\) will have projective order \(p_{1}\). Also note that \(A^{\prime}\) is a power of a decomposable matrix, hence it is decomposable itself. By Lemma 4.1, in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A^{\prime}\) has distance at most \(2\) to the image of a pivot matrix with multiplicative order \(p_{0}\). So the image of \(A\) has distance at most \(3\) to the image of a pivot matrix with multiplicative order \(p_{0}\). **Lemma 4.5**.: _If \(q\neq 2\) is a power of \(2\). Fix any prime factor \(p_{0}\) of \(q-1\). 
Suppose a matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})-Z(\operatorname{GL}_{n}(\mathbb{F}_{q}))\) is similar to \(\begin{bmatrix}a&1\\ &a&1\\ &&a\end{bmatrix}\) for some \(a\in\mathbb{F}_{q}^{*}\), then in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has distance at most \(3\) to the image of a pivot matrix with multiplicative order \(p_{0}\)._ Proof.: Note that \(A^{\prime}=(a^{-1}A)^{2}\) is similar to \(\begin{bmatrix}1&&1\\ &1&\\ &&1\end{bmatrix}\), which is a decomposable matrix with multiplicative order \(2\). By Lemma 4.1, in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A^{\prime}\) has distance at most \(2\) to the image of a pivot matrix with multiplicative order \(p_{0}\). So the image of \(A\) has distance at most \(3\) to the image of a pivot matrix with multiplicative order \(p_{0}\). **Lemma 4.6**.: _If \(q\neq 2\) is a power of \(2\) and \(q-1\) is not prime. Fix any prime factor \(p_{0}\) of \(q-1\). Suppose the projective order of a decomposable matrix \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})-Z(\mathrm{GL}_{n}(\mathbb{F}_{q}))\) is a power of \(p_{0}\), then in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has distance at most \(4\) to the image of a pivot matrix with multiplicative order \(p_{0}\)._ Proof.: Since \(q\) is even and \(q-1\) is not prime, by Lemma 2.10, \(q-1\) is not a prime power. So we can find a prime factor \(p_{1}\) of \(q-1\) distinct from \(p_{0}\). If the projective order of \(A\) is a power of \(p_{0}\), then it is coprime to \(p_{1}\). By Lemma 4.1, in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has distance at most \(2\) to the image of a pivot matrix with multiplicative order \(p_{1}\). However, this pivot matrix has multiplicative order coprime to \(p_{0}\). So again by Lemma 4.1, the image of this pivot matrix has distance at most \(2\) to the image of a pivot matrix with multiplicative order \(p_{0}\). So all in all, the image of \(A\) has distance at most \(4\) to the image of a pivot matrix with multiplicative order \(p_{0}\). **Lemma 4.7**.: _If \(q\neq 2\) is a power of \(2\) and \(p_{0}=q-1\) is a prime. Suppose the projective order of a matrix \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})-Z(\mathrm{GL}_{n}(\mathbb{F}_{q}))\) is a power of \(p_{0}\), then \(A\) is diagonalizable over \(\mathbb{F}_{q}\) or its characteristic polynomial is irreducible. In particular, either \(A\) is a pivot matrix itself, or \(A\) is as described in Proposition 2.3 or Proposition 2.4._ Proof.: Since the projective order of \(A\) is a power of \(p_{0}\), which is odd, by Lemma 2.12, the generalized Jordan canonical form of \(A\) is block diagonal with companion matrices of irreducible polynomials as diagonal blocks. If \(C\) is one of these blocks and it is \(m\times m\) for some \(m>1\), then by Corollary 2.14, its multiplicative order must divide \(q^{m}-1\), but cannot divide \(q-1\). Hence if the projective order of a matrix \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})-Z(\mathrm{GL}_{n}(\mathbb{F}_{q}))\) is a power of \(p_{0}\), then \(q^{m}-1\) must be a multiple of \(p_{0}^{2}\). Since \(q\) is \(1\) modulo the odd prime \(p_{0}\), clearly \(q+1\) cannot be a multiple of \(p_{0}\). So \(q^{2}-1\) is not a multiple of \(p_{0}^{2}\). So \(m\neq 2\). Hence all diagonal blocks in the generalized Jordan canonical form of \(A\) must be \(1\times 1\) or \(3\times 3\). 
If one block is \(3\times 3\), then the characteristic polynomial of \(A\) is irreducible. If all blocks are \(1\times 1\), then \(A\) is diagonalizable over \(\mathbb{F}_{q}\). ## 5 Upper Bounds when \(q-1\) is not a prime power Let \(q\neq 2\) be a power of a prime \(p\). Suppose \(q-1\) is not a prime power. Fix any prime factor \(p_{0}\) of \(q-1\). Our goal here is to find a short path (distance at most \(4\)) in the projectively reduced power graph of \(\mathrm{GL}_{3}(\mathbb{F}_{q})\) between any two pivot matrices whose multiplicative order is \(p_{0}\). Pick any \(b\in\mathbb{F}_{q}\) such that \(b\neq 0,1\). In the group \(\mathrm{GL}_{3}(\mathbb{F}_{q})\), let \(S_{1},S_{2},S_{3}\) be the centralizers of \(\begin{bmatrix}1&&\\ &b&\\ &&b\end{bmatrix},\begin{bmatrix}b&&\\ &1&\\ &&b\end{bmatrix},\begin{bmatrix}b&&\\ &b&\\ &&1\end{bmatrix}\) respectively. Let \(S_{J}\) be the centralizer of \(\begin{bmatrix}1&&\\ &1&1\\ &&1\end{bmatrix}\). Finally, let \(S=S_{2}\cup S_{3}\cup S_{J}\). **Lemma 5.1**.: _For any \(X\in\mathrm{GL}_{3}(\mathbb{F}_{q})\), we have \(X=X_{1}X_{2}X_{3}\) for some matrices \(X_{1},X_{3}\in S_{1}\) and \(X_{2}\in S\)._ Proof.: Suppose \(X=\begin{bmatrix}a&\mathbf{v}^{\mathrm{T}}\\ \mathbf{w}&B\end{bmatrix}\) for \(\mathbf{v},\mathbf{w}\in\mathbb{F}_{q}^{2}\) and a \(2\times 2\) matrix \(B\) over \(\mathbb{F}_{q}\). Suppose \(B\) is not invertible. Since \(X\) is invertible, we cannot have \(B=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\). Hence \(B\) has rank \(1\), and we can find \(B=B_{1}\begin{bmatrix}1&0\\ 0&0\end{bmatrix}B_{2}\) for some invertible \(B_{1},B_{2}\in\operatorname{GL}_{2}(\mathbb{F}_{q})\). Then \[X=\begin{bmatrix}1&\\ &B_{1}\end{bmatrix}\begin{bmatrix}a&x&y\\ c&1&0\\ d&0&0\end{bmatrix}\begin{bmatrix}1&\\ &B_{2}\end{bmatrix}.\] Since \(X\) is invertible, we must have \(d,y\neq 0\). So we have \[X=\begin{bmatrix}1&\\ &B_{1}\end{bmatrix}\begin{bmatrix}1&&\\ &1&\frac{c}{d}\\ &&1\end{bmatrix}\begin{bmatrix}a&0&y\\ 0&1&0\\ d&0&0\end{bmatrix}\begin{bmatrix}1&&\\ &1&\\ &\frac{x}{y}&1\end{bmatrix}\begin{bmatrix}1&\\ &B_{2}\end{bmatrix}\in S_{1}S_{2}S_{1}\subseteq S_{1}SS_{1}.\] Suppose \(\boldsymbol{v}=\boldsymbol{w}=\boldsymbol{0}\). Then \(X\in S_{1}\subseteq S_{1}SS_{1}\). Suppose \(\boldsymbol{v}=\boldsymbol{0}\) but \(\boldsymbol{w}\neq\boldsymbol{0}\). Then \(B\) must be invertible. Pick \(Y\in\operatorname{GL}_{2}(\mathbb{F}_{q})\) such that \(BY^{-1}\begin{bmatrix}1\\ 0\end{bmatrix}=\boldsymbol{w}\). Then \[X=\begin{bmatrix}1&\\ &BY^{-1}\end{bmatrix}\begin{bmatrix}a&&\\ 1&1&\\ &&1\end{bmatrix}\begin{bmatrix}1&\\ &Y\end{bmatrix}\in S_{1}S_{3}S_{1}\subseteq S_{1}SS_{1}.\] Similarly, if \(\boldsymbol{w}=\boldsymbol{0}\) but \(\boldsymbol{v}\neq\boldsymbol{0}\), then again \(X\in S_{1}S_{3}S_{1}\subseteq S_{1}SS_{1}\). From now on, we assume that \(B\) is invertible and \(\boldsymbol{v},\boldsymbol{w}\neq\boldsymbol{0}\). Pick \(Y\in\operatorname{GL}_{2}(\mathbb{F}_{q})\) such that \(BY^{-1}\begin{bmatrix}1\\ 0\end{bmatrix}=\boldsymbol{w}\). Suppose \(\boldsymbol{v}^{\mathrm{T}}Y^{-1}=\begin{bmatrix}x&y\end{bmatrix}\). Then \[X=\begin{bmatrix}1&\\ &BY^{-1}\end{bmatrix}\begin{bmatrix}a&x&y\\ 1&1&\\ &&1\end{bmatrix}\begin{bmatrix}1&\\ &Y\end{bmatrix}.\] Here, \(x\) is actually determined by \(\boldsymbol{v},\boldsymbol{w},B\) alone. 
Indeed we have \[x=\begin{bmatrix}x&y\end{bmatrix}\begin{bmatrix}1\\ 0\end{bmatrix}=\boldsymbol{v}^{\mathrm{T}}Y^{-1}YB^{-1}\boldsymbol{w}= \boldsymbol{v}^{\mathrm{T}}B^{-1}\boldsymbol{w}.\] If \(\boldsymbol{v}^{\mathrm{T}}B^{-1}\boldsymbol{w}\neq 0\), then we further have \[X=\begin{bmatrix}1&\\ &BY^{-1}\end{bmatrix}\begin{bmatrix}1&\\ &1&-\frac{y}{x}\\ &1\end{bmatrix}\begin{bmatrix}a&x&0\\ 1&1\\ &&1\end{bmatrix}\begin{bmatrix}1&\\ &1&\frac{y}{x}\\ &1\end{bmatrix}\begin{bmatrix}1&\\ &Y\end{bmatrix}\in S_{1}S_{3}S_{1}\subseteq S_{1}SS_{1}.\] Now suppose \(\boldsymbol{v}^{\mathrm{T}}B^{-1}\boldsymbol{w}=0\). Then we have \[X=\begin{bmatrix}1&\\ &BY^{-1}\end{bmatrix}\begin{bmatrix}a&0&y\\ 1&1&\\ &1\end{bmatrix}\begin{bmatrix}1&\\ &Y\end{bmatrix}\in S_{1}S_{J}S_{1}\subseteq S_{1}SS_{1}.\] **Corollary 5.2**.: _Let \(q\neq 2\) be any prime power such that \(q-1\) is not a prime power. Let \(p_{0}\) be any prime factor of \(q-1\). Then in the projectively reduced power graph of \(\operatorname{GL}_{3}(\mathbb{F}_{q})\), any two pivot matrices with multiplicative order \(p_{0}\) has distance at most \(4\)._ Proof.: Suppose \(A,B\) are two pivot matrices in \(\operatorname{GL}_{3}(\mathbb{F}_{q})\) with multiplicative order \(p_{0}\). Say \(A=X_{A}\begin{bmatrix}a_{1}&\\ &a_{2}\\ &a_{2}\end{bmatrix}X_{A}^{-1}\) and \(B=X_{B}\begin{bmatrix}b_{1}&\\ &b_{2}\\ &b_{2}\end{bmatrix}X_{B}^{-1}\). Let \(X=X_{A}^{-1}X_{B}\). By Lemma 5.1, \(X=X_{1}X_{2}X_{3}\) for some matrices \(X_{1},X_{3}\in S_{1}\) and \(X_{2}\in S\). Note that \(\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix},\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}\) both have centralizer \(S_{1}\). Since \(q-1\) is not a prime power, we can find a prime factor \(p_{1}\) of \(q-1\) different from \(p_{0}\). Let \(x\in\mathbb{F}_{q}^{*}\) be any element with multiplicative order \(p_{1}\). If \(X_{2}\in S_{2}\), \(S_{3}\) or \(S_{J}\), then we set \(D=\begin{bmatrix}x&\\ &1&\\ &x\end{bmatrix},\begin{bmatrix}x&\\ &x&\\ &1\end{bmatrix}\) or \(\begin{bmatrix}1&&1\\ &1&\\ &1\end{bmatrix}\) respectively. Then \(D\) has multiplicative order \(p_{1}\) or \(p\), which is coprime to \(p_{0}\) either way. Furthermore, we always have \(\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}D=D\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}\) and \(\begin{bmatrix}b_{1}&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}D=D\begin{bmatrix}b_{1}&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}\). Consequently, \(\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}\) and \(D\) are both powers of \(\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}D\), and in the same manner, \(\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}\) and \(D\) are both powers of \(\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}D\). Therefore, in the projectively reduced power graph of \(\operatorname{GL}_{3}(\mathbb{F}_{q})\), \(\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}\) and \(\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}\) both have distance at most \(2\) to \(D\). 
So we have path \[A=X_{A}\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}X_{A}^{-1}=X_{A}X_{1}\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{2}\end{bmatrix}X_{1}^{-1}X_{A}^{-1}\] \[\rightarrow X_{A}X_{1}DX_{1}^{-1}X_{A}^{-1}=X_{A}X_{1}X_{2}DX_{2}^{-1}X_{1} ^{-1}X_{A}^{-1}\] \[\rightarrow X_{A}X_{1}X_{2}\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}X_{2}^{-1}X_{1}^{-1}X_{A}^{-1}\] \[= X_{A}X_{1}X_{2}X_{3}\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}X_{3}^{-1}X_{2}^{-1}X_{1}^{-1}X_{A}^{-1}\] \[= X_{A}X\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}X^{-1}_{A}X_{A}^{-1}=X_{B}\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{2}\end{bmatrix}X_{B}^{-1}=B.\] Here each arrow means distance at most \(2\). So \(A,B\) have distance at most \(4\) between them. **Proposition 5.3**.: _If \(q\) is an odd prime power, and \(q-1\) is not a prime power, then in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the pivot component has diameter at most \(8\)._ Proof.: Suppose \(A,B\) are any two matrices in the pivot component. By Figure 1, all matrices in the pivot component are decomposable. Since \(q\) is odd, by Corollary 4.3, in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), images \(A,B\) will respectively have distance at most \(2\) to images of some pivot matrices \(A^{\prime},B^{\prime}\) with multiplicative order \(2\). Then by Corollary 5.2, images of \(A^{\prime},B^{\prime}\) have distance at most \(4\) between them. So in total, \(A,B\) have distance at most \(2+2+4=8\) between them. **Proposition 5.4**.: _If \(q\) is power of \(2\), and \(q-1\) is not a prime, then in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the pivot component has diameter at most \(10\)._ Proof.: Suppose \(A,B\) are any two matrices in the pivot component. Then by Lemma 3.2, each of them is either decomposable, or an NPJ matrix. Let \(p_{0},p_{1}\) be any two distinct prime factors of \(q-1\). Suppose \(A,B\) are both not a decomposable matrix whose projective order is a power of \(p_{i}\). Then by Lemma 4.4 and Lemma 4.5, in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the images of \(A\) and \(B\) will respectively have distance at most \(3\) to the images of some pivot matrices \(A^{\prime}\) and \(B^{\prime}\) with multiplicative order \(p_{i}\). Then by Corollary 5.2, the distance between images of \(A^{\prime},B^{\prime}\) is at most \(4\). Therefore the total distance between images of \(A,B\) is at most \(3+3+4=10\). If the above assumption does not happen, then one of \(A,B\) is a decomposable matrix whose projective order is a power of \(p_{0}\), and the other is a decomposable matrix whose projective order is a power of \(p_{1}\). Say \(A\) has projective order a power of \(p_{0}\) and \(B\) has projective order a power of \(p_{1}\). Then by Lemma 4.6, the image of \(A\) has distance at most \(4\) to the image of a pivot matrix \(A^{\prime}\) with multiplicative order \(p_{0}\). By Lemma 4.1, the image of \(B\) has distance at most \(2\) to the image of a pivot matrix \(B^{\prime}\) with multiplicative order \(p_{0}\). Then by Corollary 5.2, the distance between images of \(A^{\prime},B^{\prime}\) is at most \(4\). Therefore the total distance between images of \(A,B\) is at most \(4+2+4=10\). ## 6 Upper bound when \(q-1\) is a prime power **Proposition 6.1**.: _Let \(q\) be a Fermat prime or \(q=9\). 
Then the pivot component of the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) has diameter at most \(12\)._ Proof.: Suppose \(A,B\) are any two matrices in the pivot component. By Figure 2 and Figure 4, all matrices in the pivot component are decomposable. Since \(q\) is odd, by Corollary 4.3, in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the images of \(A,B\) will respectively have distance at most \(2\) to the images of some pivot matrices \(A^{\prime},B^{\prime}\) with multiplicative order \(2\). Then by Proposition 2.8, images of \(A^{\prime},B^{\prime}\) have distance at most \(8\) between them. So in total, \(A,B\) have distance at most \(2+2+8=12\) between them. **Proposition 6.2**.: _Let \(q\neq 2\) be a power of \(2\) such that \(q-1\) is prime. Then the pivot component of the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) has diameter at most \(13\)._ Proof.: Suppose \(A,B\) are any two matrices in the pivot component. By Figure 3, all matrices in the pivot component are decomposable or NPJ matrices, and they cannot be LLL matrices. Suppose \(A\) is not an NPJ matrix. By Lemma 4.7, either \(A\) is a pivot matrix, or the projective order of \(A\) is not a power of the prime \(q-1\). Then by Lemma 4.4, the image of \(A\) has distance at most \(2\) to the image of a pivot matrix \(A^{\prime}\). Now, if \(B\) is also not an NPJ matrix, then similarly the image of \(B\) has distance at most \(2\) to the image of a pivot matrix \(B^{\prime}\). If \(B\) is an NPJ matrix, by Lemma 4.5, the image of \(B\) has distance at most \(3\) to the image of a pivot matrix \(B^{\prime}\). By Proposition 2.8, images of \(A^{\prime},B^{\prime}\) have distance at most \(8\) between them. So in total, images of \(A,B\) have distance at most \(2+3+8=13\) between them. Similarly, if \(A\) is an NPJ matrix but \(B\) is not, images of \(A,B\) have distance at most \(3+2+8=13\) between them. Finally, suppose \(A,B\) are both NPJ matrices. Then \(A^{2}\) and \(B^{2}\) will be Jordan pivot matrices. By Proposition 2.9, images of \(A^{2},B^{2}\) have distance at most \(8\) between them. So in total, images of \(A,B\) have distance at most \(1+1+8=10\) between them. All in all, images of \(A,B\) have distance at most \(13\) between them. **Proposition 6.3**.: _The pivot component of the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{3})\) has diameter at most \(11\)._ Proof.: Suppose \(A,B\) are any two matrices in the pivot component. If both matrices are Jordan pivot matrices, by Proposition 2.9, images of \(A,B\) have distance at most \(8\) between them. Now WLOG suppose \(A\) is not a Jordan pivot matrix. Then by Figure 4, \(A\) is an LP matrix or an LLP matrix. If \(A\) is an LP matrix, then its multiplicative order must divide \(3^{2}-1=8\). So it must have even projective order. By Lemma 4.2, it has distance at most \(1\) to the image of a pivot matrix \(A^{\prime}\). On the other hand, if \(A\) is an LLP matrix, then \(A^{\prime}=A^{3}\) is a pivot matrix. Either way, the image of a pivot matrix \(A^{\prime}\) has distance at most \(1\) to the image of \(A\). By Figure 4, \(B\) is a decomposable matrix. Since \(q\) is odd, by Corollary 4.3, the image of \(B\) will have distance at most \(2\) to the image of some pivot matrix \(B^{\prime}\). By Proposition 2.8, images of \(A^{\prime},B^{\prime}\) have distance at most \(8\) between them. So in total, images of \(A,B\) have distance at most \(1+2+8=11\) between them. 
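The distance-two step used repeatedly in the two sections above (a pivot matrix and a commuting matrix of coprime multiplicative order are both powers of their product, as in the proof of Corollary 5.2) can be checked directly on small examples. The sketch below works with matrices over \(\mathbb{F}_{7}\) and hand-picked diagonal entries; the specific field and matrices are illustrative assumptions, not data from the paper.

```python
import numpy as np

p = 7  # work with matrices over F_7, entries reduced mod p

def matmul(A, B):
    return (A @ B) % p

def matpow(A, k):
    R = np.eye(3, dtype=np.int64)
    for _ in range(k):
        R = matmul(R, A)
    return R

def order(A, bound=200):
    """Multiplicative order of A in GL_3(F_p), by brute force."""
    R = A.copy()
    for k in range(1, bound + 1):
        if np.array_equal(R, np.eye(3, dtype=np.int64)):
            return k
        R = matmul(R, A)
    raise ValueError("order not found within bound")

P = np.diag([1, 2, 2]).astype(np.int64)   # pivot matrix of multiplicative order 3 (2^3 = 1 mod 7)
D = np.diag([6, 1, 6]).astype(np.int64)   # commuting matrix of order 2 (6 = -1 mod 7)

assert np.array_equal(matmul(P, D), matmul(D, P))   # P and D commute
assert order(P) == 3 and order(D) == 2              # coprime orders

PD = matmul(P, D)                          # product has order lcm(3, 2) = 6
assert np.array_equal(matpow(PD, 4), P)    # (PD)^4 = P^4 D^4 = P
assert np.array_equal(matpow(PD, 3), D)    # (PD)^3 = P^3 D^3 = D
print("P and D are both powers of PD; ord(PD) =", order(PD))
```

In the projectively reduced power graph this gives the two-step path from \(P\) to \(PD\) to \(D\), which is the mechanism behind the distance bounds above.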
Connection restrictions to pivot matrices First, let us discuss the cases when two matrices of the same Jordan type are connected by an edge in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\). **Lemma 7.1**.: _In the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), if the image of a matrix \(A\) and the image of a matrix \(B\) are connected by an edge, and \(A,B\) have the same Jordan type, then they have identical invariant subspaces, eigenspaces and generalized eigenspaces._ Proof.: If the image of a matrix \(A\) and the image of a matrix \(B\) are connected by an edge, then WLOG say \(xB=A^{k}\) for some positive integer \(k\) and some \(x\in\mathbb{F}_{q}^{*}\). Then all invariant subspaces of \(A\) are invariant subspaces of \(A^{k}=xB\), and thus also invariant subspaces of \(B\). Similarly, all eigenspaces of \(A\) are eigenspaces of \(B\), and all generalized eigenspaces of \(A\) are generalized eigenspaces of \(B\). Suppose \(A,B\) have the same Jordan type. Then they have the same finite number of invariant subspaces, eigenspaces and generalized eigenspaces. Hence they have identical invariant subspaces, eigenspaces and generalized eigenspaces. Next, let us discuss the cases when a matrix and a pivot matrix are connected by an edge in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\). **Lemma 7.2**.: _In the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), if the image of an LLL matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then each eigenspace of \(B\) is spanned by some eigenspaces of \(A\)._ Proof.: If the image of an LLL matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then \(xB=A^{k}\) for some positive integer \(k\) and some \(x\in\mathbb{F}_{q}^{*}\). Since \(A\) is an LLL matrix, \(A=X\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{3}\end{bmatrix}X^{-1}\) for some invertible \(X\) and some \(a_{1},a_{2},a_{3}\in\mathbb{F}_{q}^{*}\). Then \(B=X\begin{bmatrix}x^{-1}a_{1}^{k}&&\\ &x^{-1}a_{2}^{k}&\\ &&x^{-1}a_{3}^{k}\end{bmatrix}X^{-1}\). Since \(A\) is an LLL matrix, the eigenspaces of \(A\) are the three lines spanned by the three columns of \(X\) respectively. Since \(B\) is a pivot matrix, its eigenspaces are a line spanned by a column of \(X\), and a plane spanned by the other two column of \(X\). So we are done. **Lemma 7.3**.: _In the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), if the image of an LP matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then the invariant subspaces of \(A\) are exactly the eigenspaces of \(B\)._ Proof.: If the image of an LP matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then a power of \(A\) must be \(xB\) for some \(x\in\mathbb{F}_{q}^{*}\). Since \(A\) is an LP matrix, \(A=X\begin{bmatrix}a&\\ &C\end{bmatrix}X^{-1}\) for some invertible \(X\), some \(a\in\mathbb{F}_{q}^{*}\) and a companion matrix \(C\) to some irreducible polynomial. Then \(B=X\begin{bmatrix}x^{-1}a^{k}&&\\ &x^{-1}C^{k}\end{bmatrix}X^{-1}\). Since \(B\) is a pivot matrix, \(x^{-1}C^{k}\) must be diagonalizable over \(\mathbb{F}_{q}\) as well. But by Lemma 2.13, \(C^{k}\) is either not diagonalizable over \(\mathbb{F}_{q}\), or a multiple of identity. So we must have \(C^{k}=\begin{bmatrix}y&\\ &y\end{bmatrix}\) for some \(y\in\mathbb{F}_{q}^{*}\), and \(B=X\begin{bmatrix}x^{-1}a^{k}&&\\ &x^{-1}y&\\ &&x^{-1}y\end{bmatrix}X^{-1}\) with \(x^{-1}a^{k}\neq x^{-1}y\). 
Now the invariant subspaces of \(A\) are the line spanned by the first column of \(X\), and the plane spanned by the last two columns of \(X\). But these are exactly the eigenspaces of \(B\). **Lemma 7.4**.: _In the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), if the image of an LLP matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then the generalized subspaces of \(A\) are exactly the eigenspaces of \(B\)._ Proof.: If the image of an LLP matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then a power of \(A\) must be \(xB\) for some \(x\in\mathbb{F}_{q}^{*}\). Since \(A\) is an LLP matrix, \(A=X\begin{bmatrix}a_{1}&&\\ &a_{2}&1\\ &&a_{2}\end{bmatrix}X^{-1}\) for some invertible \(X\) and some \(a_{1},a_{2}\in\mathbb{F}_{q}^{*}\). Then \(B=X\begin{bmatrix}x^{-1}a_{1}^{k}&&\\ &x^{-1}a_{2}^{k}&x^{-1}ka_{2}^{k-1}\\ &&x^{-1}a_{2}^{k}\end{bmatrix}X^{-1}\). Since \(B\) is a pivot matrix, \(\begin{bmatrix}x^{-1}a_{2}^{k}&x^{-1}ka_{2}^{k-1}\\ &x^{-1}a_{2}^{k}\end{bmatrix}\) must be diagonalizable over \(\mathbb{F}_{q}\) as well, so \(x^{-1}ka_{2}^{k-1}=0\) and \(B=X\begin{bmatrix}x^{-1}a_{1}^{k}&&\\ &x^{-1}a_{2}^{k}&\\ &&x^{-1}a_{2}^{k}\end{bmatrix}X^{-1}\) with \(x^{-1}a_{1}^{k}\neq x^{-1}a_{2}^{k}\). Now the generalized subspaces of \(A\) are the line spanned by the first column of \(X\), and the plane spanned by the last two columns of \(X\). But these are exactly the eigenspaces of \(B\). **Lemma 7.5**.: _In the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), if the image of a pivot matrix \(A\) and the image of a pivot matrix \(B\) have distance at most \(3\), then either they have the same eigenspaces, or the \(2\)-dimensional eigenspace of \(A\) contains the \(1\)-dimensional eigenspace of \(B\) and vice versa._ Proof.: Suppose in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has an edge to the image of \(A^{\prime}\), which has an edge to the image of \(B^{\prime}\), which has an edge to the image of \(B^{\prime}\). By going through the diagrams from Figure 1 to Figure 4, \(A^{\prime},B^{\prime}\) must have the same Jordan type and must be LLL matrices or LP matrices or LLP matrices. If \(A^{\prime},B^{\prime}\) are both LP or both LLP matrices, then they have the same invariant subspaces and generalized eigenspaces by Lemma 7.1. Then by Lemma 7.3 and Lemma 7.4, \(A,B\) must have the same eigenspaces. If \(A^{\prime},B^{\prime}\) are both LLL matrices, then they have the same eigenspaces by Lemma 7.1. Say the three \(1\)-dimensional eigenspaces are \(W_{1},W_{2},W_{3}\). By Lemma 7.2, eigenspaces of \(A\) and of \(B\) are spanned by these. By exhausting all possibilities, either \(A,B\) have the same eigenspaces, or the \(2\)-dimensional eigenspace of \(A\) contains the \(1\)-dimensional eigenspace of \(B\) and vice versa. Here are some special restrictions when \(q=9\) or \(q\) is a Fermat prime. **Lemma 7.6**.: _Suppose \(q=9\) or \(q\) is a Fermat prime. For an LLL matrix \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})\), let \(k\) be the projective order of \(A\). Then \(A^{\frac{k}{2}}\) is a pivot matrix. 
Furthermore, in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), suppose the image \(A\) is connected to the image of a pivot matrix \(B\) by an edge, then \(A^{\frac{k}{2}}\) and \(B\) must have identical eigenspaces._ Proof.: Since \(A\) is an LLL matrix, \(A=X\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{3}\end{bmatrix}X^{-1}\) for some invertible \(X\) and some \(a_{1},a_{2},a_{3}\in\mathbb{F}_{q}^{*}\). Since \(k\) is the projective order of \(A\), and \(q-1\) is a power of \(2\), \(k\) must be a power of \(2\). So \(a_{1}^{k}=a_{2}^{k}=a_{3}^{k}\), but \(a_{1}^{\frac{k}{2}},a_{2}^{\frac{k}{2}},a_{3}^{\frac{k}{2}}\) cannot all be identical, and they are all square roots of the same element in \(\mathbb{F}_{q}^{*}\). Since each element in \(\mathbb{F}_{q}^{*}\) has at most two square roots, \(A^{\frac{k}{2}}\) cannot have three distinct eigenvalues. So it is indeed a pivot matrix. Now, if the image of an LLL matrix \(A\) and the image of a pivot matrix \(B\) are connected by an edge, then \(xB=A^{t}\) for some positive integer \(t\) and some \(x\in\mathbb{F}_{q}^{*}\). Since \(B\) is not a scalar multiple of identity, \(k\) cannot divide \(t\). So let \(t^{\prime}\) be the power of \(2\) such that \(\frac{k}{2}\) divides \(tt^{\prime}\) but \(k\) does not divide \(tt^{\prime}\), and set \(B^{\prime}=(xB)^{t^{\prime}}\). Then \(B^{\prime}\) is a power of \(A\) that is not a scalar multiple of identity, but it is a power of \(A^{\frac{k}{2}}\). Hence it is a pivot matrix. So the image of \(B\) and the image of \(B^{\prime}\) are connected by an edge, and the image of \(B^{\prime}\) and the image of \(A^{\frac{k}{2}}\) are connected by an edge. By Lemma 7.1, they all of the same eigenspaces. **Lemma 7.7**.: _Suppose \(q=9\) or \(q\) is a Fermat prime. Suppose in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the image of an LLL matrix \(A\) and the image of an LLL matrix \(B\) are connected by an edge. Let _be the projective order of \(A,B\) respectively. Then the pivot matrices \(A^{\frac{k_{A}}{2}}\) and \(B^{\frac{k_{B}}{2}}\) must have identical eigenspaces._ Proof.: If the image of a matrix \(A\) and the image of a matrix \(B\) are connected by an edge, then WLOG say \(xB=A^{k}\) for some positive integer \(k\) and some \(x\in\mathbb{F}_{q}^{*}\). Then \(B^{\frac{k_{B}}{2}}\) is a scalar multiple of a power of \(A\), and it is a pivot matrices. So by Lemma 7.6, \(A^{\frac{k_{A}}{2}}\) and \(B^{\frac{k_{B}}{2}}\) must have identical eigenspaces. **Lemma 7.8**.: _Suppose \(q\neq 2\) and \(q-1\) is a prime power. In the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), suppose the image of a pivot matrix \(A\) is connected to the image of a pivot matrix \(B\) by an path that does not include any Jordan pivot matrix, then \(A\) and \(B\) must have identical eigenspaces._ Proof.: WLOG we can assume that there is no other pivot matrices on this path other than \(A\) and \(B\). By Figure 2 and Figure 3, then all matrices in this path between \(A\) and \(B\) must have the same Jordan type. They must all be LP matrices, or all be LLP matrices, or all be LLL matrices. Suppose they are all LP matrices. Then by Lemma 7.1, since all of them are connected by edges, they all have the same invariant subspaces. By Lemma 7.3, the eigenspaces of \(A\) and \(B\) must be identical to these invariant subspaces. So \(A\) and \(B\) must have identical eigenspaces. Suppose they are all LLP matrices. 
Then by Lemma 7.1, since all of them are connected by edges, they all have the same generalized eigenspaces. By Lemma 7.4, the eigenspaces of \(A\) and \(B\) must be identical to these generalized eigenspaces. So \(A\) and \(B\) must have identical eigenspaces. Finally, suppose they are all LLL matrices. Note that in this case, \(q\) must be odd, so \(q=9\) or \(q\) is a Fermat prime by Lemma 2.10. Suppose the path goes from the image of \(A\) to the image of \(A_{1}\), to the image of \(A_{2}\),..., to the image of \(A_{t}\), and finally to the image of \(B\). Let \(k_{i}\) be the projective order of \(A_{i}\). By Lemma 7.7, all \(A_{i}^{\frac{k_{i}}{2}}\) have the same eigenspaces. And by Lemma 7.6, \(A\) and \(B\) must also have the same eigenspaces as these matrices. ## 8 Lower Bounds when \(q-1\) is not a prime power Let \(q\neq 2\) be a power of a prime \(p\). Suppose \(q-1\) is not a prime power. When \(q\) is odd, let \(p_{0}\) be any odd prime factor of \(q-1\), and pick any \(x\in\mathbb{F}_{q}^{*}\) with multiplicative order \(p_{0}\). When \(q\) is even, pick any multiplicative generator \(x\in\mathbb{F}_{q}^{*}\), and set \(p_{0}=q-1\). Note that either way, we must have \(x\neq 0,1\), and both \(x\) and \(x^{2}\) have multiplicative order \(p_{0}\). Set \(A=\begin{bmatrix}1&&\\ &x&\\ &&x^{2}\end{bmatrix}\). Pick any \(x^{\prime}\neq 0,1,-2\) in \(\mathbb{F}_{q}\). By our assumption, \(q>3\), so this is possible. Set \(X=\begin{bmatrix}x^{\prime}&1&1\\ 1&x^{\prime}&1\\ 1&1&x^{\prime}\end{bmatrix}\), which has determinant \((x^{\prime}+2)(x^{\prime}-1)^{2}\neq 0\), so it is invertible. Finally, set \(B=XAX^{-1}\). Note that \(A,B\) must be LLL matrices with projective and multiplicative order \(p_{0}\). I claim that in the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), images of \(A,B\) will have distance at least \(8\) when \(q\) is odd, and at least \(10\) when \(q\) is even. **Lemma 8.1**.: _Let \(q\neq 2\) be a power of a prime \(p\). Suppose \(q-1\) is not a prime power. If \(q\) is odd, in the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the distance between the image of \(A\) or \(B\) to the image of any pivot matrix is at least \(2\). If \(q\) is even, this distance is at least \(3\)._ Proof.: It is enough to prove it for \(A\), as \(A,B\) are similar matrices. Note that \(A\) has eigenvalue \(1,x,x^{2}\). So for any \(y\in\mathbb{F}_{q}^{*}\), the matrix \(yA^{k}\) has eigenvalues \(y,yx^{k},yx^{2k}\). But since \(x\) has odd multiplicitive order \(p_{0}\), \(yA^{k}\) has all three eigenvalues distinct when \(k\) is not a multiple of \(p_{0}\), and all three eigenvalues identical when \(k\) is a multiple of \(p_{0}\). So it is either an LLL matrix or a scalar multiple of identity. Also note that roots of an LLL matrix must still be an LLL matrix. Therefore, powers and roots of scalar multiples of \(A\) cannot be a pivot matrix. So the distance between the image of \(A\) to the image of any pivot matrix is at least \(2\). Suppose \(q\) is even. Suppose for contradiction that, in the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) is connected to the image of \(A^{\prime}\) by an edge, and the image of \(A^{\prime}\) is connected to the image of a pivot matrix \(B\) by an edge. Then \(B\) must be a scalar multiple of a power of \(A^{\prime}\). Since \(B\) cannot be a scalar multiple of a power of \(A\), therefore \(A^{\prime}\) must be the root of a scalar multiple of \(A\), and must be an LLL matrix. 
Suppose \(A^{\prime}\) has eigenvalues \(x^{\prime},y^{\prime},z^{\prime}\), and \((A^{\prime})^{k}=yA\) for some \(y\in\mathbb{F}_{q}^{*}\) and some positive integer \(k\). Then \((x^{\prime})^{k}=y,(y^{\prime})^{k}=yx,(z^{\prime})^{k}=yx^{2}\). So \(x^{\prime}(y^{\prime})^{-1},y^{\prime}(z^{\prime})^{-1},z^{\prime}(x^{\prime}) ^{-1}\) are all roots of \(x\) or \(x^{2}\), both are multiplicative generators of \(\mathbb{F}_{q}^{*}\). Therefore \(x^{\prime}(y^{\prime})^{-1},y^{\prime}(z^{\prime})^{-1},z^{\prime}(x^{\prime}) ^{-1}\) are all multiplicative generators of \(\mathbb{F}_{q}^{*}\), and they all have multiplicative order \(q-1\). So powers of \(A^{\prime}\) will either have all three eigenvalues distinct, or all three eigenvalues identical. So if \(B\) is a scalar multiple of a power of \(A^{\prime}\), then it must have identical eigenvalues and become a multiple of identity. But \(B\) is supposed to be a pivot matrix, contradiction. **Proposition 8.2**.: _Let \(q\neq 2\) be a power of a prime \(p\). Suppose \(q-1\) is not a prime power. If \(q\) is odd, in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), the distance between the image of \(A\) and the image of \(B\) is at least \(8\). If \(q\) is even, this distance is at least \(10\)._ Proof.: Suppose for contradiction that there is a path from the image of \(A\) to the image of \(B\) in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), with distance at most \(7\) for \(q\) odd or distance at most \(9\) for \(q\) even. First note that \(A,B\) do not have the same eigenspaces. So by Lemma 7.1, any path between them in the reduce power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\) cannot be entirely made of LLL matrices. By Figure 1, our path must contain a pivot matrix. Let \(A^{\prime}\) be the first pivot matrix on this path starting from \(A\), and let \(B^{\prime}\) be the last pivot matrix on this path. By Lemma 8.1, the distance between \(A\) and \(A^{\prime}\) is at least \(2\) for \(q\) odd, and at least \(3\) for \(q\) even. The same is true for \(B\) and \(B^{\prime}\). Hence either way, the distance between \(A^{\prime}\) and \(B^{\prime}\) is at most \(3\). Now by construction, \(A\) must be connected to \(A^{\prime}\) via a series of LLL matrices. By Lemma 7.2, eigenspaces of \(A^{\prime}\) are spanned by eigenspaces of \(A\). Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the three columns of the identity matrix, then the eigenspaces of \(A^{\prime}\) must be \(\operatorname{span}(\mathbf{e}_{1})\) and \(\operatorname{span}(\mathbf{e}_{2},\mathbf{e}_{3})\), or \(\operatorname{span}(\mathbf{e}_{2})\) and \(\operatorname{span}(\mathbf{e}_{3},\mathbf{e}_{1})\), or \(\operatorname{span}(\mathbf{e}_{3})\) and \(\operatorname{span}(\mathbf{e}_{1},\mathbf{e}_{2})\). These are the only three possibilities. Similarly, eigenspaces of \(B^{\prime}\) are spanned by eigenspaces of \(B\). Let \(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}\) be the three columns of the matrix \(X\), then the one dimensional eigenspace of \(B^{\prime}\) must be \(\operatorname{span}(\mathbf{x}_{1})\) or \(\operatorname{span}(\mathbf{x}_{2})\) or \(\operatorname{span}(\mathbf{x}_{3})\). Neither of them can coinside with the one dimensional eigenspace of \(A^{\prime}\), and neither of them can be contained in the two dimensional eigenspace of \(A^{\prime}\). But this is impossible by Lemma 7.5. So we are done. 
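As an aside, the construction above is easy to instantiate and check by machine for a small field. The sketch below is an illustration only, not part of the argument; it assumes \(q=7\), the smallest odd case with \(q-1\) not a prime power, and takes \(p_{0}=3\), \(x=2\), \(x^{\prime}=3\).

```python
from sympy import Matrix, eye

q, x, xp = 7, 2, 3
mod = lambda M: M.applyfunc(lambda e: e % q)

A = Matrix.diag(1, x, pow(x, 2, q))                 # A = diag(1, x, x^2), an LLL matrix
X = Matrix([[xp, 1, 1], [1, xp, 1], [1, 1, xp]])
assert X.det() % q == ((xp + 2) * (xp - 1) ** 2) % q != 0   # det X = (x'+2)(x'-1)^2
B = mod(X * A * X.inv_mod(q))                       # B = X A X^{-1} over F_7

assert len({A[i, i] for i in range(3)}) == 3        # three distinct eigenvalues: LLL
assert mod(A ** 3) == eye(3)                        # projective and multiplicative order p0 = 3
assert mod(B * X) == mod(X * A)                     # columns of X are eigenvectors of B
# No column of X lies in a coordinate plane, which is the geometric obstruction
# used in the proof of Proposition 8.2 above.
assert all(X[i, j] % q != 0 for i in range(3) for j in range(3))
```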
## 9 Connection restrictions to Jordan pivot matrices **Lemma 9.1**.: _If \(A\in\operatorname{GL}_{3}(\mathbb{F}_{q})\) is a Jordan pivot matrix with eigenvalue \(a\), then the column space of \(A-aI\) is contained in the kernel of \(A-aI\)._ Proof.: This is because \(\left(A-aI\right)^{2}\) is zero by the definition of a Jordan pivot matrix. **Lemma 9.2**.: _In the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), if the image of an LLP matrix \(A\) and the image of a Jordan pivot matrix \(B\) are connected by an edge. Suppose \(A\) has simple eigenvalue \(a_{1}\in\mathbb{F}_{q}^{*}\) and repeated eigenvalue \(a_{2}\in\mathbb{F}_{q}^{*}\), and \(B\) has an eigenvalue \(b\in\mathbb{F}_{q}^{*}\). Then the column space of \(B-bI\) is the eigenspace of \(A\) for \(a_{2}\), and the kernel of \(B-bI\) is spanned by the two eigenspaces of \(A\)._ Proof.: If the image of an LLP matrix \(A\) and the image of a Jordan pivot matrix \(B\) are connected by an edge, then a power of \(A\) must be \(xB\) for some \(x\in\mathbb{F}_{q}^{*}\). Since \(A\) is an LLP matrix, \(A=X\begin{bmatrix}a_{1}&&\\ &a_{2}&1\\ &&a_{2}\end{bmatrix}X^{-1}\) for some invertible \(X\). Then \(B=X\begin{bmatrix}x^{-1}a_{1}^{k}&\\ &x^{-1}a_{2}^{k}&x^{-1}ka_{2}^{k-1}\end{bmatrix}X^{-1}\) for some positive integer \(k\). Since \(B\) is a Jordan pivot matrix with eigenvalue \(b\), therefore all three diagonal entries here must be \(b\). Now the column space of \(B-bI\) and the eigenspace of \(A\) for \(a_{2}\) are both the span of the second column of \(X\). The two eigenspaces of \(A\) are the span of the first column of \(X\) and the span of the second column of \(X\), while the kernel of \(B-bI\) is spanned by the first two columns of \(X\). So we are done. **Lemma 9.3**.: _In the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), if the image of an NPJ matrix \(A\) and the image of a Jordan pivot matrix \(B\) are connected by an edge. Suppose \(A\) has an eigenvalue \(a\in\mathbb{F}_{q}^{*}\) and \(B\) has an eigenvalue \(b\in\mathbb{F}_{q}^{*}\). Then the column space of \(B-bI\) is the eigenspace of \(A\) for \(a\), and the kernel of \(B-bI\) is the same as the kernel of of \((A-aI)^{2}\)._ Proof.: If the image of an NPJ matrix \(A\) and the image of a Jordan pivot matrix \(B\) are connected by an edge, then \(xB=A^{k}\) for some \(x\in\mathbb{F}_{q}^{*}\) and some positive integer \(k\), and \(k\) must be an even number. Since \(A\operatorname{\,is\,an\,NPJ\,\,matrix}\), \(A=X\begin{bmatrix}a&1\\ &a&1\\ &&a\end{bmatrix}X^{-1}\) for some invertible \(X\). Then \(B=X\begin{bmatrix}x^{-1}a^{k}&\frac{k}{2}(k-1)a^{k-2}\\ &x^{-1}a^{k}&x^{-1}a^{k}\end{bmatrix}X^{-1}\). Since \(B\) is a Jordan pivot matrix with an eigenvalue \(b\), therefore all three diagonal entries here must be \(b\). Now the column space of \(B-bI\) and the eigenspace of \(A\) for \(a\) are both the span of the first column of \(X\). The kernel of \(B-bI\) and the kernel of \((A-aI)^{2}\) are both the span of the first two columns of \(X\). So we are done. **Lemma 9.4**.: _In the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), suppose the image of a Jordan pivot matrix \(A\) is connected to the image of a Jordan pivot matrix \(B\) by an path that does not include any pivot matrix. Suppose \(A\) has an eigenvalue \(a\in\mathbb{F}_{q}^{*}\) and \(B\) has an eigenvalue \(b\in\mathbb{F}_{q}^{*}\). 
Then the column space of \(A-aI\) is the same as the column space of \(B-bI\), and the kernel of \(A-aI\) is the same as the kernel of of \(B-bI\)._ Proof.: WLOG we can assume that there is no other Jordan pivot matrices on this path other than \(A\) and \(B\). By checking all diagrams from Figure 1 to Figure 5, all matrices in this path between \(A\) and \(B\) must have the same Jordan type. They must all be LLP matrices, or all be NJP matrices. Suppose they are all LLP matrices. Then by Lemma 7.1, since all of them are connected by edges, they all have the same eigenspaces. By Lemma 9.2, the column space and the kernel of \(A-aI\) and the column space and the kernel of \(B-bI\) are determined by these same spaces. So we are done. Suppose they are all NJP matrices. Then by Lemma 7.1, since all of them are connected by edges, they all have the same invariant subspaces. By Lemma 9.3, the column space and the kernel of \(A-aI\) and the column space and the kernel of \(B-bI\) are determined by these same spaces. So we are done. Now we investigate paths between a pivot matrix and a Jordan pivot matrix. We first make an important definition. **Definition 9.5**.: _Let \(A\) be a pivot matrix and \(B\) be a Jordan pivot matrix with eigenvalue \(b\). We say \(A,B\) are compatible if the two dimensional eigenspace of \(A\) contains the column space of \(B-bI\), and the kernel of \(B-bI\) contains the one dimensional eigenspace of \(A\)._ **Lemma 9.6**.: _Let \(q\neq 2\) be a prime power such that \(q-1\) is also a prime power. In the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), suppose the image of a pivot matrix \(A\) is connected to the image of a Jordan pivot matrix \(B\) by an path. Suppose either \(A\) is the only pivot matrix on this path, or \(B\) is the only Jordan pivot matrix on this path. Then \(A,B\) are compatible to each other._ Proof.: Using Lemma 7.8 and Lemma 9.4, we can WLOG assume that \(A\) is the only pivot matrix on this path, and \(B\) is the only Jordan pivot matrix on this path. Then all other matrices on this path must be LLP matrices, and by Lemma 7.1, they must all have the same invariant subspaces. Suppose \(C\) is an LLP matrix on this path with simple eigenvalue \(c_{1}\) and repeated eigenvalue \(c_{2}\). Let \(L_{1}\) be the eigenspace of \(C\) for \(c_{1}\), \(L_{2}\) be the eigenspace of \(C\) for \(c_{2}\), and \(P\) be the generalized eigenspace of \(C\) for \(c_{2}\). By Lemma 7.4, the eigenspaces for \(A\) must be \(L_{1}\) and \(P\), while the column space of \(B-bI\) is \(L_{2}\), and the kernel of \(B-bI\) is spanned by \(L_{1}\) and \(L_{2}\). It is easy to see now that \(A,B\) are compatible to each other. **Lemma 9.7**.: _Let \(q\neq 2\) be a prime power such that \(q-1\) is also a prime power. Let \(A,B\in\operatorname{GL}_{3}(\mathbb{F}_{q})\) be pivot matrices, such that the one dimensional eigenspace of \(B\) is contained in the two dimensional eigenspace of \(A\). Then in the reduced power graph of \(\operatorname{PGL}_{3}(\mathbb{F}_{q})\), their images have distance at least \(8\) to each other._ Proof.: Suppose for contradiction that there is a path between them with distance at most \(7\). Let the eigenspaces of \(A\) be \(L_{A},P_{A}\) and the eigenspaces of \(B\) be \(L_{B}\) and \(P_{B}\), where \(L_{A},L_{B}\) are one dimensional and \(P_{A}\) and \(P_{B}\) are two dimensional. Since \(P_{A}\) cannot contain \(L_{A}\), yet it contains \(L_{B}\), therefore \(A\) and \(B\) do not have the same eigenspaces. 
So by Lemma 7.8, there must be a Jordan pivot matrix on this path. Let \(A^{\prime}\) be the first Jordan pivot matrix on this path starting from \(A\), and let \(B^{\prime}\) be the last Jordan pivot matrix on this path. By Figure 2, Figure 3 and Figure 4, the distance between a pivot matrix and a Jordan pivot matrix is at least \(2\). Since our path has total distance \(7\), the distance between \(A^{\prime}\) and \(B^{\prime}\) is at most \(3\). In particular, the path from \(A^{\prime}\) to \(B^{\prime}\) cannot contain a pivot matrix. If \(A^{\prime}\) has eigenvalue \(a\) and \(B^{\prime}\) has eigenvalue \(b\), then by Lemma 9.4, \(A^{\prime}-aI\) and \(B^{\prime}-bI\) will have the same column space and same kernel. Let the common column space be \(L\) and the common kernel be \(P\). Since \(A\) has a path to \(A^{\prime}\) on which \(A^{\prime}\) is the only Jordan pivot matrix, \(A\) and \(A^{\prime}\) are compatible by Lemma 9.6. Similarly, \(B\) and \(B^{\prime}\) are compatible. So \(L\subseteq P_{A}\cap P_{B}\), and therefore \(L\neq L_{A},L_{B}\). Now, since \(L\) and \(L_{B}\) are two different one dimensional subspaces inside \(P_{A}\), \(P_{A}\) must be the span of them. But similarly, \(L_{B}\) must be inside \(P\), and by Lemma 9.1, \(L\) must also be inside \(P\). Hence \(P\) is also the span of \(L\) and \(L_{B}\), and this means \(P=P_{A}\). Then \(P\) cannot contains \(L_{A}\), contradiction. So we are done. **Lemma 9.8**.: _Let \(q\neq 2\) be a prime power such that \(q-1\) is also a prime power. Let \(A\in\mathrm{GL}_{3}(\mathbb{F}_{q})\) be a pivot matrix and let \(B\in\mathrm{GL}_{3}(\mathbb{F}_{q})\) be a Jordan pivot matrix with eigenvalue \(b\), such that the two dimensional eigenspace of \(A\) is the same as the kernel of \(B-bI\). Then in the reduced power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), their images have distance at least \(10\) to each other._ Proof.: Suppose there is a path from \(A\) to \(B\). Let \(B^{\prime}\) be the first pivot matrix on this path starting from \(B\). By Lemma 9.6, \(B\) and \(B^{\prime}\) are compatible. So the one dimensional subspace of \(B^{\prime}\) is contained in the kernel of \(B-bI\), which is the two dimensional eigenspace of \(A\). By Lemma 9.7, \(A\) and \(B^{\prime}\) have distance at least \(8\), while the path from \(B^{\prime}\) to \(B\) has distance at least \(2\) by Figure 2, Figure 3 and Figure 4. Therefore this path from \(A\) to \(B\) must have a distance at least \(10\). ## 10 Lower bounds when \(q-1\) is a prime power Let \(q\neq 2\) be a power of a prime \(p\), such that \(q-1\) is also the power of a prime. If \(q\neq 3\) and \(q-1\) is a power of \(2\), \(q+1\) cannot be a power of \(2\). Pick any odd prime factor \(p_{0}\) of \(q+1\). If \(q\neq 2\) is a power of \(2\) and \(q-1\) is a prime, then \(q+1\) must be coprime to \(q-1\). Pick any prime factor \(p_{0}\) of \(q+1\). Finally, if \(q=3\), then set \(p_{0}=q^{2}-1=8\). Either way, \(p_{0}\) is a factor of \(q^{2}-1\) but not a factor of \(q-1\). Let \(C\) be a companion matrix to any irreducible polynomial over \(\mathbb{F}_{q}\) of degree \(2\), then \(\mathbb{F}_{q}[C]\) is a field with \(q^{2}\) elements. So we can find \(C^{\prime}\in\mathbb{F}_{q}[C]\) with multiplicative order \(p_{0}\). Since \(p_{0}\) cannot divide \(q-1\), \(C^{\prime}\) must have irreducible characteristic polynomial. Set \(A=\begin{bmatrix}C^{\prime}&\\ &1\end{bmatrix}\), then \(A\) is an LP matrix. 
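For illustration, here is a tiny instance of this construction, assuming \(q=5\) (so \(q-1=4\) is a prime power and \(p_{0}=3\) divides \(q+1=6\)). The quadratic \(x^{2}+x+1\) is irreducible over \(\mathbb{F}_{5}\) and its roots have multiplicative order \(3\), so its companion matrix already has order \(p_{0}\) and we may take \(C^{\prime}=C\); none of this is part of the argument.

```python
from sympy import Matrix, eye

q = 5
mod = lambda M: M.applyfunc(lambda e: e % q)

assert all((t * t + t + 1) % q != 0 for t in range(q))   # x^2 + x + 1 has no root in F_5
C = Matrix([[0, -1], [1, -1]])                           # companion matrix of x^2 + x + 1
assert mod(C ** 3) == eye(2)                             # multiplicative order p0 = 3

A = Matrix([[C[0, 0], C[0, 1], 0],
            [C[1, 0], C[1, 1], 0],
            [0,       0,       1]])                      # A = diag(C', 1), an LP matrix
assert mod(A ** 3) == eye(3)
# The only nontrivial powers of A are A and A^2, both LP matrices, which is the
# observation behind Lemma 10.1 below.
```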
**Lemma 10.1**.: _Let \(q\neq 2,3\) be a power of a prime \(p\). Suppose \(q-1\) is a prime power. Then in the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\), the image of \(A\) has distance at least \(2\) to the image of any pivot matrix._ Proof.: Since \(q\neq 2,3\), \(p_{0}\) is a prime number. So by Corollary 2.14, powers of \(A\) are either still LP matrices or scalar multiples of identity. We can also see that roots of scalar multiples of \(A\) cannot be a pivot matrix. So \(A\) has distance at least \(2\) to the image of any pivot matrix. **Proposition 10.2**.: _Let \(q\neq 3\) be a Fermat prime or \(q=9\). Then the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) has diameter at least \(12\)._ Proof.: Set \(B=\begin{bmatrix}1&&1\\ &1&\\ &&1\end{bmatrix}\), so it is a Jordan pivot matrix. Suppose there is a path from \(A\) to \(B\). By Figure 2, this path must contain a pivot matrix. Let \(A^{\prime}\) be the first pivot matrix on this path starting from \(A\). Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the three columns of the identity matrix. By Lemma 7.3, the eigenspace of \(A^{\prime}\) must be the span of \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), and the span of \(\mathbf{e}_{3}\). In particular, the two dimensional eigenspace of \(A^{\prime}\) is the same as the kernel of \(B-I\). By Lemma 9.8, the images of \(A^{\prime}\) and \(B\) has distance at least \(10\). By Lemma 10.1, the images of \(A\) and \(A^{\prime}\) has distance at least \(2\). So this path from \(A\) to \(B\) have distance at least \(12\). **Proposition 10.3**.: _Let \(q\neq 2\) be a power of \(2\), and \(q-1\) is prime. Then the reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{q})\) has diameter at least \(13\)._ Proof.: Set \(B=\begin{bmatrix}1&1&\\ &1&1\\ &&1\end{bmatrix}\), so it is an NPJ matrix. Suppose there is a path from \(A\) to \(B\). By Figure 3, this path must contain a pivot matrix and a Jordan pivot matrix. Let \(A^{\prime}\) be the first pivot matrix on this path starting from \(A\). Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the three columns of the identity matrix. By Lemma 7.3, the eigenspace of \(A^{\prime}\) must be the span of \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), and the span of \(\mathbf{e}_{3}\). Let \(B^{\prime}\) be the last Jordan pivot matrix on this path starting from \(A\), and suppose it has an eigenvalue \(b\in\mathbb{F}_{q}^{*}\). Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the three columns of the identity matrix. By Lemma 9.3, the kernel of \(B^{\prime}-bI\) must be the span of \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), and it is identical to the two dimensional eigenspace of \(A^{\prime}\). By Lemma 9.8, the images of \(A^{\prime}\) and \(B^{\prime}\) has distance at least \(10\). By Lemma 10.1, the images of \(A\) and \(A^{\prime}\) has distance at least \(2\). And since \(B\) and \(B^{\prime}\) are not in the same Jordan type, their images have distance at least \(1\). So this path from \(A\) to \(B\) have distance at least \(13\). **Proposition 10.4**.: _The reduce power graph of \(\mathrm{PGL}_{3}(\mathbb{F}_{3})\) has diameter at least \(11\)._ Proof.: Set \(B=\begin{bmatrix}1&&1\\ &1&\\ &&1\end{bmatrix}\), so it is a Jordan pivot matrix. Suppose there is a path from \(A\) to \(B\). By Figure 4, this path must contain a pivot matrix. Let \(A^{\prime}\) be the first pivot matrix on this path starting from \(A\). 
Let \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) be the three columns of the identity matrix. By Lemma 7.3, the eigenspaces of \(A^{\prime}\) must be the span of \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), and the span of \(\mathbf{e}_{3}\). In particular, the two dimensional eigenspace of \(A^{\prime}\) is the same as the kernel of \(B-I\). By Lemma 9.8, the images of \(A^{\prime}\) and \(B\) have distance at least \(10\). Since \(A\) and \(A^{\prime}\) are not of the same Jordan type, their images have distance at least \(1\). So this path from \(A\) to \(B\) has distance at least \(11\).
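As a small illustration (not part of the proof), the matrix \(B\) used in Propositions 10.2 and 10.4 is easily verified to be a Jordan pivot matrix with \(\ker(B-I)=\operatorname{span}(\mathbf{e}_{1},\mathbf{e}_{2})\), the plane that is forced to coincide with the two dimensional eigenspace of \(A^{\prime}\) in the proofs above:

```python
from sympy import Matrix, eye, zeros

B = Matrix([[1, 0, 1], [0, 1, 0], [0, 0, 1]])
N = B - eye(3)
assert N != zeros(3, 3) and N ** 2 == zeros(3, 3)   # (B - I)^2 = 0 and B != I: Jordan pivot
assert N * Matrix([1, 0, 0]) == zeros(3, 1)         # e1 lies in ker(B - I)
assert N * Matrix([0, 1, 0]) == zeros(3, 1)         # e2 lies in ker(B - I)
assert N * Matrix([0, 0, 1]) != zeros(3, 1)         # e3 does not, so ker(B - I) = span(e1, e2)
```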
2305.04338
Formation of Thermal Vortex Rings
An evolution of a spherical region, subjected to uniform buoyancy force, is investigated. Incompressibility and axial symmetry are assumed, together with a buoyancy discontinuity at the boundary. The boundary turns into a vortex sheet and the system evolves into a ring. Contrary to the case of mechanically generated rings, buoyancy-driven rings are unstable. This is due to the generation of negative vorticity at the bottom. Furthermore, a sequence of Kelvin-Helmholtz instabilities arises along the buoyancy anomaly boundary. This sequence transfers the energy toward large scales with $\kappa^{-3}$ distribution. The vortex blob method has been used to simulate the system numerically. An optimization algorithm, used previously in two dimensions, has been extended to the axisymmetric case. It reduces computational complexity from $N^2$ to $N \log N$, where N is the number of nodes. Additionally, a new algorithm has been developed as a remedy for the exponential growth of the number of nodes required. It exploits a tendency of the vortex sheet to form many parallel stripes, by merging them together.
Paweł Jędrejko, Jun-Ichi Yano, Marta Wacławczyk
2023-05-07T17:34:16Z
http://arxiv.org/abs/2305.04338v2
# Formation of Thermal Vortex Rings ###### Abstract An evolution of a spherical region, subjected to uniform buoyancy force, is investigated. Incompressibility and axial symmetry are assumed, together with a buoyancy discontinuity at the boundary. The boundary turns into a vortex sheet and the system evolves into a ring. Contrary to the case of mechanically generated rings, buoyancy-driven rings are unstable. This is due to the generation of negative vorticity at the bottom. Furthermore, a sequence of Kelvin-Helmholtz instabilities arises along the buoyancy anomaly boundary. This sequence transfers the energy toward large scales with \(\sim\kappa^{-3}\) distribution. The vortex blob method has been used to simulate the system numerically. An optimization algorithm, used previously in two dimensions, has been extended to the axisymmetric case. It reduces computational complexity from \(N^{2}\) to \(N\log N\), where \(N\) is the number of nodes. Additionally, a new algorithm has been developed as a remedy for the exponential growth of the number of nodes required. It exploits a tendency of the vortex sheet to form many parallel stripes, by merging them together. ## 1 Introduction Thermal vortex rings play an important role in the formation of cumulus clouds. Rising thermal vortex rings correspond to the initial phase of atmospheric convection which is considered adiabatic [12]. In this work, we adopt this assumption and model them as regions of a fluid subjected to uniform buoyancy force. We neglect stratification and focus on the high Reynolds number regime. The problem, which details are described in the next section was already considered 40 years ago. Since that time, a lot has changed in computational physics and available computational resources. We approach the problem from the very same perspective of vortex dynamics in lagrangian formulation, although enriching it with modern capacities. Its advantage over standard, eulerian mesh methods is an insight into flow coherencies. Moreover, high Reynolds numbers might be troublesome for Eulerian methods. We present a few developments on an algorithmic basis and take advantage of modern hardware architecture. We will also refer to mechanically generated vortex rings. The main body of this article consists of four parts. In the next one - the second section, we present a mathematical statement of the problem and derivation of the governing equations. The third section describes the basics of the numerical method that we used and details of the surgery procedure. The fourth one is dedicated to optimization of the numerics - a fast velocity induction algorithm is described as well as fast surgery. A few test cases support their correctness. We present results and postprocessing in the fifth section. The article finishes with a conclusion and suggestions for further work. ## 2 Statement of the problem ### Geometry and governing equations We will consider an evolution of an initially spherical region, of an incompressible fluid with increased temperature. We will let it be constant in both regions, with a discontinuity at the interface. The system is assumed to have axial symmetry. 
Far enough away we expect the impact of the hot region to vanish, so we let the velocity approach zero at infinity: \[\lim_{r\to\infty}\boldsymbol{u}=\boldsymbol{0} \tag{1}\]

Figure 1: Initial condition

We start with a system of three well-known equations, representing the conservation of mass, momentum, and energy (without dissipative heating) respectively: \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0 \tag{2}\] \[\rho\frac{D\mathbf{u}}{Dt}=-\nabla p+\mu\nabla^{2}\mathbf{u}-\rho g\hat{k} \tag{3}\] \[\frac{DT}{Dt}=\frac{\lambda}{\rho c_{p}}\nabla^{2}T \tag{4}\] We also assume that fluid properties are constant, with the exception of density. We will further assume that density depends on \(T\) exclusively and can be expanded in a Taylor series around some reference value \(T_{0}\): \[\rho(T)=\rho(T_{0})+\frac{d\rho}{dT}\Big{|}_{T_{0}}(T-T_{0})+{\cal O}\Big{(}(T-T_{0})^{2}\Big{)} \tag{5}\] If we now consider a piece of fluid, from the mass conservation (de facto defining the piece) and the product rule, we know that: \[\frac{d(\rho V)}{dT}=0 \tag{6}\] \[\frac{d\rho}{dT}V=-\frac{dV}{dT}\rho \tag{7}\] and by the definition of the thermal expansion coefficient: \[\frac{\partial\rho}{\partial T}=-\beta\rho \tag{8}\] Substituting this into (5), writing \(\rho_{0}=\rho(T_{0})\), and neglecting higher order terms we obtain: \[\rho(T)\approx\rho_{0}-\rho_{0}\beta(T-T_{0}) \tag{9}\] We will assume that changes of density are relevant only in the source term of the momentum equation (3), which is called the Boussinesq approximation. That turns (2) into: \[\nabla\cdot\mathbf{u}=0 \tag{10}\] and (3) into: \[\frac{D\mathbf{u}}{Dt}=-\frac{1}{\rho_{0}}\nabla p+\nu\nabla^{2}\mathbf{u}-g\hat{k}+\beta(T-T_{0})g\hat{k} \tag{11}\] Introducing buoyancy as: \[b=\beta(T-T_{0})g \tag{12}\] and taking \(g\hat{k}\) under the gradient: \[\frac{D\mathbf{u}}{Dt}=-\frac{1}{\rho_{0}}\nabla(p+\rho_{0}gz)+\nu\nabla^{2}\mathbf{u}+b\hat{k} \tag{13}\] We can also use (12) to substitute \(T=b/(\beta g)+T_{0}\) in (4). Because \(T\) appears only under the derivatives and \(\beta\), \(T_{0}\), \(g\) are constants, \(b\) will simply replace \(T\) in (4). Our last fundamental simplification will be neglecting the diffusive processes. This will turn buoyancy into a passive scalar, maintaining the discontinuous character of its distribution. The governing equations of the system are presented below: \[\nabla\cdot\mathbf{u}=0 \tag{14}\] \[\frac{D\mathbf{u}}{Dt}=-\frac{1}{\rho_{0}}\nabla(p+\rho_{0}gz)+b\hat{k} \tag{15}\] \[\frac{Db}{Dt}=0 \tag{16}\]

### Vorticity based formulation

To reduce the number of unknowns we will turn to the vorticity equation. We introduce vorticity, defined as the curl of velocity: \[\mathbf{\omega}=\nabla\times\mathbf{u} \tag{17}\] Equivalently, the component of \(\mathbf{\omega}\) normal to a small closed curve \(C\), bounding an area \(A\), is the limit of the circulation around \(C\) divided by \(A\); in this sense, vorticity can be considered as a circulation density per unit area. Looking for a reverse relation, we apply the Helmholtz decomposition to the velocity field.
Splitting it to divergence-free part (\(\mathbf{u_{\omega}}\)) and curl-free part (\(\mathbf{u_{q}}\)) we obtain: \[\mathbf{u}=\mathbf{u_{\omega}}+\mathbf{u_{q}} \tag{18}\] We will also write down a few useful vector identities: \[\nabla\cdot(\nabla\times\mathbf{\psi})=0 \tag{19a}\] \[\nabla\times(\nabla\phi)=0 \tag{19b}\] \[\nabla\times(\nabla\times\mathbf{\psi})=\nabla(\nabla\cdot\mathbf{\psi})-\nabla\cdot( \nabla\mathbf{\psi}) \tag{19c}\] \[\nabla\times(\phi\mathbf{\psi})=\phi(\nabla\times\mathbf{\psi})+\nabla\phi\times\psi= \phi(\nabla\times\mathbf{\psi})-(\mathbf{\psi}\times\nabla)\phi \tag{19d}\] Using the first two ones, we write: \[\mathbf{u_{\omega}}=\nabla\times\mathbf{\psi} \tag{20a}\] \[\mathbf{u_{q}}=\nabla\phi \tag{20b}\] We will take for granted that all divergence-free fields can be represented as curls of some other field called vector potential. #### 2.2.1 The vorticity-induced component of velocity Substituting (20) to (18), then taking the curl we get: \[\nabla\times\mathbf{u}=\nabla(\nabla\cdot\mathbf{\psi})-\nabla \cdot(\nabla\mathbf{\psi}) \tag{21}\] In (20a), \(\psi\) is determined up to a potential field, due to identity (19b). Therefore we have a freedom to add such \(\nabla f\) that makes it divergence-free: \[\nabla\cdot\mathbf{\psi}=\nabla\cdot(\mathbf{\psi}^{\prime} +\nabla f)=0 \tag{22}\] This can be done by choosing some \(\psi^{\prime}\) and solving Poisson equation for the potential of "correction" \(f\). For convenience we will denote the source term with \(q=\nabla\cdot\mathbf{\psi}^{\prime}\) \[\nabla^{2}f=q \tag{23}\] The equation is linear so we can superimpose velocities (\(v_{i}=\nabla f_{i}\)) induced by separate infinitesimal sources. Let us say that at point \(r\) we got a mass source of strength \(q_{i}dV\), closed in a ball-shaped control volume of radius \(\rho\). Assuming that there is no preferable direction (\(v_{i}=v_{i}(\rho)\)) we can determine the velocity at point \(r_{0}\), belonging to the boundary of the control volume, by the integral form of mass conservation: \[q_{i}dV=4\pi\rho^{2}\frac{df_{i}}{d\rho} \tag{24}\] \[v_{i}(\mathbf{r_{0}})=\frac{df_{i}}{d\rho}=\frac{q_{i}dV}{4\pi\rho^{ 2}} \tag{25}\] which satisfies the boundary condition of vanishing at infinity. By radial integration, we can obtain a potential (let constant be equal 0): \[f_{i}(\mathbf{r_{0}})=-\frac{q_{i}dV}{4\pi\rho}=-\frac{q_{i}dV}{4\pi| \mathbf{r_{0}}-\mathbf{r}|} \tag{26}\] By superposition of all such point sources, we obtain a solution of (23): \[f(\mathbf{r_{0}})=\frac{-1}{4\pi}\int\frac{q(\mathbf{r})}{| \mathbf{r_{0}}-\mathbf{r}|}dV \tag{27}\] Now, having divergence-free \(\psi\) we can go back to equation (21) and simplify it to a system of three Poisson equations: \[\omega_{i}=\nabla^{2}\psi_{i} \tag{28}\] (where \(i\) can be \(x\), \(y\) or \(z\)). 
Solving them in exactly the same way as before, we get: \[\psi_{i}(\mathbf{r_{0}})=-\frac{1}{4\pi}\int\frac{\omega_{i}(\mathbf{r})}{|\mathbf{r_{0}}-\mathbf{r}|}dV \tag{29}\] then: \[\frac{\partial\psi_{i}}{\partial x_{0j}}=\frac{1}{4\pi}\int\frac{\omega_{i}(x _{0j}-x_{j})}{|\mathbf{r_{0}}-\mathbf{r}|^{3}}dV \tag{30}\] Finally we compute the vorticity-induced velocity: \[\mathbf{u_{\omega}}(\mathbf{r_{0}})=\nabla_{0}\times\mathbf{\psi}=\frac{1}{4\pi}\int\frac{\mathbf{\omega}\times(\mbox {\boldmath$r_{0}$}-\mathbf{r})}{|\mathbf{r_{0}}-\mathbf{r}|^{3}}dV \tag{31}\] #### 2.2.2 The remaining component of velocity Taking the divergence of (18) and using continuity (10) on the left hand side we obtain: \[0=\nabla^{2}\phi \tag{32}\] so \(\phi\) is a harmonic function. That means that it can take maximal and minimal values only at the boundary (maximum modulus principle). That is very intuitve if we interpret it as a steady state of diffusion. Boundary condition (1) implies that \(\nabla\phi\) is zero, so the flow is uniquely determined by the vorticity distribution. #### 2.2.3 The vorticity equation We will now take the curl of (13) to take an advantage of the fact that curl of gradient is zero. \[\nabla\times\left(\frac{\partial\boldsymbol{u}}{\partial t}+(\boldsymbol{u} \cdot\nabla)\boldsymbol{u}\right)=\nabla\times\Bigg{(}-\frac{1}{\rho_{0}} \nabla(p+\rho_{0}gz)+\nu\nabla^{2}\boldsymbol{u}+b\hat{k}\Bigg{)} \tag{33}\] Using the identity (19d): \[\nabla\times\left((\boldsymbol{u}\cdot\nabla)\boldsymbol{u}\right)=( \boldsymbol{u}\cdot\nabla)(\nabla\times\boldsymbol{u})-\left(\nabla( \boldsymbol{u}\cdot\nabla)\right)\times\boldsymbol{u} \tag{34}\] and using Schwarz theorem (switching derivatives), we get the vorticity equation: \[\frac{\partial\boldsymbol{\omega}}{\partial t}+(\boldsymbol{u}\cdot\nabla) \boldsymbol{\omega}=(\boldsymbol{\omega}\cdot\nabla)\boldsymbol{u}+\nu\nabla ^{2}\boldsymbol{\omega}+\nabla\times(b\hat{k}) \tag{35}\] The hitch is that the last term is not differentiable at the interface due to assumed discontinuity in the buoyancy distribution. Nevertheless, we will assume that the derivative exists but is singular. Of course, in all other regions it is zero. The source term will generate a vortex sheet at the interface, with the same type of singularity, what suggests a walk-around. Curl of buoyancy points in the azimuthal direction, so does the vorticity. No vorticity will appear in non-interface regions, due to Kelvin's theorem, assuming that viscosity is negligible. Despite some complications, at the current state we can take a step forward and reformulate the problem. We will describe the vortex sheet at the interface as a parametric surface. Due to axial symmetry we will skip the dependence on azimuth \(\phi\). \[\boldsymbol{r}(s,t)=\begin{bmatrix}\rho(s,t)\\ z(s,t)\end{bmatrix} \tag{36}\] \[s\in[-\tfrac{1}{2}\pi,\tfrac{1}{2}\pi]\] Where \(s\) is a parameter in a fixed range, coincident to initial length of the \(\phi=\text{const}\) section of the surface. The boundary conditions, given by the axial symmetry, are: \[\rho(-\tfrac{1}{2}\pi,t)=0\] \[\rho(\tfrac{1}{2}\pi,t)=0 \tag{37}\] \[\frac{\partial z}{\partial s}(-\tfrac{1}{2}\pi,t)=0\] \[\frac{\partial z}{\partial s}(\tfrac{1}{2}\pi,t)=0\] and the initial condition: \[\rho(s,0)=R\cos(s)\] \[z(s,0)=R\sin(s) \tag{38}\] Figure (2) presents both the parametrization and the initial condition. 
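The discretized counterpart of this initial condition is straightforward to set up. The short NumPy sketch below is an illustration, not the authors' code; the node count \(N\) and the radius \(R=1\) are arbitrary choices. As a sanity check it evaluates the enclosed volume through the surface integral \(\pi\int\rho^{2}\,\partial z/\partial s\,ds\), one of the diagnostics derived later in Section 2.3.

```python
import numpy as np

# Minimal sketch of the initial discretization (36)-(38):
# N nodes with parameter s in [-pi/2, pi/2], rho = R cos(s), z = R sin(s), gamma = 0.
N, R = 201, 1.0
s = np.linspace(-np.pi / 2, np.pi / 2, N)
rho, z = R * np.cos(s), R * np.sin(s)
gamma = np.zeros(N)          # the sheet carries no circulation at t = 0

# Trapezoidal evaluation of V = pi * int rho^2 (dz/ds) ds; it should reproduce
# the volume of the sphere, 4/3 pi R^3.
V = np.pi * np.sum(0.5 * (rho[1:] ** 2 + rho[:-1] ** 2) * np.diff(z))
assert abs(V - 4.0 / 3.0 * np.pi * R ** 3) < 1e-3
```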
#### 2.2.4 Nondimensionalization Basic quantities characterizing the system are: * \(b\) [L T\({}^{2}\)] buoyancy * \(\nu\) [L\({}^{2}\) T\({}^{-1}\)] kinematic viscosity * \(R\) [L] initial radius (in brackets we denote the physical dimension) then we can express (buoyancy-linked) time, velocity and vorticity scales as: * T = \(\sqrt{\frac{R}{b}}\) [T] * U = \(\frac{R}{T}=\sqrt{bR}\) [L T\({}^{-1}\)] Figure 2: Region of non-zero buoyancy (blue), covered with parametrized vortex sheet (grey). Shown in half-section. * \(\Omega=\frac{U}{R}=\sqrt{\frac{b}{R}}\) [\(\mathrm{T}^{-1}\)] Let us now introduce non-dimensional quantities (denoted by tilde): \(\mathbf{u}=U\widetilde{\mathbf{u}}\) etc. Substituting this to equation (35) and dividing both sides by the vorticity scale at the very beginning, we obtain: \[\frac{\partial\widetilde{\mathbf{\omega}}}{\partial\widetilde{t}}\frac{1}{\mathrm{T }}+(\widetilde{\mathbf{u}}\cdot\widetilde{\nabla})\widetilde{\mathbf{\omega}}\frac{ \mathrm{U}}{R}=(\widetilde{\mathbf{\omega}}\cdot\widetilde{\nabla})\widetilde{\mathbf{u }}\frac{\mathrm{U}}{R}+\frac{\nu}{R^{2}}\widetilde{\nabla}^{2}\widetilde{\mathbf{ \omega}}+\widetilde{\nabla}\times(1\hat{k})\frac{b}{R\Omega} \tag{39}\] multiplying by the time scale, introducing: \[\varepsilon=\frac{\nu T}{R^{2}}=\frac{\nu}{RU} \tag{40}\] and skipping tyldas for convenience, we get: \[\frac{\partial\mathbf{\omega}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{\omega}=(\mathbf{ \omega}\cdot\nabla)\mathbf{u}+\varepsilon\nabla^{2}\mathbf{\omega}+\nabla\times(1\hat{ k}) \tag{41}\] \(\varepsilon\) might be interpreted as an inverse of the Reynolds number. By taking the limit \(\varepsilon\to 0\) or equivalently \(\mathrm{Re}\rightarrow\infty\) we end up with: \[\frac{\partial\mathbf{\omega}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{\omega}=(\mathbf{ \omega}\cdot\nabla)\mathbf{u}+\nabla\times(b\hat{k}) \tag{42}\] Although non-dimensional body force \(b=1\), we will still write it explicitly, just keeping in mind its value of one. #### 2.2.5 The curl of buoyancy and the circulation density To walk-around the problem of singularities in the vorticity equation (35), we will take a step back to equation (33). Let us recall the definition of a curl operator. \[\Gamma^{u}=\oint_{C}\mathbf{u}\cdot d\mathbf{r} \tag{43}\] we will call the circulation of field \(\mathbf{u}\) around curve \(C\). Then the \(i\)th component of the curl is defined as: \[(\nabla\times\mathbf{u})_{i}=\lim_{A\to 0}\frac{\Gamma^{u}}{A} \tag{44}\] where \(C\) is assumed to be in a plane normal to \(i\) direction (given by unit vector \(\hat{e}_{i}\)) and \(A\) is an area bounded by \(C\). Therefore (using Einstein's summation convention): \[\nabla\times\mathbf{u}=\lim_{A_{i}\to 0}\frac{\Gamma^{u}_{i}}{A_{i}}\hat{e }_{i} \tag{45}\] Now, we will apply identity (19d) to equation (33), as before, but this time using the definition of the curl. 
\[\frac{d}{dt}\Big{(}\frac{\Gamma^{u}_{i}}{A_{i}}\mathbf{e}_{i}\Big{)}=\Big{(}\frac {\Gamma^{u}_{i}}{A_{i}}\hat{e}_{i}\,\cdot\nabla\Big{)}\mathbf{u}+\frac{\Gamma^{b} _{i}}{A_{i}} \tag{46}\] Expanding the vortex stretching term in cylindrical coordinates (keeping the axial symmetry in mind) and the time derivative with a product rule: \[\frac{1}{A_{\phi}}\frac{d\Gamma_{\phi}^{u}}{dt}+\Gamma_{\phi}^{u}\Big{(}-\frac{ 1}{A_{\phi}^{2}}\frac{dA_{\phi}}{dt}\Big{)}=\frac{\Gamma_{\phi}^{u}}{A_{\phi}} \frac{u_{\rho}}{\rho}+\frac{\Gamma_{\phi}^{b}}{A_{\phi}} \tag{47}\] The total time derivative of \(A_{\phi}\) can be deduced from the incompressibility: \[\frac{dV}{dt}=\frac{d(A_{\phi}\rho\delta\phi)}{dt}=0 \tag{48}\] \[\frac{dA_{\phi}}{dt}\rho\delta\phi+A_{\phi}\frac{d\rho}{dt}\delta\phi=0 \tag{49}\] so, by the definition of \(u_{\rho}\): \[\frac{dA_{\phi}}{dt}=-A_{\phi}\frac{u_{\rho}}{\rho} \tag{50}\] now substituting it to (47): \[\frac{1}{A_{\phi}}\frac{d\Gamma_{\phi}^{u}}{dt}+\Gamma_{\phi}^{u}\Big{(}\frac {1}{A_{\phi}}\frac{u_{\rho}}{\rho}\Big{)}=\frac{\Gamma_{\phi}^{u}}{A_{\phi}} \frac{u_{\rho}}{\rho}+\frac{\Gamma_{\phi}^{b}}{A_{\phi}} \tag{51}\] we see that this term cancels out with the vortex stretching. What is even more important, we can get rid of \(A_{\phi}\) and remove the singularity (we will omit index \(\phi\)): \[\frac{d\Gamma^{u}}{dt}=\Gamma^{b} \tag{52}\] The last challenge left is to compute these two circulations. We will refer to fig. (3). To compute the \(\Gamma^{b}\) we will enclose a small piece of the interface (that is given by \(\delta s\)) in an oriented rectangle. Note that the orientation is clockwise due to the direction of \(\phi\). One can easily notice, that only the left side will contribute to circulation. we can express its length as: \[\delta z=\frac{\partial z}{\partial s}\delta s \tag{53}\] then: \[\Gamma^{b}=b\,\delta z=b\frac{\partial z}{\partial s}\delta s \tag{54}\] Now, it would be convenient to express \(\Gamma^{u}\) also in terms of \(\delta s\). We will do this by introducing circulation density per parameter length \(\gamma\): \[\Gamma^{u}=\gamma\delta s \tag{55}\] It characterizes a uniform circulation around a ring of thickness \(\delta s\). Substituting (54) and (55) to (52), taking an advantage of the fact that \(\delta s\) by design does not depend on time and canceling it out, we obtain: \[\frac{d\gamma}{dt}=b\frac{\partial z}{\partial s} \tag{56}\] The corresponding Biot-Savart formula can be obtained by substituting: \[\mathbf{\omega}dV=\hat{\phi}\Gamma^{u}dl=\hat{\phi}\gamma ds\rho d\phi \tag{57}\] (\(dl\) is an infinitesimal length in the azimuthal direction) to (31), what gives: \[\mathbf{u}(\mathbf{r_{0}})=\frac{1}{4\pi}\int_{0}^{2\pi}\int_{-\pi/2}^{\pi/2}\frac{ \gamma\hat{\phi}\times(\mathbf{r_{0}}-\mathbf{r})}{|\mathbf{r_{0}}-\mathbf{r}|^{3}}\rho ds\,d\phi \tag{58}\] ### Evolution of chosen integral quantities We can deduce the evolution of certain integral quantities characterizing the region of non-zero buoyancy (we will call it interior). This can be used to monitor the simulation quality and possibly improve the time-stepping scheme. Volume integrals are rather inconvenient due to the difficulties of dynamic volume discretization. For that reason, we will transform proper expressions to surface integrals using Gauss and Stokes theorems. We will start by recalling a few useful formulas in cylindrical coordinates. 
Figure 3: A small piece of the vortex sheet and enclosing curve

The derivatives of unit vectors and their cross products: \[\frac{d\hat{\rho}}{d\phi}=\hat{\phi} \tag{59a}\] \[\frac{d\hat{\phi}}{d\phi}=-\hat{\rho} \tag{59b}\] \[\hat{z}\times\hat{\rho}=\hat{\phi} \tag{59c}\] \[\hat{\phi}\times\hat{z}=\hat{\rho} \tag{59d}\] \[\hat{\rho}\times\hat{\phi}=\hat{z} \tag{59e}\] It would also be useful to decompose the unit vectors given at point \(A\) into the unit vectors at point \(B\) (fig. 4). \[\hat{\rho}_{A}=\hat{\rho}_{B}\cos(\Delta\phi)+\hat{\phi}_{B}\sin(\Delta\phi) \tag{60a}\] \[\hat{\phi}_{A}=-\hat{\rho}_{B}\sin(\Delta\phi)+\hat{\phi}_{B}\cos(\Delta\phi) \tag{60b}\] where \(\Delta\phi=\phi_{A}-\phi_{B}\).

Figure 4: Unit vectors of points \(A\) and \(B\) in axonometric view and seen from above

The divergence of a vector field \(\mathbf{q}\) in the axisymmetric case is: \[\nabla\cdot\mathbf{q}=\frac{1}{\rho}\frac{\partial(\rho\,q_{\rho})}{\partial\rho}+\frac{\partial q_{z}}{\partial z} \tag{61}\] Let the parametrized, axisymmetric surface be given by: \[\mathbf{r}(s)=\rho(s)\hat{\rho}(\phi)+z(s)\hat{z} \tag{62}\] then its infinitesimal piece is given by: \[d\mathbf{S}=\frac{\partial\mathbf{r}}{\partial\phi}d\phi\times\frac{\partial\mathbf{r}}{\partial s}ds=(\rho\hat{\phi}d\phi)\times\Big{(}\frac{\partial\rho}{\partial s}\hat{\rho}+\frac{\partial z}{\partial s}\hat{z}\Big{)}ds=\Big{(}\frac{\partial z}{\partial s}\hat{\rho}-\frac{\partial\rho}{\partial s}\hat{z}\Big{)}\rho d\phi ds \tag{63}\] In some cases, integration over the azimuth is trivial and it will be convenient to introduce also: \[d\mathbf{n}=\Big{(}\frac{\partial z}{\partial s}\hat{\rho}-\frac{\partial\rho}{\partial s}\hat{z}\Big{)}ds \tag{64a}\] \[d\mathbf{\tau}=\Big{(}\frac{\partial\rho}{\partial s}\hat{\rho}+\frac{\partial z}{\partial s}\hat{z}\Big{)}ds \tag{64b}\] understood, respectively, as the vector normal and the vector tangent to the \(\phi=\text{const}\) section of the surface.

#### 2.3.1 Mass

We assumed constant density, so mass conservation in the interior is expressed by the constancy of the enclosed volume: \[\frac{dV}{dt}=\frac{d}{dt}\int_{V}1dV=0 \tag{65}\] We can interpret the integrand as the divergence of some vector field of unit divergence, then use the Gauss theorem. In the end, we also perform the trivial integration over the azimuth. Below are two exemplary choices: \[V=\int_{V}\nabla\cdot\Big{(}\frac{\rho}{2}\hat{\rho}\Big{)}dV=\oiint\frac{\rho}{2}\hat{\rho}\cdot d\mathbf{S}=\pi\int_{-\pi/2}^{\pi/2}\rho^{2}\frac{\partial z}{\partial s}ds \tag{66a}\] \[V=\int_{V}\nabla\cdot z\hat{z}dV=\oiint z\hat{z}\cdot d\mathbf{S}=-2\pi\int_{-\pi/2}^{\pi/2}z\rho\frac{\partial\rho}{\partial s}ds \tag{66b}\]

#### 2.3.2 Total circulation in meridional plane

After trivial integration of (56), we obtain the increase of the total circulation \[\frac{d}{dt}\int_{-\pi/2}^{\pi/2}\gamma ds=b(z_{2}-z_{1}) \tag{67}\] where \(z_{1}\) and \(z_{2}\) denote the \(z\) coordinates at \(s\) equal to \(-\pi/2\) and \(\pi/2\) respectively; their difference is the thickness of the buoyant region along the \(z\) axis.

#### 2.3.3 Moment of vorticity

The first moment is a conserved quantity for cases with no buoyancy. Its further importance is highlighted in section 4.
\[\frac{d}{dt}\int\mathbf{r}\times\mathbf{\omega}dV=\frac{d}{dt}\int(\rho\hat{\rho}+z\hat {z})\times\mathbf{\omega}dV \tag{68}\] \[=\frac{d}{dt}\int_{-\pi/2}^{\pi/2}\int_{0}^{2\pi}(\rho\hat{\rho}+z\hat{z}) \times\hat{\phi}\gamma\rho d\phi ds\] \[=\frac{d}{dt}\int_{-\pi/2}^{\pi/2}\int_{0}^{2\pi}(\rho\hat{z}-z\hat{\rho}) \gamma\rho d\phi ds\] the \(\hat{\rho}\) component will be zero due to axial symmetry, so we will focus on the other one. By the product rule, we get: \[=\hat{z}\int_{-\pi/2}^{\pi/2}\int_{-\pi/2}^{\pi/2}\Bigg{(}2\rho\frac{d\rho}{ dt}\gamma+\frac{d\gamma}{dt}\rho^{2}\Bigg{)}dsd\phi\] where \(d\rho/dt\) is just \(v_{\rho}\) determined by formula (58). This means that the first term on the right-hand side is built of all the interactions between pairs of points, which sums up to zero by the following argument: Consider two points belonging to the vortex sheet - \(A\) and \(B\). Contribution from the interaction where \(A\) is the probing point and \(B\) is the source is: \[i_{AB}=\Bigg{(}\frac{\hat{\rho}_{A}}{2\pi}\cdot\frac{\gamma_{B}\hat{\phi}_{B} \times(\mathbf{r_{A}}-\mathbf{r_{B}})}{|\mathbf{r_{A}}-\mathbf{r_{B}}|^{3}}\rho_{B}ds_{B}\,d \phi_{B}\Bigg{)}\gamma_{A}\rho_{A}ds_{A}ds_{A} \tag{69}\] It is enough to consider only the \(\hat{z}\) component of the difference in the numerator because the rest will give \(\hat{z}\) after the cross product. Using the formula (60a) we can take the dot product and obtain: \[i_{AB}=\Bigg{(}\gamma_{B}\frac{(z_{A}-z_{B})\cos(\phi_{A}-\phi_{B})}{2\pi|\mathbf{r _{A}}-\mathbf{r_{B}}|^{3}}\rho_{B}ds_{B}\,d\phi_{B}\Bigg{)}\gamma_{A}\rho_{A}ds_{ A}ds_{A} \tag{70}\] We see that, if we swap \(A\) and \(B\), the formula only changes sign, so \(i_{AB}+i_{BA}=0\) Going back to equation (68), substituting from (56) and computing the trivial integral over azimuth: \[\frac{d}{dt}\int\mathbf{r}\times\mathbf{\omega}dV=2\pi\hat{z}\int_{-\pi/2}^{\pi/2}b \frac{\partial z}{\partial s}\rho^{2}ds \tag{71}\] recalling formula (66a): \[\frac{d}{dt}\int\mathbf{r}\times\mathbf{\omega}dV=2bV\hat{z} \tag{72}\] #### 2.3.4 Generalized momenta To represent the volume integral of velocity as a divergence, we will use the Stokes stream function: \[u_{\rho}=-\frac{1}{\rho}\frac{\partial\psi}{\partial z} \tag{73a}\] \[u_{z}=\frac{1}{\rho}\frac{\partial\psi}{\partial\rho} \tag{73b}\] Starting with the momentum conjugated with the radial coordinate, and proceeding with Gauss divergence theorem: \[\int u_{\rho}dV=-\int\frac{1}{\rho}\frac{\partial\psi}{\partial z}dV=-\int \nabla\cdot\Big{(}\frac{\psi}{\rho}\hat{z}\Big{)}dV=-\int\Big{(}\frac{\psi}{ \rho}\hat{z}\Big{)}\cdot d\mathbf{S} \tag{74}\] this can be expanded, integrated in \(\phi\) with ease and integrated by parts in \(s\): \[=2\pi\int_{-\pi/2}^{\pi/2}\psi\frac{\partial\rho}{\partial s}ds=2\pi(\psi\rho )\Big{|}_{-\pi/2}^{\pi/2}-2\pi\int_{-\pi/2}^{\pi/2}\rho\frac{\partial\psi}{ \partial s}ds \tag{75}\] the first term is zero by the boundary conditions (37) for \(\rho\) and the second can be further expanded with the chain rule: \[=-2\pi\int_{-\pi/2}^{\pi/2}\rho\Big{(}\frac{\partial\psi}{\partial\rho}\frac{ \partial\rho}{\partial s}+\frac{\partial\psi}{\partial z}\frac{\partial z}{ \partial s}\Big{)}ds=2\pi\int_{-\pi/2}^{\pi/2}\rho^{2}\Big{(}-u_{z}\frac{ \partial\rho}{\partial s}+u_{\rho}\frac{\partial z}{\partial s}\Big{)}ds \tag{76}\] so we can conclude: \[\int u_{\rho}dV=2\pi\int\rho^{2}\mathbf{u}\cdot d\mathbf{n}=\int\rho\mathbf{u}\cdot d\mathbf{S} \tag{77}\] Proceeding analogically with \(z\) component: \[\int 
u_{z}dV=\int\frac{1}{\rho}\frac{\partial}{\partial\rho}\Big{(}\frac{\psi }{\rho}\rho\Big{)}dV=\int\Big{(}\frac{\psi}{\rho}\hat{\rho}\Big{)}\cdot d\mathbf{S} \tag{78}\] expanding the dot product, integrating in \(\phi\), then integrating by parts in \(s\): \[=2\pi\int_{-\pi/2}^{\pi/2}\psi\frac{\partial z}{\partial s}ds=2\pi(\psi z) \Big{|}_{-\pi/2}^{\pi/2}-2\pi\int_{-\pi/2}^{\pi/2}z\frac{\partial\psi}{ \partial s}ds \tag{79}\] the boundary term is zero because both boundaries of the curve lay on the axial streamline \(\psi=0\). \[=-2\pi\int_{-\pi/2}^{\pi/2}z\Big{(}\frac{\partial\psi}{\partial\rho}\frac{ \partial\rho}{\partial s}+\frac{\partial\psi}{\partial z}\frac{\partial z}{ \partial s}\Big{)}ds=2\pi\int_{-\pi/2}^{\pi/2}z(-u_{z}\frac{\partial\rho}{ \partial s}+u_{\rho}\frac{\partial z}{\partial s})\rho ds \tag{80}\] and we can conclude: \[\int u_{z}dV=2\pi\int z\rho\mathbf{u}\cdot d\mathbf{n}=\int z\mathbf{u}\cdot d\mathbf{S} \tag{81}\] ### Axial symmetry of induced velocity Using the Biot-Savart formula (58), we will now take an advantage of the axial symmetry and integrate the induced velocity over the azimuth. Starting by expanding the numerator: \[\gamma\hat{\phi}\times(\mathbf{r_{0}}-\mathbf{r})=\gamma\hat{\phi}\times(\rho_{0}\hat {\rho}_{0}+z_{0}\hat{z}-\rho\hat{\rho}-z\hat{z}) \tag{82}\] and substituting for \(\hat{\rho}\) with the help of formula (60a): \[=\gamma\hat{\phi}\times\Big{(}\hat{\rho}(\rho_{0}\cos(\Delta\phi)-\rho)+\hat{ \phi}\rho_{0}\sin(\Delta\phi)+\hat{z}(z_{0}-z)\Big{)} \tag{83}\] where \(\Delta\phi=\phi_{0}-\phi\) Now, taking the cross product according to formulas (59): \[=\gamma\Big{(}\hat{z}(\rho-\rho_{0}\cos(\Delta\phi))+\hat{\rho}(z_{0}-z)\Big{)} \tag{84}\] and using (60a) again; but this time, keeping in mind that \(\hat{\phi}_{0}\) component has to integrate to zero by axial symmetry. Therefore, we will just skip this term and write: \[\gamma\hat{\phi}\times(\mathbf{r_{0}}-\mathbf{r})=\gamma\Big{(} \hat{z}(\rho-\rho_{0}\cos(\Delta\phi))+\hat{\rho}_{0}\cos(\Delta\phi)(z_{0}-z) \Big{)} \tag{85}\] Proceeding to the denominator of (58), we expand it and take the dot product. Using (60a) we can notice that \(\hat{\rho}_{0}\cdot\hat{\rho}=\cos(\Delta\phi)\). \[|\mathbf{r_{0}}-\mathbf{r}|^{3}=(\hat{\rho}_{0}\rho_{0}+\hat {z}z_{0}-\hat{\rho}\rho-\hat{z}z)^{3} \tag{86}\] \[=\Big{(}\rho_{0}^{2}+\rho^{2}-2\rho\rho_{0}\cos(\Delta\phi)+(z_{0}-z)^{2} \Big{)}^{3/2} \tag{87}\] Now we can plug both results (85) and (87) to (58) obtaining: \[\mathbf{u}(\mathbf{r_{0}})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi /2}\int_{0}^{2\pi}\gamma\frac{\hat{z}(\rho-\rho_{0}\cos(\Delta\phi))+\hat{\rho }_{0}\cos(\Delta\phi)(z_{0}-z)}{\Big{(}\rho_{0}^{2}+\rho^{2}-2\rho\rho_{0}\cos (\Delta\phi)+(z_{0}-z)^{2}\Big{)}^{3/2}}\rho d\phi ds \tag{88}\] Proceeding in an analogical way as in [9] (art. 161), we introduce: \[{R_{1}}^{2}=(\rho_{0}-\rho)^{2}+(z_{0}-z)^{2} \tag{89a}\] \[{R_{2}}^{2}=(\rho_{0}+\rho)^{2}+(z_{0}-z)^{2} \tag{89b}\] which can be interpreted as the least (\(R_{1}\)) and the greatest (\(R_{2}\)) distances from the circular cross-section (\(z\) const) to the point \(\mathbf{r_{0}}\). If we now add equations (89) and divide by 2, we obtain: \[\frac{{R_{1}}^{2}+{R_{2}}^{2}}{2}=\rho_{0}^{2}+\rho^{2}+(z_{0}-z)^{2} \tag{90}\] by analogous subtraction: \[\frac{{R_{2}}^{2}-{R_{1}}^{2}}{2}=2\rho_{0}\rho \tag{91}\] We can now use these formulas to transform the denominator of (88). 
For convenience we will skip the root, considering just the dot product: \[|\mathbf{r_{0}}-\mathbf{r}|^{2}=\frac{1}{2}\Big{(}{R_{1}} ^{2}+{R_{2}}^{2}-({R_{2}}^{2}-{R_{1}}^{2})\cos(\Delta\phi)\Big{)} \tag{92}\] Now, using the double-angle formula: \[=\frac{1}{2}\Big{(}{R_{1}}^{2}+{R_{2}}^{2}-({R_{2}}^{2}-{R_{1}}^{2})\big{(}\, \cos^{2}(\frac{\Delta\phi}{2})-\sin^{2}(\frac{\Delta\phi}{2})\,\big{)}\Big{)}\] some further algebra: \[={R_{1}}^{2}\,\cos^{2}(\frac{\Delta\phi}{2})+{R_{2}}^{2}\,\sin^{2}(\frac{ \Delta\phi}{2}) \tag{93}\] \[={R_{2}}^{2}\left(\,\big{(}\frac{R_{1}}{R_{2}}\big{)}^{2}\cos^{2}(\frac{\Delta \phi}{2})+\sin^{2}(\frac{\Delta\phi}{2})\,\right) \tag{94}\] trigonometric identity again: \[={R_{2}}^{2}\left(1-\,k^{2}\,\cos^{2}(\frac{\Delta\phi}{2})\,\right) \tag{95}\] where \[k^{2}=1-\left(\frac{R_{1}}{R_{2}}\right)^{2} \tag{96}\] Let us also substitute: \[\theta=\frac{\pi}{2}-\frac{\Delta\phi}{2} \tag{97}\] Then, the denominator will take the form: \[|\mathbf{r_{0}}-\mathbf{r}|^{2}={R_{2}}^{2}\bigg{(}1-\,k^{ 2}\,\sin^{2}\theta\,\bigg{)} \tag{98}\] and for the numerator of (88) we use double-angle formula: \[\cos(\Delta\phi)=\cos^{2}(\frac{\Delta\phi}{2})-\sin^{2}(\frac{\Delta\phi}{2}) \tag{99}\] \[=1-2\sin^{2}(\frac{\pi}{2}-\theta)=1-2\cos^{2}\theta\] to obtain: \[\hat{\phi}\times(\mathbf{r_{0}}-\mathbf{r})=\hat{z}\Big{(} \rho-\rho_{0}+2\rho_{0}\cos^{2}\theta)\Big{)}+\hat{\rho}_{0}(1-2\cos^{2} \theta)(z_{0}-z) \tag{100}\] We can now represent the radial component of (88) as follows: \[u_{\rho}(s_{0})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\,g\,\rho\,(z_{0}-z)\int_{ \pi/2}^{3\pi/2}\,\frac{(1-2\cos^{2}\theta)\,(2\,d\theta)}{{R_{2}}^{3}\bigg{(} 1-\,k^{2}\,\sin^{2}\theta\,\bigg{)}^{3/2}}\,ds\] \[=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\,\frac{g\,\rho\,(z_{0}-z)}{{R_{2}}^{3}} \int_{0}^{\pi}\frac{2\,(1-2\cos^{2}\theta)\,d\theta}{\bigg{(}1-\,k^{2}\,\sin ^{2}\theta\,\bigg{)}^{3/2}}\,ds \tag{101}\] For convenience, we will split integral over \(\theta\) into two parts, defining the following: \[I_{1}=\int_{0}^{\pi}\frac{2\,d\theta}{\bigg{(}1-k^{2}\,\sin^{2}\theta\bigg{)} ^{3/2}} \tag{102}\] \[I_{2}=\int_{0}^{\pi}\frac{4\cos^{2}\theta\,d\theta}{\bigg{(}1-k^{2}\,\sin^{2} \theta\bigg{)}^{3/2}} \tag{103}\] and then: \[u_{\rho}(s_{0})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\frac{\gamma\,r\,(z_{0}-z)}{{R_ {2}}^{3}}(I_{1}-I_{2})\,ds \tag{104}\] proceeding in a similar fashion with the vertical component of (88), we obtain: \[u_{z}(s_{0})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\frac{\gamma\,\rho\,\rho_{0}}{{ R_{2}}^{3}}\bigg{(}\frac{\rho-\rho_{0}}{\rho_{0}}I_{1}+I_{2}\bigg{)}\,ds \tag{105}\] Integrals \(I_{1}\) and \(I_{2}\) can be computed analytically resulting in: \[I_{1}=\frac{4}{1-k^{2}}E(k^{2}) \tag{106}\] \[I_{2}=\frac{8}{k^{2}}\bigg{(}K(k^{2})-E(k^{2})\bigg{)} \tag{107}\] where \(K()\) and \(E()\) are the complete elliptic integrals of the first and the second kind respectively. An equivalent set of equations was derived in [11]. ### Induced velocity is finite Formulas (104) and (105) have a singularity, when \(s\to s_{0}\) i.e. when we approach the case of autoinduction of velocity. In this case \(k\to 1\), what gives a singularity in \(I_{2}\) due to \(K\) and in \(I_{1}\) due to \((1-k^{2})^{-1}\). Nevertheless, the resulting velocity intuitively should be finite. To show that, we will consider a small piece of the vortex sheet. Without losing generality, we can assume that it has \(s_{0}=0\). Because \(\gamma\) is always zero at the \(z\) axis, we will also assume that \(\rho>0\). 
We will start by expanding \(\rho(s)\) and \(z(s)\) in a Taylor series around \(s_{0}\) and substituting this into \(k^{2}\): \[k^{2}=1-\frac{\big{(}-\rho_{0}^{\prime}+\mathcal{O}(s)\big{)}^{2}+\big{(}-z_ {0}^{\prime}+\mathcal{O}(s)\big{)}^{2}}{\big{(}2\rho_{0}+\rho_{0}^{\prime}s+ \mathcal{O}(s^{2})\big{)}^{2}+\big{(}-z_{0}^{\prime}s+\mathcal{O}(s^{2}) \big{)}^{2}}s^{2} \tag{108}\] thus: \[k\approx 1+\mathcal{O}(s^{2}) \tag{109}\] Then we can use an asymptotic expansion from [3] for \(K()\) when \(k\to 1^{-}\): \[K(k)\approx\ln\Big{(}\frac{4}{\sqrt{1-k^{2}}}\Big{)} \tag{110}\] \[K(k)\approx\ln\Big{(}\frac{4}{|s|}\Big{)} \tag{111}\] and this integrates to a finite value: \[\int_{0}^{\Delta s}K(k)ds\approx\int_{0}^{\Delta s}\ln\Big{(}\frac{4}{|s|} \Big{)}ds=\Delta s(1-\ln\Big{(}\frac{\Delta s}{4}\Big{)}) \tag{112}\] All terms including \(K()\), by partial integration, give also a finite value. Proceeding to the second source of the problem: we can note that \(I_{1}\) is always multiplied by \((\rho_{0}-\rho)\) or \((z_{0}-z)\) which are both \({\cal O}(s)\). Using (109), we can write: \[I_{1}=\frac{4}{{\cal O}(s^{2})}E(k^{2}) \tag{113}\] so for example: \[I_{1}(z_{0}-z)\approx\frac{-4z_{0}^{\prime}}{s}E(k^{2}) \tag{114}\] and this integrates (in a sense of Cauchy principal value) to a finite quantity. Therefore, the overall induced velocity stays finite. ### Velocity smoothing Although the induced velocity is finite, formulas (105), (104) are still troublesome. The problem of singularity in the integrand might be solved by various methods and an example is presented in [16], where a function with the same singularity is subtracted from the integrand. No matter of solution details, some areas of the sheet quickly get noisy and the resulting structure is growing cancer-like. The irregular movement of the nodes was studied in [6] and [10] and is associated with unresolved Kelvin-Helmholtz instability. Modeling a physical vortex sheet, of finite thickness, with an idealized two-dimensional surface, introduces instability in all small wavenumbers. Its development can be launched by e.g. finite precision arithmetics. A well-established solution to this problem can be found in [7], [8], [11]. The core idea is to modify the Biot-Savart formula by adding a smoothing parameter \(\delta\) to the denominator, i.e.: \[\mathbf{u}(\mathbf{r_{0}})=\frac{1}{4\pi}\int_{0}^{2\pi} \int_{-\pi/2}^{\pi/2}\frac{\gamma\hat{\phi}\times(\mathbf{r_{0}}- \mathbf{r})}{\left(|\mathbf{r_{0}}-\mathbf{r}|^{2} +\delta^{2}\right)^{3/2}}\rho ds\,d\phi \tag{115}\] The exact value of \(\delta\) determines the amount of damping that is applied to high wavenumbers and thus limits the range of scales present in a flow. The figure (5) presents the smoothed velocity field. [2] shows that such a modification approaches the Lamb-Oseen solution of the viscous vortex decay, where \[\delta(t)=\sqrt{5.03\nu t} \tag{116}\] although we used \(\delta\) of fixed size, as in most of the articles mentioned. An alternative, Gaussian smoothing, leveraging Lamb-Oseen vortex directly is presented in [1]. The comparison of the two methods might be found in [14]. Lagrangian, vorticity-based formulations of the flow, that include some kind of smoothing in the Biot-Savart formula are known in the literature as vortex blob method. ### Summary of the governing equations Below, we present the governing equations that take an advantage of the axial symmetry (section 2.4) and include smoothing (section 2.6), applied in exactly the same way as in [11]. 
\[\frac{d\gamma}{dt}=b\frac{\partial z}{\partial s} \tag{117}\] \[\frac{d\rho}{dt}=u_{\rho}^{\delta} \tag{118}\] \[\frac{dz}{dt}=u_{z}^{\delta} \tag{119}\] \[u_{\rho}^{\delta}(s_{0})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\,\frac{\gamma\, \rho\,(z_{0}-z)}{{R_{2}^{\delta}}^{3}}\,(I_{1}^{\delta}-I_{2}^{\delta})\,ds \tag{120}\] \[u_{z}^{\delta}(s_{0})=\frac{1}{4\pi}\int_{-\pi/2}^{\pi/2}\,\frac{\gamma\,\rho \,}{{R_{2}^{\delta}}^{3}}\bigg{(}\left(\rho-\rho_{0}\right)I_{1}^{\delta}+ \rho_{0}\,I_{2}^{\delta}\bigg{)}ds \tag{121}\] where: \[I_{1}^{\delta}=\frac{4}{1-{k_{\delta}}^{2}}E({k_{\delta}}^{2}) \tag{122}\] Figure 5: Influence of smoothing parameter on the velocity field \[I_{2}^{\delta} = \frac{8}{{k_{\delta}}^{2}}\bigg{(}K({k_{\delta}}^{2})-E({k_{\delta}} ^{2})\bigg{)} \tag{123}\] \[{k_{\delta}}^{2}=1-\bigg{(}\frac{R_{1}^{\delta}}{R_{2}^{\delta}} \bigg{)}^{2} \tag{124}\] \[{R_{1}^{\delta}}^{2} = (\rho_{0}-\rho)^{2}+(z_{0}-z)^{2}+\delta^{2} \tag{125}\] \[{R_{2}^{\delta}}^{2} = (\rho_{0}+\rho)^{2}+(z_{0}-z)^{2}+\delta^{2} \tag{126}\] \(K()\) and \(E()\) denote complete elliptic integrals of the 1st and 2nd kind respectively. \(\rho_{0}\) is a shortcut for \(\rho(s_{0})\) etc. ## 3 Numerical algorithm ### General structure of the code We demand from the discretization method the two following features: the ability to add new nodes in regions of intense stretching and the ability to merge nodes that are close enough. For this purpose, the vortex sheet is represented as a set of nodes and a set of segments connecting the nodes. Such a complication will be more understandable in subsection 3.4. Allowing the merging of nearby nodes, we will change the sheet's topology. A single node could be connected with more than two other nodes so the sheet could no longer be considered a continuous, oriented curve, but rather an oriented graph. Each node consists of: * an integration weight, such that: \(\sum_{i=1}^{N}f_{i}\mathtt{iW}_{i}\approx\int_{-\pi/2}^{\pi/2}fds\), that is scheme-dependent * a list of segments linked to the node * position, alternatively represented by vector \(r\) * circulation density and possibly some other values, necessary for memory management, etc. We will denote by Nn the total number of nodes and use the index \(i\) while mentioning a particular one. Each segment consists of: * indices of the starting and the ending nodes of that segment * parameter length * if the segment constitutes a boundary of a buoyant region and possibly some other values. The total number of segments will be denoted by \(\mathtt{Ns}\) and each particular one, will be distinguished by the index \(j\). We will also denote the values at the starting node by subscript \(s\) and at the ending node by \(e\). We resigned from keeping the values of the parameter (\(s\)) in nodes because it would be difficult to track when nodes are merged. The same purpose can be reached by keeping \(ds\) of every segment. ### Computation of values at nodes Quantities are assumed to vary linearly over the segments. Accessing nodes from the previous or next segment for higher-order schemes would be difficult because, due to merging, the order of segments is not properly defined (there could be multiple previous or next segments). Nevertheless, it could be done by turning segments into higher-order finite elements, with additional nodes inside. Although such improvement is left for further work and by now, the trapezoidal rule is used, which is second order in \(\mathtt{ds}\). 
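Before detailing the individual node quantities, it may help to see how the smoothed kernel (120)-(126) is evaluated once the nodes carry positions, \(\gamma\) and trapezoidal weights iW. The following is a minimal Python sketch (not the code used in this work); all function and argument names are illustrative. It relies on SciPy's `ellipk`/`ellipe`, which take the parameter \(m=k^2\) as their argument.

```python
import numpy as np
from scipy.special import ellipk, ellipe  # complete elliptic integrals K(m), E(m), with m = k^2

def induced_velocity(rho0, z0, rho, z, gamma, w, delta):
    """Smoothed velocity (u_rho, u_z) induced at (rho0, z0) by all sheet nodes.

    rho, z, gamma : node coordinates and circulation density gamma(s_i)
    w             : trapezoidal integration weights iW_i (approximating ds)
    delta         : smoothing parameter from eq. (115)
    Implements eqs. (120)-(126); a sketch, not the authors' implementation.
    """
    R1sq = (rho0 - rho) ** 2 + (z0 - z) ** 2 + delta ** 2     # eq. (125)
    R2sq = (rho0 + rho) ** 2 + (z0 - z) ** 2 + delta ** 2     # eq. (126)
    k2 = 1.0 - R1sq / R2sq                                    # eq. (124)
    E, K = ellipe(k2), ellipk(k2)
    I1 = 4.0 * E / (1.0 - k2)                                 # eq. (122)
    # eq. (123); as k2 -> 0 (node on the axis), K - E -> pi*k2/4, so I2 -> 2*pi
    I2 = np.where(k2 > 1e-12, 8.0 * (K - E) / np.maximum(k2, 1e-12), 2.0 * np.pi)
    common = gamma * rho * w / (4.0 * np.pi * R2sq ** 1.5)
    u_rho = np.sum(common * (z0 - z) * (I1 - I2))             # eq. (120)
    u_z = np.sum(common * ((rho - rho0) * I1 + rho0 * I2))    # eq. (121)
    return u_rho, u_z
```

Because \(\delta>0\) keeps \(k_{\delta}^{2}\) strictly below one, the kernel is regular even at the probing node itself, so this naive evaluation may simply sum over all nodes.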
#### 3.2.1 Integral weights Computation of the trapezoidal integral weights is done by splitting \(\mathtt{ds}\) from every segment in half, between its starting point and ending point. From the "node's perspective", we sum up all the \(\mathtt{ds}\) from segments listed in \(\mathtt{seg[]}\) and divide by 2: \[\mathtt{iW}_{i}=\tfrac{1}{2}\sum_{j\in\mathtt{seg}}\mathtt{ds}_{j} \tag{127}\] #### 3.2.2 Time derivative of the circulation density The generation of \(\gamma\) at every node is computed as follows: equation (56) is integrated over the parameter length of every segment giving the total increase of circulation around it. Then, the circulation generation is split among the starting and ending node of the segment. At each node, after it is gathered from the adjacent segments, it is divided by \(\mathtt{iW}\) and stored as \(\mathtt{g}\). This results in the following formula: \[\frac{d\gamma_{i}}{dt}=\frac{b}{\mathtt{iW}_{i}}\sum_{j\in\mathtt{seg}}\tfrac {1}{2}(z_{e}-z_{s})_{j} \tag{128}\] which can also be seen as a linear staggered grid. #### 3.2.3 Velocities and coordinates Velocities were computed in a procedure described in section 4.2. Time integration was performed with the 4th-order Runge-Kutta scheme. ### Refinement The fundamental constant characterizing general discretization is \[\mathtt{ds0}=\frac{\pi}{\mathtt{Nn}_{0}-1} \tag{129}\] which is the initial length of the segments, both, in the parameter sense and in the nondimensional physical sense. A segment, which exceeds prescribed maximal elongation \(\kappa_{E}\): \[(\boldsymbol{r}_{e}-\boldsymbol{r}_{s})^{2}>\mathtt{ds0}^{2}\kappa_{E}^{2} \tag{130}\] is split in half. Let us denote it by \(1\) and let it start in node \(s\) and end in node \(e\), as depicted on fig. (6). A new node \(n\) is inserted just between the starting and the ending nodes, averaging their values: \[\boldsymbol{r}_{n} =\tfrac{1}{2}(\boldsymbol{r}_{s}+\boldsymbol{r}_{e}) \tag{131}\] \[\gamma_{n} =\tfrac{1}{2}(\gamma_{s}+\gamma_{e})\] The segment \(1\) is replaced by segment \(2\) given between nodes \(s\) and \(n\) and segment \(3\), between \(n\) and \(e\). The initial \(\mathtt{ds}\) is split in half so: \[\mathtt{ds}_{2} =\tfrac{1}{2}\mathtt{ds}_{1}\] \[\mathtt{ds}_{3} =\tfrac{1}{2}\mathtt{ds}_{1} \tag{132}\] and \(\mathtt{iW}\) of all three nodes are updated. The proper update of \(\mathtt{seg[]}\) arrays is of course also needed. \(\mathtt{bdry}\) is inherited by both segments without changes. Below we will check that the above procedure preserves circulation, denoting updated values with primes: \[\mathtt{iW}_{s}^{\prime} =\mathtt{iW}_{s}-\tfrac{1}{4}\mathtt{ds}_{1}\] \[\mathtt{iW}_{e}^{\prime} =\mathtt{iW}_{e}-\tfrac{1}{4}\mathtt{ds}_{1}\] Figure 6: Before and after refinement of a segment \[\mathtt{iW}_{n}=\tfrac{1}{2}\mathtt{ds}_{1}\] Initially, we have: \[\Gamma=\mathtt{iW}_{s}\gamma_{s}+\mathtt{iW}_{e}\gamma_{e} \tag{133}\] and after refinement we have: \[\Gamma^{\prime}=\mathtt{iW}_{s}^{\prime}\gamma_{s}+\mathtt{iW}_{e}^{\prime} \gamma_{e}+\mathtt{iW}_{n}\gamma_{n}= \tag{134}\] \[=(\mathtt{iW}_{s}-0.25\mathtt{ds}_{1})\gamma_{s}+(\mathtt{iW}_{e}-\tfrac{1}{ 4}\mathtt{ds}_{1})\gamma_{e}+\tfrac{1}{2}\mathtt{ds}_{1}(\gamma_{s}+\gamma_{e})\] \[=\mathtt{iW}_{s}\gamma_{s}+\mathtt{iW}_{e}\gamma_{e}\] The total generation of circulation is also clearly preserved. The refinement due to high curvature is left for future development. 
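As an illustration of the splitting step described above, the sketch below refines a single over-stretched segment in a minimal node/segment representation. The `Node` and `Segment` containers and the helper name are hypothetical stand-ins for the data layout of section 3.1, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    r: tuple               # (rho, z) position
    gamma: float           # circulation density
    iW: float = 0.0        # trapezoidal integration weight
    seg: list = field(default_factory=list)   # indices of attached segments

@dataclass
class Segment:
    s: int                 # index of the starting node
    e: int                 # index of the ending node
    ds: float              # parameter length
    bdry: bool = True      # True if the segment is part of the buoyant interface

def refine(nodes, segments, j, ds0, kappa_E=1.25):
    """Split segment j in half if it exceeds the maximal elongation, eq. (130)."""
    old = segments[j]
    ns, ne = nodes[old.s], nodes[old.e]
    length2 = (ns.r[0] - ne.r[0]) ** 2 + (ns.r[1] - ne.r[1]) ** 2
    if length2 <= (ds0 * kappa_E) ** 2:
        return
    # new midpoint node, eq. (131)
    n = len(nodes)
    mid = Node(r=((ns.r[0] + ne.r[0]) / 2, (ns.r[1] + ne.r[1]) / 2),
               gamma=(ns.gamma + ne.gamma) / 2)
    nodes.append(mid)
    # replace segment 1 by two halves, eq. (132); bdry is inherited unchanged
    k = len(segments)
    segments[j] = Segment(old.s, n, old.ds / 2, old.bdry)
    segments.append(Segment(n, old.e, old.ds / 2, old.bdry))
    # adjacency: the first half keeps index j, the ending node now points at the new half
    ne.seg = [k if x == j else x for x in ne.seg]
    mid.seg = [j, k]
    # trapezoidal weights, eq. (127): each endpoint loses ds1/2 and gains ds1/4
    ns.iW -= old.ds / 4
    ne.iW -= old.ds / 4
    mid.iW = old.ds / 2
```

As in (133)-(134), the weight updates keep \(\mathtt{iW}_{s}\gamma_{s}+\mathtt{iW}_{e}\gamma_{e}+\mathtt{iW}_{n}\gamma_{n}\) equal to the circulation carried before the split.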
A possible drawback of such curvature-based refinement would be an unphysically increased stability of low-curvature regions, caused by the coarse discretization there. Moreover, merging segments of significantly different lengths would also be more difficult.

### Surgery

Most of the sophistication of the algorithm comes from the need to merge nearby nodes. The idea was inspired by the procedure used with the contour dynamics method [4]. Although it might sound simple, the trade-off between reducing the complexity of the sheet and not degenerating it too much is difficult to balance. We would like to keep the number of nodes in regions of contraction, because that allows resolving a possible development of instability; such regions are rather rare anyway.

The main purpose of the surgery is to simplify laminate structures. By laminate structures we mean regions of many relatively straight, parallel pieces of the vortex sheet packed tightly together, as presented in figure (7). They are quite common in the analyzed case, originating in the stretching and folding of Kelvin-Helmholtz vortices. The distance between two pieces belonging to such a laminate structure tends to decrease steadily, so they can be "glued together" once they are much closer to each other than ds0 (the discretization scale). This not only decreases the computational complexity by removing nodes, but also avoids problems when pieces are packed so tightly that they can cross each other due to numerical inaccuracies.

Figure 7: An example of a laminate structure in a series of subfigures of increasing zoom. The dashed square represents a region, enlarged in the next subfigure. In (c) and (d) individual nodes are depicted.

The main problem with the process is the change in the sheet topology: what is inside and what is outside the surface is no longer clearly defined. We solve this difficulty by marking the segments that constitute the boundary of the buoyant region with the logical variable bdry. Its value is true when the buoyant fluid lies on exactly one side of the segment. If it lies on both sides or on neither, the value is false and the segment is just a piece of vortex sheet that does not constitute the interface. This means that it does not generate vorticity (\(\nabla\times\boldsymbol{b}=\boldsymbol{0}\)). All segments are, of course, initialized with bdry=true.

#### 3.4.1 Case a) merging two interfaces

Let us consider a fragment of the vortex sheet where two segments are close enough to be merged. We will denote them by indices \(1\) and \(2\). Their starting and ending nodes are denoted by \(s\) and \(e\) with the proper subscripts, as presented in figure (8). Let the curve (the meridional section of the sheet) be positively oriented, so the buoyant region is always on the left side of each segment. We demand that the procedure affect neither the total circulation nor the total generation of circulation. The latter depends only on the coordinates of the nodes of the segments. Because the segments are assumed to have the buoyant region always on their left side, if two of them are close to each other, they must have (approximately) opposite directions. This means that: \[|z_{s1}-z_{e2}|<\varepsilon \tag{135}\] \[|z_{e1}-z_{s2}|<\varepsilon\] where \(\varepsilon\) is some small value, determined by the merging criterion. Therefore, the total generation of circulation by these two segments, using eq. (56), is: \[\frac{d\Gamma_{1}}{dt}+\frac{d\Gamma_{2}}{dt}=b(z_{e1}-z_{s1})+b(z_{e2}-z_{s2})<2b\varepsilon \tag{136}\] which is assumed to be negligible.
Segments 1 and 2, both with bdry=true will be replaced by segment 3 with bdry=false. We will call it the external vortex sheet. The choice of its direction does not matter. We decide to locate the resulting new nodes at averages weighted by the circulation modulus i.e: \[\mathbf{r}_{s3}=\frac{\mathbf{r}_{s1}\mathtt{iW}_{s1}|\gamma_{s1}|+\mathbf{r}_{e2}\mathtt{ iW}_{e2}|\gamma_{e2}|}{\mathtt{iW}_{s1}|\gamma_{s1}|+\mathtt{iW}_{e2}|\gamma_{e2}|} \tag{137}\] \[\mathbf{r}_{e3}=\frac{\mathbf{r}_{e1}\mathtt{iW}_{e1}|\gamma_{e1}|+\mathbf{r}_{s2}\mathtt{ iW}_{s2}|\gamma_{s2}|}{\mathtt{iW}_{e1}|\gamma_{e1}|+\mathtt{iW}_{s2}|\gamma_{s2}|}\] In the end, we need to update the sheet structure. Segment 3 might be just modified segment 1, while 2 is removed. List of segments linked to the new node will be a union of segments linked to the old ones, without removed segments and with the newly created one. \[\mathtt{seg[]}_{s3}=\mathtt{seg[]}_{s1}\cup\mathtt{seg[]}_{e2}\cup\{3\}\backslash \{1,2\} \tag{138}\] Figure 8: Before and after merging two interface segments. The solid line depicts segments with bdry=true, while the dashed line ones with bdry=false \[\texttt{seg[]}_{e3}=\texttt{seg[]}_{e1}\cup\texttt{seg[]}_{e2}\cup\{3\}\backslash\{1,2\}\] There is also an analogical case that results in an internal vortex sheet - surrounded from both sides with a buoyant region. It is qualitatively the same, just colors in fig. (8) are swapped. It is also good to notice that the resulting closed interiors preserve their orientation in both cases. This means that all the positive-oriented cycles in the graph enclose the buoyant fluid, while the negative-oriented enclose non-buoyant one. #### 3.4.2 Case b) merging interface with an external vortex sheet In the previous case, we showed, that merging might result in a segment that does not generate circulation. Now, we will analyze how such a segment might be merged with a generating one: The external vortex sheet is just incorporated into the interior, which keeps its orientation. Because circulation generation depends on the coordinates, keeping the values from the interface seems more reasonable, than using (137). There is also an analogical case with the internal vortex sheet. In all cases, we would like to preserve the total circulation and the total parameter length (\(s\)), which is important for computing the integrals (section 2.3). Due to the latter: \[\texttt{ds}_{3}=\texttt{ds}_{1}+\texttt{ds}_{2} \tag{139}\] then, the values of \(\texttt{iW}_{s3}\) and \(\texttt{iW}_{e3}\) are computed by the formula (127). Circulation conservation requires: \[\gamma_{s3} =\frac{\texttt{iW}_{s1}\gamma_{s1}+\texttt{iW}_{e2}\gamma_{e2}}{ \texttt{iW}_{s3}} \tag{140}\] \[\gamma_{e3} =\frac{\texttt{iW}_{e1}\gamma_{e1}+\texttt{iW}_{s2}\gamma_{s2}}{ \texttt{iW}_{e3}}\] Figure 9: Before and after merging and interface with an external vortex sheet. The solid line depicts segments with bdry=true, while the dashed line ones with bdry=false #### 3.4.3 Case c) merging two external vortex sheets Merging of two external (or two internal) vortex sheets results in a sheet of the same kind as the former two. The coordinates of the nodes are determined according to (137). Because the direction of external (internal) segments is arbitrary, it is not guaranteed that the starting node of one will be merged with the ending node of the other. It is necessary to check for merging also after flipping one of the segments. #### 3.4.4 Merging criterion As can be seen in fig. 
(7), especially (d), nodes of the laminate structure could be shifted along its direction. Therefore their distance might be much bigger (reaching ds0/2) than the actual distance between the two layers of the laminate. For this reason, although straightforward, this might not be the best criterion for merging. We want to keep the process as non-intrusive as possible and merging layers that are ds0/2 away, might be too degenerative. This problem will be even amplified if one decides to adjust the discretization to the local curvature, which in regions of interest is generally low. We propose the criterion based on three indicators: * distance between the nodes that could possibly be merged \[(\boldsymbol{r}_{e1}-\boldsymbol{r}_{s2})^{2} <\texttt{ds0}^{2}\kappa_{M}{}^{2}\] (141) \[(\boldsymbol{r}_{s1}-\boldsymbol{r}_{e2})^{2} <\texttt{ds0}^{2}\kappa_{M}{}^{2}\] which should be lower than some percentage of the discretization scale, given by \(\kappa_{M}\). To deal with the problem of shifted nodes it can be set to around 0.5. * "normal distance" between the segments. We demand all the distances between chosen point and an opposite segment to be lower than some percentage of ds0, given by \(\kappa_{MN}\) (\(<\kappa_{M}\)). For convenience, let us denote vectors representing segments by \(\boldsymbol{a}\) and vectors connecting these segments by \(\boldsymbol{c}\): \[\boldsymbol{a}_{1} =\boldsymbol{r}_{e1}-\boldsymbol{r}_{s1}\] \[\boldsymbol{a}_{2} =\boldsymbol{r}_{e2}-\boldsymbol{r}_{s2}\] \[\boldsymbol{c}_{1} =\boldsymbol{r}_{s2}-\boldsymbol{r}_{e1}\] \[\boldsymbol{c}_{2} =\boldsymbol{r}_{s1}-\boldsymbol{r}_{e2}\] then the criterion is: \[\left|\boldsymbol{a}_{1}\times\boldsymbol{c}_{1}\right|/\left| \boldsymbol{a}_{1}\right| <\texttt{ds0}\,\kappa_{MN}\] \[\left|\boldsymbol{c}_{1}\times\boldsymbol{a}_{2}\right|/\left| \boldsymbol{a}_{2}\right| <\texttt{ds0}\,\kappa_{MN}\] (142) \[\left|\boldsymbol{a}_{2}\times\boldsymbol{c}_{2}\right|/\left| \boldsymbol{a}_{2}\right| <\texttt{ds0}\,\kappa_{MN}\] \[\left|\boldsymbol{c}_{2}\times\boldsymbol{a}_{1}\right|/\left| \boldsymbol{a}_{1}\right| <\texttt{ds0}\,\kappa_{MN}\] * relative angle (given by its cosine) \[\frac{\mathbf{a}_{1}\cdot\mathbf{a}_{2}}{|\mathbf{a}_{1}|\ |\mathbf{a}_{2}|}<\kappa_{A}\] (143) where \(\kappa_{A}\) represents the maximal cosine of the angle and should be negative. This allows targeting approximately parallel layers. Moreover, prevents merging neighboring segments that are in a straight line if they get too short. We do not do this as described at the beginning of the section. This would also require a slightly different procedure to be done in a conservative manner. Nevertheless, sharp enough corners are merged with no problems. In addition to the above criterion, we need to add a few exceptions to reject some pathological cases. * we do not allow the converging or diverging segments to be merged. Such a situation could happen because segments with bdry=false have an arbitrary direction and pass the angle criterion. Therefore, if \(e_{1}=e_{2}\) or \(s_{1}=s_{2}\) merging is not applied. * one-node segments (\(s_{1}=e_{1}\)) are merged only with each other. Merging such a segment with a normal, two-node one is troublesome to be done in a conservative way. Such a segment might arise in a case presented in fig. (10) when segments 1 and 2 are merged. 
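The three indicators above combine into a single boolean test. A minimal sketch follows, working directly on the four endpoint positions in the meridional \((\rho,z)\) plane; the function name and argument packing are illustrative, the threshold defaults are the values quoted later in section 4.5.2, and the exception rules (shared nodes, one-node segments, degenerate zero-length segments) are left out.

```python
import numpy as np

def cross2(a, b):
    # scalar cross product in the meridional plane
    return a[0] * b[1] - a[1] * b[0]

def merge_candidates(rs1, re1, rs2, re2, ds0,
                     kappa_M=0.65, kappa_MN=0.2, kappa_A=-0.985):
    """Merging criterion of section 3.4.4 for segments (rs1 -> re1) and (rs2 -> re2).

    rs1, re1, rs2, re2 : 2-element (rho, z) arrays of the segment endpoints.
    Returns True only if all three indicators (141)-(143) are satisfied.
    A sketch, not the authors' implementation.
    """
    rs1, re1, rs2, re2 = map(np.asarray, (rs1, re1, rs2, re2))
    # (141) node-to-node distances
    if np.sum((re1 - rs2) ** 2) >= (ds0 * kappa_M) ** 2:
        return False
    if np.sum((rs1 - re2) ** 2) >= (ds0 * kappa_M) ** 2:
        return False
    a1, a2 = re1 - rs1, re2 - rs2
    c1, c2 = rs2 - re1, rs1 - re2
    la1, la2 = np.linalg.norm(a1), np.linalg.norm(a2)
    # (142) "normal distances": all point-to-line distances must be small
    for num, den in ((cross2(a1, c1), la1), (cross2(c1, a2), la2),
                     (cross2(a2, c2), la2), (cross2(c2, a1), la1)):
        if abs(num) / den >= ds0 * kappa_MN:
            return False
    # (143) the segments must be (nearly) anti-parallel
    return float(np.dot(a1, a2)) / (la1 * la2) < kappa_A
```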
### Promoting continuity of merging

If we let segments be tested for merging in a ("random") order given by their placement in memory, the resulting structures might leave a lot to be desired. For example, if we consider three parallel lines, it might happen that some segments of the middle one are merged with the left line and others with the right one. This gives rise to a zig-zag-like shape, which we find unfavorable. Another way it might arise is when two lines are balancing on the edge of the merging criterion: the first segment is merged, the next one is not, the third one is merged again, the fourth is not, and so on.

We would prefer to merge continuous, possibly long, parts of the sheet. To promote this, we test the segments in the order given by a depth-first search. We start with a given segment, look for its possible merges, then proceed to its neighbor and do the same. Moreover, we order the neighboring segments according to the angle they make with the currently visited segment, to promote traversal along straight lines.

Figure 10: Segment 3 becomes a one-node segment after merging segments 1 and 2

One can notice that once merging starts, we obtain a node that is connected to (at least) three segments. In addition, the angles between nearby segments have changed unfavorably for the continuation of the surgery. For that reason, we add one more rule to the criterion: if the tested segments have a common node, and if this node is connected to at least 3 segments, then we neglect the angle criterion (143) and the normal-distance criterion (142). This is not only a correction of the described disadvantage but also actively promotes continuation. If merging has just started, the next segment faces less restrictive criteria and is more likely to be merged as well. As a result, the surgery is continued slightly too far, i.e. it includes segments that would not be merged if the process had started at them. Consequently, when the process stops, the segments that remain to be tested are no longer balancing on the edge of the criterion but clearly do not satisfy it.

A last advantageous feature that could be added is setting a few thresholds for the criterion parameters: we would start with the most restrictive one and proceed to the more liberal ones. In the case of several merging possibilities, this approach would favor the best fit rather than the first one found. Nevertheless, we did not use this.

## 4 Optimization

### Construction of a tree

Next to the nodes and segments, we add a third element to the vortex sheet structure - the quad-tree. It consists of:

* nodeMap - an array that redefines the nodes' indices, used for sorting
* an array of cells
* bottoms - a list of the cells that are at the bottom of the hierarchy, i.e. are childless

Each cell consists of:

* the first and last index in nodeMap of the nodes that belong to the cell
* the radial and vertical bounds of the cell
* 4 indices of the child cells of that cell

and some other variables of secondary importance for the general algorithm. The hierarchical structure of cells is constructed according to the flow chart in fig. (11). We choose the desired maximal bottom-cell size \(h\) (understood as its diagonal). Starting with a single mother cell that includes the whole sheet, we search for the extremal coordinates of the nodes.
Then we use them to adjust the boundaries (\(\rho_{0}\), \(\rho_{1}\), \(z_{0}\), \(z_{1}\)) of the cell, and check the criterion: \[(\rho_{1}-\rho_{0})^{2}+(z_{1}-z_{0})^{2}<h^{2} \tag{144}\] If it is not met, the cell is split into four child cells coincident with the quadrants. Nodes are distributed among the children by properly sorting the piece of nodeMap that belongs to the initial cell. As a result, nodes inherited by a child are in a continuous interval of indices. Then coordinates of all four children are cropped and the process continues till the creation of small enough cells. Those of them, that are not empty, will be called bottom cells and their indices are stored in bottoms. An exemplary tree is presented in fig. (12). Figure 11: Construction of a quadtree ### Fast velocity induction This section presents an application of the method described in [5] for two dimensions, to the three-dimensional, axisymmetric case. The main idea comes from the N-body problem in the context of celestial mechanics. Interactions of far-away clusters of bodies are simplified to the interaction between their barycenters, which is then "distributed" over particular bodies. Clustering is done with the help of a hierarchical structure like the one described in the previous section. Although we are dealing with a continuous problem, due to discretization it becomes an N-body problem as well. In the original context, mass is always positive, therefore barycenters lie within the clusters. It is not the case in the N-vortex problem, where its analog - circulation, might also be negative, making the location of barycenters unbounded. The solution is to split the vortices into two groups - these with positive circulation and the rest with negative. Then barycenters are determined separately for each group. The method is summarized in three steps: 1. build a tree 2. compute circulations and barycenters Figure 12: Quad tree constructed for the system of 32 nodes with \(h=0.4\) at initial condition 3. induce velocities A more detailed flowchart for the last one is presented in fig. (13). #### 4.2.1 Approximation of the induced field Although it is an abuse of nomenclature, let us refer to the following as the positive and negative parts of the circulation in the axisymmetric case: \[\Gamma_{+}=\int_{C_{+}}\gamma\rho\,ds \tag{145a}\] \[\Gamma_{-}=\int_{C_{-}}\gamma\rho\,ds \tag{145b}\] Figure 13: Fast induction of velocity for a given probing point We associate it with a group of nodes gathered in a cell (\(C\)). Let us now define the corresponding, positive and negative barycenters of that cell: \[\mathbf{r_{+}}=\frac{1}{\Gamma_{+}}\int_{C_{+}}\mathbf{r}\gamma\rho\,ds \tag{146a}\] \[\mathbf{r_{-}}=\frac{1}{\Gamma_{+}}\int_{C_{-}}\mathbf{r}\gamma\rho\,ds \tag{146b}\] although for brevity, we will discuss only the positive one. The negative will be completely analogical. Finally, let us consider two nodes from far away cells. The first one we will call the probing node (located in \(\mathbf{r_{0}}\)) and the second the source node (located in \(\mathbf{r}\)). The source cell has barycenter in \(\mathbf{r_{+}}\) and let \(\mathbf{r_{c}}\) be some reference point in the probing cell - e.g. its centroid. 
Then we introduce the following decomposition: \[\mathbf{r_{0}}=\mathbf{r_{c}}+\mathbf{\delta_{0}} \tag{147a}\] \[\mathbf{r}=\mathbf{r_{+}}+\mathbf{\delta} \tag{147b}\] Vectors \(\mathbf{\delta}=\delta_{\rho}\hat{\rho}+z_{\rho}\hat{z}\) and \(\mathbf{\delta_{0}}=\delta_{0\rho}\hat{\rho_{0}}+z_{\rho}\hat{z}\) should not be confused with the smoothing parameter from section 2.6. We would like to substitute this into the Biot-Savart formula (58). For simplicity, let us, now, consider just the following expression: \[\frac{\mathbf{r_{0}}-\mathbf{r}}{(\mathbf{r_{0}}-\mathbf{r})^{3}}=\frac{\mathbf{r_{c}}+\mathbf{\delta _{0}}-(\mathbf{r_{+}}+\mathbf{\delta})}{[\mathbf{r_{c}}+\mathbf{\delta_{0}}-(\mathbf{r_{+}}+\mathbf{ \delta})]^{3/2}} \tag{148}\] To take an advantage of the assumption that the cells are far away, we will introduce also: \[\mathbf{\delta^{\prime}}=\mathbf{\delta_{0}}-\mathbf{\delta} \tag{149a}\] \[\mathbf{r^{\prime}_{+}}=\mathbf{r_{c}}-\mathbf{r_{+}} \tag{149b}\] The considered case, together with the nomenclature was presented in fig. (14). We now can write: \[\frac{\mathbf{r_{0}}-\mathbf{r}}{(\mathbf{r_{0}}-\mathbf{r})^{3}}=\frac{\mathbf{r^{\prime}_{+}}+ \mathbf{\delta^{\prime}}}{(r^{\prime 2}_{+}+\delta^{\prime 2}+2\mathbf{r^{\prime}_{+}} \cdot\mathbf{\delta^{\prime}})^{3/2}}=\frac{\mathbf{r^{\prime}_{+}}+\mathbf{\delta^{ \prime}}}{r^{\prime 3}_{+}}\bigg{[}1+\big{(}\frac{\delta^{\prime}}{r^{\prime}_{+}} \big{)}^{2}+2\frac{\mathbf{r^{\prime}_{+}}\cdot\mathbf{\delta^{\prime}}}{r^{\prime 2}_{+}} \bigg{]}^{-3/2} \tag{150}\] The fraction in parentheses is by assumption (clusters far away from each other) a small parameter that we will denote \(\varepsilon\), so: \[\varepsilon=\frac{\delta^{\prime}}{r^{\prime}_{+}} \tag{151}\] In practice, in numerical computation it will be estimated as: \[\varepsilon^{\prime}=\frac{h+h_{0}}{r^{\prime}_{+}} \tag{152}\] where \(h\) and \(h_{0}\) are diagonals of probing and source cells. It is also good to notice that \[\frac{\mathbf{r^{\prime}_{+}}\cdot\mathbf{\delta^{\prime}}}{r^{ \prime 2}_{+}}=\frac{\hat{r^{\prime}_{+}}r^{\prime}_{+}\cdot\hat{\delta^{\prime}} \delta^{\prime}}{r^{\prime 2}_{+}}=(r^{\hat{\prime}}_{+}\cdot\hat{\delta^{\prime}})\varepsilon \tag{153}\] Reminding the following Taylor expansion around 0: \[(1+x)^{-3/2}=1-\frac{3}{2}x+\frac{15}{8}x^{2}+... \tag{154}\] we will expand the square bracket in (150), neglect the terms of order higher than first in \(\varepsilon\), obtaining: \[\frac{\mathbf{r_{0}}-\mathbf{r}}{(\mathbf{r_{0}}- \mathbf{r})^{3}}=\left(\frac{\hat{r^{\prime}_{+}}+\hat{\delta^{\prime }}\varepsilon}{r^{\prime 2}_{+}}\right)\biggl{[}1-\frac{3}{2}\bigl{(} \varepsilon^{2}+2\hat{r^{\prime}_{+}}\cdot\hat{\delta^{\prime}}\varepsilon \bigr{)}+\frac{15}{8}\bigl{(}\varepsilon^{2}+2\hat{r^{\prime}_{+}}\cdot\hat{ \delta^{\prime}}\varepsilon\bigr{)}^{2}...\biggr{]} \tag{155}\] \[\approx\frac{\hat{r^{\prime}_{+}}}{r^{\prime 2}_{+}}\biggl{[}1-3\hat{r^{ \prime}_{+}}\cdot\hat{\delta^{\prime}}\varepsilon\biggr{]}+\frac{\hat{\delta^{ \prime}}\varepsilon}{r^{\prime 2}_{+}}\] Now, when we truncated the series, it would be convenient to substitute back for \(\delta^{\prime}\) from (149a) and (147): \(\hat{\delta^{\prime}}\varepsilon=\mathbf{\delta^{\prime}}/r^{\prime }_{+}=(\mathbf{\delta_{0}}+\mathbf{r_{+}}-\mathbf{r })/r^{\prime}_{+}\) in hope of extracting the integral quantities defined at the very beginning (145a), (146a). 
Also for that purpose, we will present it in full Biot-Savart context: \[\frac{1}{4\pi}\int_{0}^{2\pi}\int_{C_{+}}\hat{\phi}\times\frac{\mathbf{r_{0}}-\mathbf{r}}{(\mathbf{r_{0}}-\mathbf{r})^{3}}\ \gamma\ ds\rho d\phi\approx \tag{156}\] \[\approx\frac{1}{4\pi}\int_{0}^{2\pi}\hat{\phi}\times\Biggl{(}\int_{C_{+}} \frac{\mathbf{r^{\prime}_{+}}}{r^{\prime 3}_{+}}\biggl{[}1-3\frac{\mathbf{r^{\prime}_{+}}\cdot(\mathbf{\delta_{0}}+\mathbf{ r_{+}}-\mathbf{r})}{r^{\prime 2}_{+}}\biggr{]}\ \gamma\ ds+\int_{C_{+}} \frac{(\mathbf{\delta_{0}}+\mathbf{r_{+}}-\mathbf{r })}{r^{\prime 3}_{+}}\ \gamma\ ds\Biggr{)}\rho d\phi\] Noticing that \(\mathbf{r^{\prime}_{+}}\), \(\mathbf{r_{+}}\), \(\delta_{0}\) do not depend on \(s\) (at least for a given pair of clusters) and the only \(s\)-dependent quantities are \(r\) and \(\gamma\), we will rearrange the Figure 14: Two exemplary nodes belonging to far-away clusters, captured by the cells integration: \[=\frac{1}{4\pi}\int_{0}^{2\pi}\hat{\phi}\times\Biggl{(}\frac{\mathbf{r^{ \prime}_{+}}}{r_{+}^{\prime 3}}\int_{C_{+}}\ \gamma\rho\ ds-3\frac{\mathbf{r^{\prime}_{+}}}{r_{+}^{ \prime 5}}\biggl{[}\mathbf{r^{\prime}_{+}}\cdot(\mathbf{\delta_{0}}+ \mathbf{r_{+}})\int_{C_{+}}\gamma\rho\ ds-\mathbf{r^{\prime}_ {+}}\cdot\int_{C_{+}}\mathbf{r}\ g\rho\ ds\biggr{]}+\] \[+\frac{\mathbf{\delta_{0}}+\mathbf{r_{+}}}{r_{+}^{\prime 3}} \int_{C_{+}}\ \gamma\rho\ ds-\frac{1}{r_{+}^{\prime 3}}\int_{C_{+}}\mathbf{r}\ \gamma\rho\ ds\Biggr{)}d\phi\] Substituting integral quantities from (145a) and (146a): \[=\frac{1}{4\pi}\int_{0}^{2\pi}\int_{C_{+}}\hat{\phi}\times\Biggl{(}\frac{\mbox {\boldmath$r^{\prime}_{+}$}}{r_{+}^{\prime 3}}\Gamma_{+}-3\frac{\mathbf{r^{ \prime}_{+}}}{r_{+}^{\prime 5}}\biggl{[}\mathbf{r^{\prime}_{+}}\cdot( \mathbf{\delta_{0}}+\mathbf{r_{+}})\Gamma_{+}-\mathbf{r^{\prime}_{+}}\cdot\Gamma_{+}\mathbf{r_{+}}\biggr{]}+ \frac{\mathbf{\delta_{0}}+\mathbf{r_{+}}}{r_{+}^{\prime 3}} \Gamma_{+}-\frac{\Gamma_{+}\mathbf{r_{+}}}{r_{+}^{\prime 3}}\Biggr{)}d\phi \tag{157}\] and canceling out terms, we finally get the formula for an approximate velocity in point \(\mathbf{r_{0}}\), induced by the nodes of positive circulation from cell \(C\): \[\mathbf{u}(\mathbf{r_{0}})_{C+}=\frac{1}{4\pi}\int_{0}^{2 \pi}\Gamma_{+}\hat{\phi}\times\Biggl{(}\underbrace{\frac{\mathbf{r^{ \prime}_{+}}}{r_{+}^{\prime 3}}}_{A}-3\underbrace{\frac{\mathbf{r^{ \prime}_{+}}}{r_{+}^{\prime 5}}\biggl{[}\mathbf{r^{\prime}_{+}} \cdot\mathbf{\delta_{0}}\biggr{]}}_{B}+\underbrace{\frac{\mathbf{\delta_{0}}}{r_{+}^{\prime 3}}}_{C}\Biggr{)}d\phi+{\cal O}(\varepsilon^{2}) \tag{158}\] The result is identical to one obtained in [5] with exception of the coefficient in front of term B. That difference comes from three-dimensional space instead of two and arises is Taylor expansion of \((1+x)^{-3/2}\) instead of \((1+x)^{-1}\). 
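Before the azimuthal integration is carried out, the only per-cell quantities the approximation needs are the split circulations (145), the corresponding barycenters (146) and the opening test (152). A small sketch is given below; the names are illustrative, and the negative barycenter is normalized by \(\Gamma_{-}\) here (presumably the intended normalization in (146b)).

```python
import numpy as np

def cell_moments(rho, z, gamma, w):
    """Positive/negative circulations and barycenters of the nodes in one cell.

    rho, z, gamma, w : arrays of node coordinates, circulation density and
                       trapezoidal weights for the nodes belonging to the cell.
    Discrete form of eqs. (145)-(146); a sketch, not the authors' code.
    """
    c = gamma * rho * w                      # per-node contribution gamma * rho * ds
    pos, neg = c > 0.0, c < 0.0
    G_plus, G_minus = c[pos].sum(), c[neg].sum()
    r_plus = np.zeros(2)
    r_minus = np.zeros(2)
    if G_plus != 0.0:
        r_plus = np.array([np.sum(rho[pos] * c[pos]), np.sum(z[pos] * c[pos])]) / G_plus
    if G_minus != 0.0:
        r_minus = np.array([np.sum(rho[neg] * c[neg]), np.sum(z[neg] * c[neg])]) / G_minus
    return G_plus, r_plus, G_minus, r_minus

def far_enough(center_probe, barycenter_src, h_probe, h_src, eps_max=0.05):
    """Opening criterion eq. (152): accept the cluster approximation if eps' < eps_max."""
    dist = np.linalg.norm(np.asarray(center_probe) - np.asarray(barycenter_src))
    return dist > 0.0 and (h_probe + h_src) / dist < eps_max
```

The default `eps_max=0.05` corresponds to the roughly 1% relative error found in the test cases of section 4.2.4.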
#### 4.2.2 Integration over the azimuth It can be noticed that the term A from equation (158) is analogical to (58), so we will write down the corresponding quantities (denoting them by \(\rightarrow\)): \[\rho_{c}\rightarrow\rho_{0}\] \[z_{c}\to z_{0} \tag{159}\] \[z_{+}\to z\] \[\Gamma_{+}\rightarrow\gamma\rho\ ds\] and proceed in exactly the same way as before (including smoothing), obtaining: \[u^{\delta}_{\rho_{0}A}=\frac{1}{4\pi}\frac{\Gamma_{+}(z_{c}-z_{+})}{R_{2A}^{ \delta\ \ 3}}\biggl{(}I_{1A}^{\delta}-I_{2A}^{\delta}\biggr{)} \tag{160}\] \[u^{\delta}_{zA}=\frac{1}{4\pi}\frac{\Gamma_{+}}{R_{2A}^{\delta\ \ 3}}\biggl{(}(\rho_{+}-\rho_{c})I_{1A}^{\delta}+\rho_{c}I_{2A}^{\delta}\biggr{)} \tag{161}\] where: \[{k_{A}^{\delta}}^{2}=1-\left(\frac{R_{1A}^{\delta}}{R_{2A}^{\delta}} \right)^{2} \tag{162}\] \[{R_{1A}^{\delta}}^{2}=(\rho_{c}-\rho_{+})^{2}+(z_{c}-z_{+})^{2}+ \delta^{2}\] (163) \[{R_{2A}^{\delta}}^{2}=(\rho_{c}+\rho_{+})^{2}+(z_{c}-z_{+})^{2}+ \delta^{2} \tag{164}\] Resulting integrals have exactly the same form as (122) and (123). The only difference is that \(k_{\delta}\) is replaced with \(k_{A}^{\delta}\) and \(R_{2}^{\delta}\) with \({R_{2A}^{\delta}}\). We will denote them as \(I_{1A}^{\delta}\) and \(I_{2A}^{\delta}\). It is good to notice, that (159) represent also behavior in a limit as dimensions of both cells (\(\boldsymbol{\delta}\) and \(\boldsymbol{\delta_{0}}\)) approach zero. Thus (161) and (160) approach the integrands of (105) and (104). Let us now proceed to the term C: \[C=\frac{1}{4\pi}\int_{0}^{2\pi}\frac{\Gamma_{+}\hat{\phi}\times\boldsymbol{ \delta_{0}}}{r_{+}^{\delta}}d\phi \tag{166}\] its denominator is exactly the same as for term A. Let \(\boldsymbol{\delta_{0}}=\delta_{0\rho}\hat{\rho_{0}}+\delta_{0z}\hat{z}\), and consider the numerator: \[\hat{\phi}\times(\delta_{0\rho}\hat{\rho_{0}}+\delta_{0z}\hat{z})=-\delta_{0 \rho}\cos(\Delta\phi)\hat{z}+\delta_{0z}\hat{\rho} \tag{167}\] Projecting it with dot product onto directions \(z\) and \(\rho_{0}\) gives: \[\hat{\phi}\times(\delta_{0\rho}\hat{\rho_{0}}+\delta_{0z}\hat{z})=-\delta_{0 \rho}\cos(\Delta\phi)\hat{z}+\delta_{0z}\cos(\Delta\phi)\hat{\rho_{0}} \tag{168}\] We switch to \(\theta\) via (99), remembering that \(d\phi=2d\theta\) and to half the integration upper limit. We obtain: \[u_{\rho 0C}^{\delta}=\frac{1}{4\pi}\int_{0}^{\pi}\frac{\Gamma_{+} \delta_{0z}(2-4\cos^{2}\theta)}{{R_{2A}^{\delta}}^{2}(1-{k_{A}^{\delta}}^{2} \sin^{2}\theta)}\,d\theta=\frac{1}{4\pi}\frac{\Gamma_{+}\delta_{0z}}{{R_{2A}^{ \delta}}^{3}}\bigg{(}I_{1A}^{\delta}-I_{2A}^{\delta}\bigg{)} \tag{169}\] \[u_{zC}^{\delta}=\frac{1}{4\pi}\int_{0}^{\pi}\frac{\Gamma_{+} \delta_{0z}(2-4\cos^{2}\theta)}{{R_{2A}^{\delta}}^{2}(1-{k_{A}^{\delta}}^{2} \sin^{2}\theta)}\,d\theta=\frac{-1}{4\pi}\frac{\Gamma_{+}\delta_{0\rho}}{{R_{2 A}^{\delta}}^{3}}\bigg{(}I_{1A}^{\delta}-I_{2A}^{\delta}\bigg{)} \tag{170}\] where quantities with subscript \(A\) are defined near (162). \[B=\frac{3}{4\pi}\int_{0}^{2\pi}\Gamma_{+}\hat{\phi}\times\frac{\boldsymbol{r_ {+}^{\prime}}(\boldsymbol{r_{+}^{\prime}}\cdot\boldsymbol{\delta_{0}})}{r_{+ }^{\prime 5}}\,d\phi \tag{171}\] The denominator can be transformed using the formula (98). In the first part of the numerator - the cross product, we can use (100). The remainder is the dot product in the numerator. It is convenient to decompose \(\boldsymbol{r_{+}^{\prime}}\) with (149b). 
\[\boldsymbol{r_{+}^{\prime}}\cdot\boldsymbol{\delta_{0}}=(\boldsymbol{r_{c}}- \boldsymbol{r_{+}})\cdot\boldsymbol{\delta_{0}}=(\rho_{c}\hat{\rho_{0}}+z_{c} \hat{z}-\rho_{+}\hat{\rho}-z_{+}\hat{z})\cdot(\delta_{0\rho}\hat{\rho_{0}}+ \delta_{0z}\hat{z}) \tag{172}\] \[=\rho_{c}\,\delta_{0\rho}-\rho_{+}\delta_{0\rho}\cos(\Delta\phi)+(z_{c}-z_{+}) \delta_{0z}\] then we can substitute \(\theta\) using (99) and overall, we obtain: \[v_{B\rho 0}=\frac{3}{4\pi}\int_{0}^{\pi}\Gamma_{+}\frac{\rho_{c}\,\delta_{0\rho} -\rho_{+}\delta_{0\rho}(1-2\cos^{2}\theta)+(z_{c}-z_{+})\delta_{0z}}{R_{2A}^{ \delta\ \ 5}(1-{k_{A}^{\delta\ \ 2}}^{2}\sin^{2}\theta)^{5/2}}(z_{c}-z_{+})(2-4\cos^{2} \theta)\,d\theta \tag{173}\] \[=\frac{6}{4\pi}\int_{0}^{\pi}\Gamma_{+}\frac{\delta_{0\rho}(\rho_{c}-\rho_{+}) +(z_{c}-z_{+})\delta_{0z}}{R_{2A}^{\delta\ \ 5}(1-{k_{A}^{\delta\ \ 2}}^{2}\sin^{2} \theta)^{5/2}}(z_{c}-z_{+})\,d\theta\] \[+\frac{12}{4\pi}\int_{0}^{\pi}\Gamma_{+}\frac{-(\rho_{c}\,\delta_{0\rho}-\rho _{+}\delta_{0\rho}+(z_{c}-z_{+})\delta_{0z})+\rho_{+}\delta_{0\rho}}{R_{2A}^{ \delta\ \ 5}(1-{k_{A}^{\delta\ 2}}^{2}\sin^{2}\theta)^{5/2}}(z_{c}-z_{+})\cos^{2} \theta\,d\theta\] \[+\frac{24}{4\pi}\int_{0}^{\pi}\Gamma_{+}\frac{-\rho_{+}\delta_{0\rho}}{R_{2A}^ {\delta\ \ 5}(1-{k_{A}^{\delta\ 2}}^{2}\sin^{2}\theta)^{5/2}}(z_{c}-z_{+})\cos^{4} \theta\,d\theta\] what finally leads to: \[v_{B\rho 0}^{\delta}=\frac{3(z_{c}-z_{+})\Gamma_{+}}{4\pi R_{2A}^{\delta\ \ 5}} \Biggl{[}\biggl{(}\delta_{0\rho}(\rho_{c}-\rho_{+})+(z_{c}-z_{+}) \delta_{0z}\biggr{)}(I_{1B}^{\delta}-I_{2B}^{\delta}) \tag{174}\] \[+\rho_{+}\delta_{0\rho}(I_{2B}^{\delta}-I_{3B}^{\delta})\Biggr{]}\] and analogically for \(z\) component: \[v_{Bz}^{\delta}=\frac{3}{4\pi}\int_{0}^{\pi}\Gamma_{+}\frac{\rho_{c}\,\delta_ {0\rho}-\rho_{+}\delta_{0\rho}(1-2\cos^{2}\theta)+(z_{c}-z_{+})\delta_{0z}}{R_ {2A}^{\delta\ \ 5}(1-{k_{A}^{\delta\ 2}}^{2}\sin^{2}\theta)^{2}}(2\rho_{+}- \rho_{c}(2-4\cos^{2}\theta))\,d\theta \tag{175}\] \[=\frac{3}{4\pi\,{R_{2A}^{\delta\ 5}}^{2}}\Gamma_{+}\Biggl{(}[(\rho_{c}-\rho_{+})^ {2}\delta_{0\rho}+(z_{c}-z_{+})\delta_{0z}\rho_{c}](I_{2B}^{\delta}-I_{1B}^{ \delta})\] \[+\rho_{+}[(z_{c}-z_{+})\delta_{0z}I_{1B}^{\delta}+\delta_{0\rho}\rho_{c}I_{3B }^{\delta}]\Biggr{)}\] where \[I_{1B}^{\delta}=\int_{0}^{\pi}\frac{2\,d\theta}{(1-{k_{A}^{\delta\ 2}}^{2}\sin^{2} \theta)^{5/2}}=\frac{4}{3}\Biggl{(}\frac{2E({k_{A}^{\delta\ 2}})-K({k_{A}^{\delta\ 2}})}{1-{k_{A}^{\delta\ 2}} }+\frac{2E({k_{A}^{\delta\ 2}})}{(1-{k_{A}^{\delta\ 2}})^{2}}\Biggr{)} \tag{176}\] \[I_{2B}^{\delta}=\int_{0}^{\pi}\frac{4\cos^{2}\theta}{(1-{k_{A}^{2}}\sin^{2} \theta)^{5/2}}\,d\theta=\frac{1}{3}\Bigl{(}I_{2A}+\frac{8E({k_{A}^{\delta\ 2}})}{1-{k_{A}^{\delta\ 2}}} \Bigr{)} \tag{177}\] \[I_{3B}^{\delta}=\int_{0}^{\pi}\frac{8\cos^{4}\theta}{(1-{k_{A}^{\delta\ 2}}^{2}\sin^{2} \theta)^{5/2}}d\theta=\frac{2}{3{k_{A}^{\delta\ 2}}}\Biggl{(}(2+{k_{A}^{\delta\ 2}})I_{2A}^{\delta}-8E({k_{A}^{\delta\ 2}}) \Biggr{)} \tag{178}\] and the total velocity induced by the cluster is: \[u_{\rho 0}^{\delta}=u_{\rho 0A}^{\delta}-u_{\rho 0B}^{\delta}+u_{\rho 0C}^{ \delta}+{\cal O}(\varepsilon^{2}) \tag{179}\] \[u_{z0}^{\delta}=u_{z0A}^{\delta}-v_{z0B}^{\delta}+v_{z0C}^{\delta}+{\cal O}( \varepsilon^{2}) \tag{180}\] #### 4.2.3 Final formulas One can notice that in equations derived in the previous subsection, only \(\delta_{0\rho}\) and \(\delta_{0z}\) depend on the specific probing point. Other quantities depend on the locations of the centers of interacting clusters exclusively. 
It means, that they can be computed once for a cluster-cluster interaction. Thus, we express the velocity induced by the nodes of positive circulation, from a given cluster as follows: \[u_{\rho 0}^{\delta} =\alpha_{\rho}\;\delta_{0\rho}+\beta_{\rho}\;\delta_{0z}+\gamma_{ \rho}+\mathcal{O}(\varepsilon^{2}) \tag{181}\] \[u_{z0}^{\delta} =\alpha_{z}\;\delta_{0\rho}+\beta_{z}\;\delta_{0z}+\gamma_{z}+ \mathcal{O}(\varepsilon^{2}) \tag{182}\] \[\alpha_{\rho}=\frac{3\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{5}}\Bigg{(}(\rho_{c}- \rho_{+})(I_{2B}^{\delta}-I_{1B}^{\delta})+\rho_{+}(I_{3B}^{\delta}-I_{2B}^{ \delta})\Bigg{)}(z_{c}-z_{+}) \tag{183}\] \[\beta_{\rho}=\frac{\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{3}}\Bigg{(}I_{1A}^{ \delta}-I_{2A}^{\delta}+\frac{3}{{R_{2A}^{\delta}}^{2}}(z_{c}-z_{+})^{2}(I_{2 B}^{\delta}-I_{1B}^{\delta})\Bigg{)} \tag{184}\] \[\gamma_{\rho}=\frac{\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{3}}(z_{c}-z_{+})(I_{1A }^{\delta}-I_{2A}^{\delta}) \tag{185}\] \[\alpha_{z}=\frac{\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{3}}\Bigg{(}(I_{2A}^{ \delta}-I_{1A}^{\delta})+\frac{3}{{R_{2A}^{\delta}}^{2}}\left[(\rho_{c}-\rho_{ +})^{2}(I_{1B}^{\delta}-I_{2B}^{\delta})-\rho_{+}\rho_{c}I_{3B}^{\delta}\right] \Bigg{)} \tag{186}\] \[\beta_{z}=\frac{3\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{5}}\Bigg{(}\rho_{c}(I_{1B }^{\delta}-I_{2B}^{\delta})-\rho_{+}I_{1B}^{\delta}\Bigg{)}(z_{c}-z_{+}) \tag{187}\] \[\gamma_{z}=\frac{\Gamma_{+}}{4\pi{R_{2A}^{\delta}}^{3}}\bigg{(}(\rho_{+}-\rho _{c})I_{1A}^{\delta}+\rho_{c}I_{2A}^{\delta}\bigg{)} \tag{188}\] The above equations work also for the negative part of circulation if we replace \(\Gamma_{+}\) with \(\Gamma_{-}\) and \(\boldsymbol{r_{+}}\) with \(\boldsymbol{r_{-}}\). The total induced velocity is obtained by summing both contributions from all the far enough cells. Contributions from nearby cells are determined naively, according to (120) and (121). #### 4.2.4 Test cases The above formulas are rather complicated so to be sure of their correctness a few simple test cases were designed. We initialize a straight vortex sheet, with 1024 nodes, stretching radially between 10 and \(10+\pi\) (a) or 0.5 and 1.5 (b). We let it have \(z=0\) and \(\gamma=0\) everywhere. We build a tree such that each bottom cell has one node inside. We pick a non-bottom cell from the left edge, including 8 nodes. We assign \(\gamma=2\) and \(\gamma=1\) to one at the left cell boundary and one at the right respectively. Additionally, we shift the latter in \(z\) to make the cell a square (practice shows that otherwise, we get a special case with \(\mathcal{O}(\varepsilon^{3})\)). The exact values are of course arbitrary. We choose (a) to show the general convergence and (b) to have rather typical values. We induce velocity with both, naive and fast method (which we apply no matter of \(\varepsilon\)), using smoothing parameter \(\delta=0.0001\) (a) or \(\delta=0.03\) (b). Figure (15) presents the behavior of the error understood as the absolute value of the exact and the approximated velocity. The close proximity of the smoothed source seems to decrease the convergence rate to linear while the effects of strong curvature increase it to nearly \(\mathcal{O}(\varepsilon^{3})\). To also check terms \(\alpha\) and \(\beta\) (eq. 182-188), we modify the tree from former cases to have 8 nodes per bottom cell. The rest of the setup stays the same and the results are in the figure (16). Figure 15: Velocity (absolute) error of the fast method compared with respect to the naive one. Axes are logarithmic (base 10). 
A circle marks measurement from point at a smoothing distance from the closest source. Therefore, nodes to the right lay in a small viscous region, and to the left, cover the rest of the sheet. On the horizontal axis, we used \(\varepsilon\) computed using the left node. The value of \(\varepsilon\) varies non-monotonically, causing the stripes visible in (a). We depicted probes from a chosen cell with blue circles (that overlay in pairs). We see that although per each value of \(\varepsilon\), we get a range of possible errors, the convergence is proper. Case (b) we will use to determine the acceptable value of \(\varepsilon\). We plotted the relative error against the estimated value - \(\varepsilon^{\prime}\) (constant in a cell), according to formula 152. The convergence is weaker than linear, due to the influence of the viscous core, which additionally decreases an error for relatively high values of \(\varepsilon^{\prime}\). For low values of smoothing parameter \(\delta\), the error is higher, approaching the clear linear convergence. We can notice, that for relative error to be lower than \(1\%\), \(\varepsilon^{\prime}<10^{-1.3}\approx 0.05\). For lower values of \(\delta\) it rises up to \(\varepsilon^{\prime}<10^{-1.5}\approx 0.03\) Finally, we will test the efficiency of the method, by measuring the time of computing the induced velocity. For that purpose, we used an initial condition, with \(\gamma=\cos(s)\) and \(128\) nodes. We computed the velocity five times per each number of nodes (without taking the actual step), took the average and then refined the discretization. We used \(\delta=0.03\) and \(\varepsilon^{\prime}=0.05\), even in the fast method, some of the interactions were handled naively. For processing we used one thread, not to disrupt the measurements with parallelization performance. We compare the results for the fast method with the naive approach in figure (17) Figure 16: The algorithm seems to reduce \(N^{2}\) computational complexity to \(N\log N\). ### Fast surgery The constructed tree can also easily be used to highly optimize the process of surgery, reducing its complexity from \(N^{2}\) to \(N\log N\). The idea is similar to the case of velocity induction, although we determine which cells are far away with a different criterion. We start by creating a list of centers of the segments. Then, we treat them as nodes and build a tree. Each bottom cell, we test with other cells (starting with the mother cell) for a long enough relative distance. We define it as the shortest line connecting the boundaries of the cells. In other words, for cells \(A\) and \(B\), having boundaries at \(z_{0}^{A}\), \(z_{1}^{A}\) and \(z_{0}^{B}\), \(z_{1}^{B}\) we can define a one-dimensional distance: \[d_{z}(A,B)=\left\{\begin{array}{cl}0&\mbox{if $(z_{0}^{A}-z_{1}^{B})<0$ and $(z_{1}^{A}-z_{0}^{B})<0$}\\ \\ \min(|z_{0}^{A}-z_{1}^{B}|,|z_{1}^{A}-z_{0}^{B}|)&\mbox{otherwise}\end{array}\right. \tag{189}\] where the first case captures an overlap. We define analogical quantity for \(\rho\) coordinate. The shortest line connecting cell boundaries is then: \[d(A,B)=\sqrt{d_{z}(A,B)^{2}+d_{\rho}(A,B)^{2}} \tag{190}\] We can state, using a rather conservative bound, that if for two cells \[d(A,B)>2\texttt{ds0} \tag{191}\] then surgery criterion (141) cannot be satisfied. Otherwise we check the same for children of the cell given. If there are no more children (both cells are bottom cells) then we naively check for surgery possibilities. 
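The cell-to-cell pruning above reduces to a pair of interval distances. Below is a minimal sketch, assuming each cell stores its bounds in a dictionary with keys `rho0`, `rho1`, `z0`, `z1` (an illustrative layout, not the one of section 4.1) and interpreting the first branch of (189) as interval overlap.

```python
def interval_distance(a0, a1, b0, b1):
    """One-dimensional distance between intervals [a0, a1] and [b0, b1], eq. (189)."""
    if a0 < b1 and b0 < a1:            # intervals overlap
        return 0.0
    return min(abs(a0 - b1), abs(a1 - b0))

def cells_too_far_for_surgery(cellA, cellB, ds0):
    """Conservative pruning bound, eqs. (190)-(191): if True, no pair of segments
    from the two cells can satisfy the merging criterion (141)."""
    d_rho = interval_distance(cellA["rho0"], cellA["rho1"], cellB["rho0"], cellB["rho1"])
    d_z = interval_distance(cellA["z0"], cellA["z1"], cellB["z0"], cellB["z1"])
    return (d_rho ** 2 + d_z ** 2) ** 0.5 > 2.0 * ds0
```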
Figure 17: Average processing time for velocity induction against the number of the nodes. Notice different scaling of the axes at (b) ### Notes on parallelization and scaling The code was parallelized with OpenMP. Among all the operations that have to be done every timestep, the velocity induction takes far the most time (difference of a few orders of magnitude). The loop over bottom cells was distributed among the threads so that each thread processes the interactions of its bottom cell with all other cells. Dynamic scheduling was used because the time required for a given bottom cell might differ a lot. Additionally, tree construction was parallelized in the aspect of distributing nodes among the cells. Computation of all the integrals was done in parallel as well. To test the scaling of the code we used one of the timesteps from \(b=1\), \(\delta=0.008\) case to initialize the sheet, with around 85 000 nodes. We made 6 complete timesteps with each number of threads \(n\), measured the time \(t(n)\), and presented the results in figure 18. Nearly all the simulations were performed on a 36-core node of Rysy cluster in the Interdisciplinary Centre for Mathematical and Computational Modelling UW. ### Numerical parameters #### 4.5.1 Discretization Parameter ds0 sets the accuracy of spatial discretization (including surgery), by defining the desired length of segments. We want it to be sufficiently small Figure 18: Speedup to solve all the scales that might arise in the simulation. Their size is dependent on the smoothing parameter \(\delta\). From experience, we find that \[\mathtt{ds0}=\frac{\delta}{10} \tag{192}\] is enough. There are two mechanisms that bind the length of the time step dt - movement of the nodes and generation of their circulation density \(\gamma\). We estimate the scale of induced velocity as: \(U=\Gamma/R\) where \(\Gamma\) is the total circulation at a given time. Then, to simulate structures of scale \(\delta\) moving with \(U\), we let: \[\mathtt{dt1}=2\frac{\delta}{\Gamma R} \tag{193}\] where for 4th order Runge-Kutta, 2 seems to be rather a conservative choice. At the same time, to properly solve the production of circulation (especially at the very beginning), we do not want it to increase by more than 10% between the time steps. This leads to: \[\mathtt{dt2}=0.1\frac{\Gamma}{(z_{2}-z_{1})b} \tag{194}\] where \(z_{2}\) and \(z_{1}\) are coordinates of the points at the \(z\) axis - the top and the bottom (see formula (67)). The time step size is then determined by: \[\mathtt{dt}=\min(\mathtt{dt1},\mathtt{dt2}) \tag{195}\] The initial condition for \(\gamma\) was set with the initial tendency rather than exact zero, i.e.: \[\gamma(s)\Big{|}_{t=0}=0.1\cos(s) \tag{196}\] #### 4.5.2 Refinement and surgery The exact values of the following parameters were worked out mostly by trial and error as a tradeoff between efficiency and accuracy. The maximal length of a segment before splitting (as multiple of ds0): \[\kappa_{E}=1.25 \tag{197}\] The maximal distance between the nodes for merging (as multiple of ds0): \[\kappa_{M}=0.65 \tag{198}\] The maximal "normal distance" between the segments for merging (as multiple of ds0): \[\kappa_{MN}=0.2 \tag{199}\] The maximal cosine of the angle between the segments for merging: \[\kappa_{A}=-0.985 \tag{200}\] what corresponds to the angle of 170 degrees. The maximal tree cell diameter: \[h=3\texttt{ds0} \tag{201}\] the algorithm seems to have the best efficiency when there are about 3-5 nodes per cell. 
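The parameter choices of this section can be collected in a single helper; the sketch below follows the stated conventions (nondimensional ring radius \(R=1\) and total circulation \(\Gamma>0\), which the initial condition (196) ensures), with illustrative names.

```python
def choose_step(delta, Gamma, z_top, z_bottom, b, R=1.0):
    """Discretization length and time step, eqs. (192)-(195).

    delta            : smoothing parameter
    Gamma            : current total circulation (assumed positive)
    z_top, z_bottom  : z coordinates of the sheet's intersections with the axis (z2, z1)
    b                : buoyancy parameter; b = 0 switches off circulation production
    A sketch of the selection rules, not the authors' implementation.
    """
    ds0 = delta / 10.0                                   # eq. (192)
    dt1 = 2.0 * delta / (Gamma * R)                      # eq. (193), resolves advection of scale delta
    # eq. (194): limit the relative growth of circulation to 10% per step
    dt2 = 0.1 * Gamma / ((z_top - z_bottom) * b) if b != 0 else float("inf")
    return ds0, min(dt1, dt2)                            # eq. (195)
```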
We judged the quality of the simulation by monitoring mass conservation, and the evolution of the moment of vorticity, according to formula (68). Of course, both were affected by surgery. ## 5 Results We performed a series of simulations with different values of \(\delta\). In each scenario, we investigated the case with buoyant vorticity generation, and as a reference - the case with \(b=0\). Cases vary in time length due to different behavior of the system and increasing computational complexity. ### The general evolution of the vortex ring Although the detailed behavior of the system is strongly \(\delta\)-dependent, as will be shown in further sections, the evolution of integral quantities seems to be more stable. For all investigated values of \(\delta\), the system is rising in a similar manner. This also applies to radial expansion and accumulation of total circulation. All tendencies were presented in figures (19) and (20). Relative differences of \(\langle z\rangle\) (volume-averaged coordinate) from the most accurate case (\(\delta=0.008\)), at its last step, were: 5.2%, 2.5%, 1.6%, for decreasing \(\delta\) respectively. In case of \(\langle\rho\rangle\): 0.6%, 1.1%, 0.8% and for \(\Gamma\): 4.9%, 1.9%, 1.0%, both in the same order. Although \(\Gamma\) might not seem to be directly \(\delta\)-dependend, it depends on the coordinates at the \(z\)-axis. The bottom coordinate seems to be highly sensitive to the amount of smoothing. ### The effect of buoyancy on the qualitative behavior In the figure (21) we presented a comparison between the evolution of a buoyancy-driven vortex ring (with \(\gamma\) changing according to eq. (56)) and a vortex ring with fixed strength. The latter might correspond to a ring generated by shearing stresses in a nozzle (e.g. [13]). It was initialized with \(\gamma(s)=2.5\cos(s)\) which gives a total circulation approximately equal to the limiting value from the buoyancy-driven case (see fig. 20). At the left-sided labels, we denoted the time of the fixed-strength ring, which is exactly one unit lower than the time of the buoyancy-driven one (right). In this initial unit, the former was nearly in place accumulating vorticity. Figure 19: Rising (solid line) and radial expansion (dashed) of the ring. Characterized by volume-mean coordinates and velocities of the region enclosed. Figure 20: Evolution of total circulation, which reaches 5.07 after 10 time units. The system is generating circulation till the limiting value of around 5.07. Figure 21: Development of the vortex ring. Left from the axis - constant vorticity case, right - buoyancy driven. The same convention for time labels. Subfigures represent \(\delta=0.1\), \(0.05\), and \(0.03\) respectively from the top. We notice that while the fixed-vorticity ring is stable, buoyancy introduces an instability mechanism that disintegrates the ring. The phenomenon is the clearest in case of \(\delta=0.1\), where Kelvin-Helmholtz instability does not develop. The collapse of the initial sphere generates a concave region that slowly accumulates negative vorticity. During the development of the ring, this region is attracted to the orbit of the main vortex and as a highly concentrated, counter-rotating vortex, launches the breakdown. The process was depicted in figure (22). Figure 22: Vorticity distribution in the development of the buoyancy-driven ring instability (\(\delta=0.1\)). 
### The ironing of Kelvin-Helmholtz vortices Decreasing \(\delta\) launches the Kelvin-Helmholtz instability in both cases (\(b=0\) and \(b=1\)). A broad discussion of its nature might be found e.g. in ([15], chapter 6). The lower the \(\delta\), the less stable the sheet is, and, generally, the lower the size of vortices arising. Nevertheless, their size does not seem to scale with \(\delta\) in a simple way - in tests, we were able to obtain the same wavenumber for two significantly different \(\delta\) with the same setup. The presence of additional vortices highlights another interesting phenomenon. Case of \(\delta=0.05\) captures it in a well-separated form. We can observe an evolution of a vortex, that arose in the wake and is advected towards the center of the ring. During this time it is subjected to intense stretching in direction of the nearby vortex sheet and contraction in the normal. This leads to the "ironing" of the vortex, which is further incorporated into the sheet and, indistinguishable, gets susceptible to instability again. The whole process reminds an iteration of Smale's horseshoe map and is illustrated in figure (23). Figure 23: Ironing of the wake vortex into a regular sheet. \(\delta=0.05\), \(b=0\) As the simulation proceeds, some structures evolve into bulky, sharp-edged shapes, which might suggest too coarse discretization. Although, figure (24) shows that they are well resolved in space. We also repeated part of the simulation when they arise with time step halved (with RK4 this should decrease the error 16 times) and noticed no significant change. In the case of buoyant production of vorticity (same \(\delta\) - equal 0.05), there is no clear wake and marked fluid, understandably, moves in more compact manner. The Kelvin-Helmholtz instability is also more intense. In fig. (25) we can observe how vortices arise, are ironed, and winded around the core, one after another. The core grows until it is torn apart by the same mechanism as in the \(\delta=0.1\) case. Figure 24: Example of the discretization. Structure visible in fig. (23d). The same case and time. Further decreasing the smoothing parameter to \(\delta=0.03\) intensifies previously described mechanisms. Nevertheless \(b=0\) case remains a coherent vortex ring, not being destabilized by negative vorticity. Due to winding, it quickly becomes very tightly packed, still exhibiting sheet structure, but thicker - see fig. (26). Such a sheet characterizes by repetitive strands of internal and external volume building a Damascus-like structure. Figure 25: Winding of a sequence of Kelvin-Helmholtz vortices around the core, \(\delta=0.05\), \(b=1\) ### The hierarchy of Kelvin-Helmholtz instabilities When \(\delta\) is decreased to \(0.008\), we observe how the sheet gets covered with tiny vortices. In the first stage, it gets wavy, then characteristic eye-reminding structures arise. They seem to effectively increase the local thickness of the sheet, stabilizing the resulting structure in high wavenumbers. The following, so-called, vortex pairing, can be seen as initial waves, but level higher, occurring in the thicker, composite sheet. We further see that they evolve into analogical, eye-reminding vortices that cover the sheet and increase its effective thickness. The resulting layer again gets unstable in even higher wave numbers and the process repeats as long until vortices get big enough to significantly affect the mean flow. 
Each iteration increases the scale roughly twice, what follows from vortex pairing mechanism. We clearly see the transition from low to high scales, which is probably associated with inverse energy transport. Figure 26: Core of the main vortex. \(\delta=0.03\), \(b=0\), \(t=3.12\). The internal region was not marked black to highlight the concentration of the vortex sheet. Figure 27: A hierarchy of Kelvin-Helmholtz instabilities (\(b=1\), \(\delta=0.008\)). The exact position of the sheet was shifted for easier comparison. Coordinates are labeled just for size estimation. To verify this we start by drawing the mean contour for each of the presented timesteps. Instead of trying to remove Kelvin-Helmholtz vortices by some kind of smoothing or averaging, we just use the shapes of the sheet from the \(\delta=0.05\) case. Although their heights are different by few percent (what is definitely too much in comparison to the size of small vortices), they seem to be de facto shifted in time. If we choose a pair of timesteps that have the tops of the sheets aligned, the rest fits well. An example is in fig. (28). We rediscretize the mean contour, increasing the number of nodes and providing exactly equal spacing in all cases. This will allow us to use FFT and test different timesteps against exactly the same modes. Because the instability takes place only in the part of the contour, we need to extract it somehow. We take an advantage of the vertical orientation of the instability (it covers mostly the "right side" of the initial sphere) and look for the maxima of \(\rho(s)\). For the time \(t\approx 0.76\), we extract the region between the first and the last maximum and trim the last, underdeveloped 25% of the obtained length. Let us call the resulting length \(L\). In later times, the instability spreads along the sheet, being in a later stage in the initial region (right side) and in an earlier stage closer to the top. We pick this maximum of \(\rho(s)\) which is right in the middle and take \(L/2\) of the mean contour before and after that point. Such domains are presented in fig. (29). We probe the induced velocities and take a Fourier transform against its (physical) length. We compute the density of specific kinetic energy associated Figure 28: Sheet with \(\delta=0.05\) used as a mean contour for case with \(\delta=0.008\) after proper shifting in time. with particular modes, understood as follows: \[k(\kappa)=\frac{|\mathcal{F}\{u_{\rho}\}|^{2}+|\mathcal{F}\{u_{z}\}|^{2}}{2} \tag{202}\] where \(\kappa\) is wavenumber and \(\mathcal{F}\) denotes Fourier transform. For convenience, we plot (30) the results against the wavelength \(\lambda\) (although no longer densities, but rather whole amounts of energy associated with discrete wavelengths). Figure 29: Paths along which Fourier transform was computed (light blue) and corresponding parts of the sheet from fig. (27) We see that the dominant mode of instability gets shifted towards higher scales, starting in \(\lambda=0.04\), through \(\lambda=0.06\) and reaching \(0.111\) around t \(=1.28\). Values are consistent with (27). In fig. (31) we also present a spectrum for the whole contour in \(t\approx 1.28\) against wavenumber. Figure 30: Kinetic energy associated with modes of particular wavelength - a selected part of the spectrum. \(\delta=0.008\), \(b=1\). Peaks for different times, labeled under the horizontal axis. A further stage of the system is presented in figures (32) and (33). 
We can notice that, despite surgery, there are still relatively thick, coherent parts of the sheet that could possibly be simplified. Their details are finer than the features of the vorticity map, so this should not affect the overall evolution significantly. Below, we also present the complete map of the vorticity distribution (34) as well as of the induced velocity (35).

Figure 31: Energy spectrum for the whole contour at \(t\approx 1.28\), \(\delta=0.008\), \(b=1\). The dashed line depicts a least-squares fit to the data marked green (slope -3.01). If the blue data is included, the slope decreases to -3.36. Wavenumbers \(\kappa\) with (approximately) \(k_{\kappa}<10^{-7}\) are absent from the flow; the slope there is a numerical feature.

Figure 32: Vortex sheet, \(\delta=0.008\), \(b=1\), \(t\approx 3.23\)

Figure 33: Selected regions from fig. (32) zoomed, with the vorticity map in the background.

Figure 34: Vorticity field, \(\delta=0.008\), \(b=1\), \(t\approx 3.23\)

### The effectiveness of surgery

The effect of surgery is presented using the example of the \(\delta=0.008\) simulation. We ran an additional, shorter simulation with surgery turned off from the very beginning. There is no actual difference in the position, shape, and evolution of the structures; slight variations are presented in fig. (36). Despite a rather conservative surgery setup, the whole vortex sheet was represented with 40% of the nodes (a reduction from 210 663 to 86 615).

Figure 35: Velocity magnitude, \(\delta=0.008\), \(b=1\), \(t\approx 3.23\)

The operation avoids an exponential increase in the number of nodes required, as shown in fig. (37a). The cost is, of course, a slight violation of mass conservation, fig. (37b).

Figure 36: Representation of a tiny structure for \(\delta=0.008\) with surgery (left) and without surgery from the very beginning (right). To get the scale, recall that the characteristic length of the whole system is \(R=1\). The dashed square in (a) and (b) depicts the region zoomed in (c) and (d). Gray lines in the left figures depict the vortex sheet.

### Accuracy and errors

Progressive violation of mass conservation is an unavoidable result of surgery. For that reason, it should not be interpreted as a symptom of poor discretization. Figure (36) suggests that it does not have much impact on the overall behavior of the system. While the sheet is getting more and more concentrated, it is likely that we will "remove" a significant amount of mass, but still keep the vorticity distribution accurate enough. The error grows faster in cases with lower \(\delta\) due to coarser discretization, which allows surgery at larger distances.

The same reason causes an error in the local vorticity generation. Global circulation, by the design of the numerical scheme, evolves exactly according to (67). Nevertheless, the unphysical shifts of segments introduced by surgery affect the local distribution of the production intensity. This, in turn, indirectly affects the moment, which otherwise is conserved by surgery quite well (38d). In cases where the moment of vorticity should simply be conserved (no production), the error is much lower.

Figure 37: The same case (\(\delta=0.008\)) simulated with and without surgery

A probable source of error that might affect the overall behavior of the system is \(\delta\). Although it is very robust in filtering out the instabilities at high wavenumbers, it still affects the far field. In [11] the reported error in the vortex ring speed (compared to the experiment) was about 15-20%, although the authors used rather stronger smoothing.
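The integral quantities discussed in this subsection (mass, circulation, and the moment of vorticity, cf. fig. (38)) can be tracked with a short diagnostic of the following kind. This is a generic sketch, not the authors' implementation: the node arrays, the per-segment circulations, and the particular moment used here are illustrative assumptions, and the paper's exact definitions (e.g., the evolution prescribed by eq. (67)) may differ.

```python
# Generic sketch: integral diagnostics for a closed, axisymmetric marked contour
# discretized by nodes (rho[i], z[i]) with one circulation value gamma[i] per segment.
import numpy as np

def diagnostics(rho, z, gamma):
    rho_c = np.append(rho, rho[0])            # close the contour
    z_c = np.append(z, z[0])
    rho_mid = 0.5 * (rho_c[1:] + rho_c[:-1])  # segment midpoints
    dz = np.diff(z_c)
    # Volume of the solid of revolution enclosed by the contour (Green's theorem);
    # for a constant-density marked fluid this is proportional to its mass.
    volume = abs(np.pi * np.sum(rho_mid ** 2 * dz))
    circulation = np.sum(gamma)               # total circulation of the sheet
    moment = np.sum(gamma * rho_mid ** 2)     # a radial moment of the circulation
    return circulation, volume, moment

def relative_errors(current, reference):
    """Relative deviation of each tracked integral from its reference value."""
    return [abs(c - r) / max(abs(r), 1e-15) for c, r in zip(current, reference)]
```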
The method of fast velocity induction also introduces an error. It probably does not affect the general evolution much, but it might be far noisier than the rest, lowering the stability of the sheet. In general, we tried to keep all the errors below 1-2%, with the exception of surgery.

Figure 38: Relative errors of predictable integrals

## Conclusion and further work

The overall formation of the vortex ring appears to converge with decreasing smoothing. With lower smoothing, the system is subjected to the Kelvin-Helmholtz instability, although the volume-averaged height and radius are only slightly affected. A significant difference is noted between the development of buoyancy-driven and mechanically generated rings. The latter, although subjected to roll-up, evolves into a coherent, stable vortex ring with a clear wake behind it. The former gets, at least partially, disintegrated by the negative vorticity that accumulates near its bottom. This qualitative difference arises because buoyancy can generate both positive and negative vorticity locally, whereas there is no vorticity source in the mechanically generated case.

The proposed methodology consists of two components: the fast velocity induction and the vortex sheet surgery. The first component decreases the computational complexity from \(N^{2}\) to approximately \(N\log N\), where \(N\) is the number of nodes. Nevertheless, it introduces an error that could affect the sheet's stability, which could be a drawback in the study of fundamental vortex phenomena. Increasing the order of accuracy of the method by accounting for higher-order moments of vorticity is left for future work.

Vortex sheet surgery provides promising results in terms of efficiency. Even with a conservative setup, it avoids exponential growth of the number of nodes. Despite the formal violation of integral mass conservation by a few percent, the results seem to retain their accuracy. Implementing it for higher-order schemes could allow for more radical exploitation of flow coherences. This would not only decrease the computational complexity but would also support the study of the elemental phenomena that play a role in turbulence.

We directly observe how vortex sheet roll-up can transport energy among the scales by increasing the effective sheet thickness, which further determines the wavenumber of the instability. The observed direction of transport is characteristic of two-dimensional flows. This might be caused by the enforced axial symmetry, which is a strong constraint on vortex stretching. Loosening this restriction could be a subject of further development.

The richness of scales present in the flow depends sensitively on the smoothing parameter \(\delta\). Keeping it constant, as in this study, could be an oversimplification of qualitative importance. For example, \(\delta\) could decrease in association with the stretching induced by the Kelvin-Helmholtz instability. This would decrease the critical wavelengths of the local instability, transporting energy to smaller scales before vortex ironing happens. On the other hand, the thickness of the vortex sheet should also increase in time, due to diffusion in reality. Both mechanisms, on average, would lead to a local equilibrium. These possibilities are what we suggest for further investigation.
2301.10862
Learning Gradients of Convex Functions with Monotone Gradient Networks
While much effort has been devoted to deriving and analyzing effective convex formulations of signal processing problems, the gradients of convex functions also have critical applications ranging from gradient-based optimization to optimal transport. Recent works have explored data-driven methods for learning convex objective functions, but learning their monotone gradients is seldom studied. In this work, we propose C-MGN and M-MGN, two monotone gradient neural network architectures for directly learning the gradients of convex functions. We show that, compared to state of the art methods, our networks are easier to train, learn monotone gradient fields more accurately, and use significantly fewer parameters. We further demonstrate their ability to learn optimal transport mappings to augment driving image data.
Shreyas Chaudhari, Srinivasa Pranav, José M. F. Moura
2023-01-25T23:04:50Z
http://arxiv.org/abs/2301.10862v2
# Learning Gradients of Convex Functions with Monotone Gradient Networks ###### Abstract While much effort has been devoted to deriving and analyzing effective convex formulations of signal processing problems, the gradients of convex functions also have critical applications ranging from gradient-based optimization to optimal transport. Recent works have explored data-driven methods for learning convex objective functions, but learning their monotone gradients is seldom studied. In this work, we propose C-MGN and M-MGN, two monotone gradient neural network architectures for directly learning the gradients of convex functions. We show that, compared to state of the art methods, our networks are easier to train, learn monotone gradient fields more accurately, and use significantly fewer parameters. We further demonstrate their ability to learn optimal transport mappings to augment driving image data. Shreyas Chaudhari\({}^{*}\) Srinivasa Pranav\({}^{*}\) Jose M.F. Moura Electrical and Computer Engineering, Carnegie Mellon University Convex Functions, Monotone Gradient, Neural Network, Optimal Transport ## 1 Introduction Convex functions have been studied and celebrated for their amenable analytic properties, relative ease of optimization, and plethora of applications. When finding solutions to signal processing problems, convex formulations enable us to easily augment objective functions and incorporate prior domain knowledge regarding the structure of the solution. However, for complex problems where prior domain knowledge is either lacking or insufficient, deep learning approaches are attractive alternatives that rely on purely data-driven, nonconvex, and overparameterized problem formulations. Deep neural networks have achieved state of the art performance on a variety of image and speech processing tasks at the cost of sacrificing many benefits of convex optimization: computational efficiency, interpretability, and theoretical guarantees. Thus, even in the age of deep learning, convex optimization methods offer significant value. Formulating convex optimization problems is an active area of research that permeates nearly all of signal processing, from source localization in communications to image deblurring [3]. However, it is often a laborious process that involves manually designing suitable convex objectives and associated convex constraints. Perhaps more important than the objective function itself is the _gradient_ of the function, since most convex problems are solved using computationally frugal gradient-based methods. Monotone gradient maps of convex functions also have critical applications in domains including gradient-based optimization, generalized linear models, linear inverse problems, and optimal transport. Therefore, in this work, we propose to _learn_ the gradient of convex functions in a data-driven manner using deep learning. Our approach is a fundamental step toward blending strengths of both deep learning and convex optimization and offers a wide array of applications in data science and signal processing. **Contributions**: We propose two neural network architectures for learning gradients of convex functions, i.e., monotone gradient functions [4]. To the best of our knowledge, we are the first to propose a method for _directly_ parameterizing and learning monotone gradients of convex functions, without first learning the underlying convex function or its Hessian. 
In contrast to current methods, our networks are considerably easier to train and generalize to high-dimensional problem settings. In this work, we show empirically the efficacy of our approach on a set of standard problems and an image color domain adaptation task. ## 2 Related Work ### Learning Loss Functions and Regularizers Parameterized, monotone gradient functions are useful when we want to optimize input data to minimize a desired loss function that is difficult to express analytically. Recent works explored _learning_ parameters of an objective function during training and then using it to optimize an input at inference time. The adversarial method for training a regularizer in [5] entails a nonconvex optimization problem to generate predictions at inference time. In contrast, an Input Convex Neural Network (ICNN) [6] constrains the learned objective function to be convex with respect to its input. To optimize a proposed input using gradient descent updates, the ICNN must be differentiated at inference time. In this work, we avoid the ex
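For readers unfamiliar with the ICNN baseline mentioned above, the sketch below (PyTorch) shows the standard construction: a scalar network made convex in its input by restricting the weights that act on hidden activations to be non-negative and by using a convex, non-decreasing activation; differentiating it with autograd then yields a monotone gradient map. This is only an illustrative sketch of that baseline, not the C-MGN or M-MGN architectures proposed in the paper; the layer sizes and names are assumptions.

```python
# Illustrative sketch of an input-convex neural network (ICNN) and of obtaining a
# monotone map by differentiating it; not the paper's proposed architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar f(x) that is convex in x: weights applied to hidden states are kept
    non-negative and the activation (softplus) is convex and non-decreasing."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hidden)
        self.Wx1 = nn.Linear(dim, hidden)
        self.Wz1 = nn.Parameter(0.1 * torch.rand(hidden, hidden))  # clamped >= 0 in forward
        self.Wx2 = nn.Linear(dim, 1)
        self.Wz2 = nn.Parameter(0.1 * torch.rand(1, hidden))       # clamped >= 0 in forward

    def forward(self, x):
        z1 = F.softplus(self.Wx0(x))
        z2 = F.softplus(self.Wx1(x) + z1 @ self.Wz1.clamp(min=0).T)
        return self.Wx2(x) + z2 @ self.Wz2.clamp(min=0).T

def monotone_gradient(f, x):
    """grad_x f(x); a monotone vector field whenever f is convex in x."""
    x = x.requires_grad_(True)
    return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]

# Usage: the gradient map can be fitted to target gradients by backpropagating
# through monotone_gradient, or evaluated directly at inference time.
f = ICNN(dim=3)
g = monotone_gradient(f, torch.randn(8, 3))  # shape (8, 3)
```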
2310.06941
A framework for characterizing covariational reasoning in physics
Covariational reasoning--considering how changes in one quantity affect another, related quantity--is a foundation of quantitative modeling in physics. Understanding quantitative models is a learning objective of introductory physics instruction at the college level. Prior work suggests that covariational reasoning in physics contexts differs in important ways from reasoning about functions and graphs in purely mathematical contexts; this reasoning is effortful in physics even for mathematically well-prepared students. In order to help students learn to reason covariationally in physics contexts, we need to characterize what we mean by physics covariational reasoning. To this end, we present a framework of covariational reasoning in physics contexts, to describe the ways that covariational reasoning is used in physics modeling. The framework can be used as a tool by which instructors can recognize physics covariational reasoning patterns and researchers can analyze student reasoning. The framework can also help inform the development of effective instructional materials and methods.
Alexis Olsho, Charlotte Zimmerman, Suzanne White Brahmia
2023-10-10T18:58:55Z
http://arxiv.org/abs/2310.06941v1
# A framework for characterizing covariational reasoning in physics

###### Abstract

Covariational reasoning--considering how changes in one quantity affect another, related quantity--is a foundation of quantitative modeling in physics. Understanding quantitative models is a learning objective of introductory physics instruction at the college level. Prior work suggests that covariational reasoning in physics contexts differs in important ways from reasoning about functions and graphs in purely mathematical contexts; this reasoning is effortful in physics even for mathematically well-prepared students. In order to help students learn to reason covariationally in physics contexts, we need to characterize what we mean by physics covariational reasoning. To this end, we present a framework of covariational reasoning in physics contexts, to describe the ways that covariational reasoning is used in physics modeling. The framework can be used as a tool by which instructors can recognize physics covariational reasoning patterns and researchers can analyze student reasoning. The framework can also help inform the development of effective instructional materials and methods.

## I Introduction

In a typical physics lecture, statements such as "...it goes like 1/r..." or "...they're inversely proportional..." are common, and often considered part of "thinking like a physicist." More specifically, these statements are examples of _covariational reasoning_--considering how changes in one quantity affect another, related quantity [1; 2]. Covariation of quantities is central to reasoning in physics, especially reasoning related to quantitative modeling. Given how productive this reasoning is in physics, we would like our students to leave our courses with a facility for covariational reasoning. However, unlike well-established content-based learning objectives (such as Newton's Laws or conservation of energy), effective methods for helping students learn to use covariational reasoning have not been identified. In addition, instructors may not recognize a need for instruction in covariational reasoning, and indeed often expect their students to come into class able to reason this way from experiences in prerequisite mathematics courses.

While many students do enter physics courses with some covariational reasoning experience from previous courses in mathematics, research has demonstrated that some important types of reasoning learned in mathematics courses do not translate directly to physics contexts [3; 4; 5]. In addition, introductory physics courses may not be succeeding at helping students learn this kind of reasoning. Recent research into measuring physics students' covariational reasoning finds that it does not improve substantially as a result of instruction in introductory-level physics, even in settings where the instruction is research-based [6].

Understanding more fully how covariational reasoning is used in physics will benefit both researchers and instructors. Covariational reasoning frameworks developed by mathematics education researchers have served as a guide for the creation of precalculus curricular materials that show promise towards helping students learn this kind of thinking [1; 7]. These frameworks have also been a productive lens in mathematics education research to identify important ways that students and experts reason.
While covariation is a relatively new research lens in physics education, early results suggest the covariational reasoning frameworks from mathematics education research are productive for analyzing physics students' work, but do not fully characterize the ways covariation is used in physics [8; 9; 10]. Based on these findings, we suggest that _physics covariational reasoning_ is distinct from covariational reasoning as taught in mathematics courses, and that physics courses aren't currently very successful at helping students learn to reason in this way. As a first step in helping improve learning outcomes, we propose that physics covariational reasoning be characterized in an assessable, fine-grained way to inform subsequent instructional materials and methods designed to help physics students develop covariational reasoning.

Using prior work by both mathematics and physics education researchers, we have developed the Covariational Reasoning in Physics (CoRP) framework to formally characterize physics covariational reasoning. The CoRP framework operationalizes physics covariational reasoning, distinguishing it from covariational reasoning as described in the mathematics education literature. The CoRP framework is informed by observations of physics experts engaged in covariational reasoning, as well as prior work in both physics and mathematics education research. We designed the CoRP framework as a tool for physics instructors and physics education researchers to catalyze change, organizing a complex phenomenon into one that can be comprehended more readily and usefully.

In this paper, we describe the development and relevance of the CoRP framework. We demonstrate its use for instructors and researchers as a tool to analyze students' covariational reasoning, which can identify topics for instruction and aid in the development of targeted interventions.

## II Background

Our characterization of physics covariational reasoning is built on work by mathematics education researchers, and is informed by both established and more recent work in physics education research. In this section, we describe the frameworks of covariational reasoning developed by mathematics education researchers. We then discuss research findings from physics education that provide some structure for experts' covariational reasoning in physics. Finally, we describe the building blocks of mathematical modeling that are essential to covariational reasoning and that appear directly in the CoRP framework.

### Covariational reasoning in mathematics education research

_Covariational reasoning_ has been defined by mathematics education researchers as "the cognitive activities involved in coordinating two varying quantities while attending to the ways in which they change in relation to each other" [1]; that is, it describes reasoning about how changes in one quantity affect changes in another, related quantity. In mathematics education research, covariational reasoning has been studied widely and has been identified as an essential part of reasoning in pre-calculus and calculus [11; 12; 13; 14; 15; 16]. Covariation has been studied in contexts of reasoning about function [2; 17; 18; 19; 20; 21], as well as graphing and the use of coordinate systems [22; 23; 24; 25; 26]. It has also been identified as necessary for reasoning about rates of change [27].
In 2002, mathematics education researchers Carlson, Jacobs, Coe, Larsen, and Hsu developed frameworks describing hierarchical levels and associated "mental actions" (MA) of covariational reasoning [1], based on studies of undergraduate math students interpreting and creating representations of functions [18]. The covariational reasoning mental actions were designed to allow researchers and educators to assess the level of students' covariational reasoning. Each of the mental actions is associated with specific behaviors related to covariational reasoning. The mental actions range from a recognition that variables are related (MA 1), to considering the specific relationship between the variables, including the rate of change and the rate of the rate of change (MA 5). In 2017, mathematics education researchers released an updated framework of covariational reasoning [2], incorporating research performed subsequent to the development of the original framework. Table 1 shows a summary of the relevant aspects of the 2002 and 2017 covariational reasoning frameworks, adapted from Jones [28].

### Covariational reasoning in physics education research

In physics education research, reasoning about how two or more quantities change with respect to one another often falls under the names of proportional reasoning or scaling [29; 30; 31; 32]. Proportional reasoning typically refers to directly proportional relationships (i.e. \(F\propto a\)), and has at times been extended to refer to non-linear relationships (i.e. \(U\propto-1/r\)). Scaling is often used in geometric contexts; however, it is also used throughout the literature to refer to relating discrete changes of two quantities (e.g., "if I double this quantity, what happens to that quantity?"). In the language of covariational reasoning, we consider proportional reasoning to be _linear_ covariational reasoning and scaling to be an instance of _discrete_ covariation.

Work in physics education research has demonstrated that the language of covariational reasoning from mathematics education is helpful in analyzing novice and expert work [33; 34; 35; 36; 37; 38; 9; 8]. Recent work by Zimmerman, Olsho, Loverude, and White Brahmia has sought to explore the extent to which the mathematics covariational reasoning mental actions framework can be used to analyze _physics_ covariational reasoning of physics experts engaged in modeling tasks [39; 10]. The Zimmerman et al. study involved individual, think-aloud interviews with 20 physics experts (graduate students and faculty) engaged in tasks designed to elicit covariational reasoning. The tasks prompted the participants to create a graph that related two quantities. For example, one task depicted a Ferris wheel cart in motion (see Fig. 1) and asked participants to relate the height of the cart and its total distance traveled. The overarching results of the study were that:

1. Physics experts demonstrate mathematical reasoning that is consistently woven into physical sensemaking of the quantities involved [39].
2. Physics experts engaged in patterns of covariational reasoning and modeling that were not well described by the mathematics covariational reasoning framework summarized in Table 1 [10].

Figure 1: A still from a covariational reasoning graphing task prompt that asks experts to create a graph relating the distance traveled by a Ferris wheel cart and the height of the cart.
### Foundations of physics covariational reasoning

Research about the ways students and experts reason about mathematics and quantities in the context of physics underlies our work on covariation. In this section, we offer a brief overview of the fundamental ideas and terminology that may be unfamiliar to the reader but are central to our characterization of covariation in physics.

#### ii.3.1 Mathematization and Conceptual Understanding

_Mathematization_ in physics is the association of a system or context from the physical world with a mathematical representation (e.g., symbolic or graphical). One aspect of mathematization in physics is _quantification_--the process of conceptualizing a system and a quality of it so that the quality includes a value, a unit of measure, and very often a sign [27; 40]. Research in mathematics and physics education suggests that mathematization and quantification are challenging for students; for example, it has been demonstrated that students are unlikely to think of compound quantities, such as momentum, as quantities unto themselves [40; 41; 42; 11]. In addition, students likely come into physics courses with some foundational mathematical resources but may not yet use them productively for mathematization in physics [44; 45; 30; 40; 46; 34]. For example, introductory physics students typically have mastered multiplying and dividing numbers; however, they may struggle to conceptualize product quantities (such as momentum and energy) and ratio quantities (such as velocity and acceleration) as distinct from the quantities that contribute to their calculation [47; 30; 46]. Mathematization, including quantification, depends on connecting mathematical ideas with what they represent in the physical world. Students engaged in modeling tasks have been shown to refer back to the context of the task consistently throughout a productive modeling process as part of making sense of and validating their model [48].

_Proceptual understanding_ is defined by Gray and Tall as a combination of _pro_cedural mastery and _conceptual_ understanding [49]. For example, in the context of fractions, "the symbol \(\frac{3}{4}\) stands for both the process of division and the concept of fraction"; that is, a student with a proceptual understanding of fractions would move fluidly between the procedure of dividing 3 by 4, and the instantiation of the fraction \(\frac{3}{4}\) as a precise quantification of portion. Maintaining a proceptual view of a mathematical representation--consistently making sense of both the mathematical formalism and the physical meaning of the expression--is a key part of modeling, and therefore essential for covariational reasoning.

Mathematics and physics meaning are interwoven in physics. This blended nature of physics and mathematics reasoning is an important facet of the body of work that describes ways in which reasoning mathematically in physics contexts is not the same as reasoning mathematically in purely mathematical contexts [50; 51; 52; 43; 3; 5]. Several researchers consider this difference through the lens of semiotics, highlighting the importance of symbols and the meaning they carry in physics [53; 54].

#### ii.3.2 Symbols and Quantities

The symbolic form framework was developed to explain how successful physics students understand and construct equations [50].
In a study of calculus students solving problems in the contexts of areas and volumes, mathematics education researchers observed that students were more productive when they perceived symbols as representing _measures_, including both a value and a unit, throughout their reasoning [55]. This led to the introduction of the _measurement_ symbolic form, which includes both a value and an associated unit. The _quantity_ symbolic form, introduced in physics education research, modifies this form by including sign as an essential element of a physics quantity [40]. The quantity symbolic form represents an important difference between _quantity_ in physics and _measurement_ in mathematics by including connection to the physical system itself.

\begin{table} \begin{tabular}{l l l l} Label & Mental Action [1; 2] & Brief Description [28] & Example Behavior \\ \hline MA 1 & Recognize Dependence & Identify variables that are dependent & Labeling axes \\ MA 1.5 & Precoordination & Asynchronous changes in variables & Articulating that first, one quantity changes, and then the other changes \\ MA 2 & Gross Coordination & General increase/decrease relationship & Describing that as one quantity increases, another decreases \\ MA 3 & Coordination of Values & Tracking variables' values & Plotting points \\ MA 4 & Chunky Continuous & Values changing in discrete chunks & Articulating that as one quantity doubles, the other triples \\ MA 5 & Smooth Continuous & Continuous, simultaneous changes & Describing that the quantities vary together, smoothly and continuously \\ \end{tabular} \end{table} Table 1: A summary of the covariational reasoning mental actions (MA) frameworks developed by mathematics education researchers [1; 2]. Summary adapted from Jones [28].

In mathematics education research, literal symbols (letters, sometimes loosely referred to as variables) are classified into several categories including: variables, which represent values that can vary (e.g. \(x,y\)); constants, which represent values that are always the same (e.g. \(\pi\)); parameters, which represent values that are not changing in that context (e.g. \(m\) and \(b\) in \(y=mx+b\)); generalized numbers, which are used in definitions of theorems (e.g. \(a\) and \(b\) in \(a+b=b+a\)), or could represent a quantity (e.g., a block of mass \(M\)); unknowns, which represent a quantity to be found (e.g. \(5x=13\)); and labels, which represent units of measure (e.g. "m" for "meters") [56]. Research has demonstrated that literal symbols may invoke student difficulties due to the various roles that they play in a symbolic expression [56; 57]. In mathematics textbooks, problem solving with measures typically involves a measure symbolized by a letter, and often does not include units. The symbols are used throughout the problem; the units are declared at the outset, and tagged on to the solution of the problem. Units are not typically central to mathematical reasoning as part of instruction.

Physics, in contrast, often assumes that symbols carry physical information [3]. For example, positive and negative signs play an essential role in physics symbolizing in a way that is sometimes distinct from mathematics, and take on different meanings depending on the physical context [58]. Physics symbolizing can be challenging for novices to decode. Prior research demonstrates that physics students are more successful at solving physics problems when physical quantities are given as their numeric values instead of as literal symbols [59].
Mathematization, conceptual understanding, symbols, and reasoning about quantity are not themselves covariational reasoning; nor are they uniquely important to physics covariational reasoning. They do, however, provide the underpinnings for productive covariation in physics, and play a foundational role for the framework described in the following sections.

## III Characterizing covariational reasoning in physics

In this section, we describe the Covariational Reasoning in Physics (CoRP) framework. We created the CoRP framework by synthesizing the mathematics covariational reasoning frameworks, the recent study of physics experts' covariational reasoning, and other prior work in both mathematics and physics education. The CoRP framework operationalizes the use of covariational reasoning in introductory-level physics, though its application spans the physics curriculum and is not limited to introductory-level physics content. While the CoRP framework is informed by algebra, precalculus, and calculus, this level of mathematics is foundational to quantitative reasoning across the physics major.

The framework consists of three sections: Proceptual Understanding **(PU)**, Physics Mental Actions **(PMA)**, and Expert Behaviors **(EB)**. The Proceptual Understanding section encompasses mathematization and the interplay of quantities and models (symbolic and graphical) in both mathematics and physics contexts. Generally speaking, the proceptual understanding portion of the framework describes mental resources for physics covariational reasoning. _Mathematical Foundations_ describes mathematics that is necessary specifically for physics covariational reasoning, and that students may bring with them from previous math courses. _Mathematization in Physics_ describes applications of mathematical ideas to physics contexts underlying physics covariational reasoning. The Physics Mental Actions are parallel to the Mental Actions presented by mathematics education research (see section IIA and Tab. 1) but are attentive to the ways in which recent work suggests that physics experts may reason differently than mathematicians about change and rates of change [10]. The Physics Mental Actions portion of the framework describes how physics experts think about explicit changes in one quantity, and the effect of those changes on another quantity. Finally, the Expert Behavior section describes what experts do when generating models [10], using a combination of Proceptual Understanding and the Physics Mental Actions. These behaviors are emergent from prior work [10]. In this paper, we suggest how they might appear in broader contexts. Some of these behaviors describe how expert physicists reason using the Physics Mental Actions in ways that are distinct from behaviors described in the mathematics education research literature. Others describe behaviors that rely on proceptual understanding of the relevant mathematics and physics content.

Although the framework shown in Table 2 is presented as three distinct parts, there is significant interaction between the three parts when physics experts reason about covarying quantities [10]. Physics covariational reasoning is complex; the CoRP framework represents one possible operationalization that teases apart foundational reasoning and reasoning about how quantities change with respect to each other.
### Proceptual Understanding

The Proceptual Understanding portion of the CoRP framework identifies aspects of proceptual understanding of mathematical foundations and mathematization as used in physics covariational reasoning. This section of the framework is divided into math and physics sections to attend to the foundational mathematical reasoning that students bring to physics courses, and the distinct ways that physics uses that mathematical reasoning. Attending to this distinction is an important part of physics instruction [43].

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{PROCEPTUAL UNDERSTANDING} \\ \hline **I. Mathematical Foundations** & **II. Mathematization in Physics** \\ A. Mathematical Symbols (mathematics) & A. Mathematical Symbols (physics) \\ B. Common Operations & B. Constructing Quantities \\ C. 7 Parent Functions & 1. Mathematical Structure \\ & 2. Composite physical quantities \\ & C. Variable Quantities \\ \hline \multicolumn{1}{c}{PHYSICS MENTAL ACTIONS} & \multicolumn{1}{c}{EXPERT BEHAVIORS} \\ \hline **PMA 1:** Related Quantities & **I. Reasoning Devices** \\ **PMA 2:** Trend of Change & A. Proxy Quantity \\ **PMA 3:** Coordination of Values & B. Regions of Consistent Behavior \\ **PMA 4:** Discrete Change & C. Physically Significant Points \\ **PMA 5:** Small Chunks of Change & D. Neighborhood Analysis \\ & E. Compiled Models \\ & **II. Modeling Modes** \\ & A. Function Knowing \\ & B. Function Choosing \\ & C. Symbolic and Graphical Generation \\ \hline \hline \end{tabular} \end{table} Table 2: Current version of a framework to describe the use of covariational reasoning in physics modeling

#### ii.1.1 PU I. Mathematical Foundations

The Mathematical Foundations portion of the CoRP framework describes the aspects of mathematical reasoning that are necessary for productive covariational reasoning in physics. Generally speaking, introductory physics students see and learn these ideas from prerequisite math courses, though they may not display the same fluency as physics experts.

_A. Mathematical Symbols_ is characterized by using symbols to represent mathematical concepts and _measures_ (i.e., a value and an associated unit, as described by the measurement symbolic form [55]). This framework element includes, but is not limited to, symbols that represent values (e.g., \(x\), \(\pi\)) and operations (e.g., \(+\), \(-\)) as used in mathematics.

_B. Common Operations_ is characterized by the use of operations that are ubiquitous in introductory physics and calculus classrooms, including but not limited to addition, multiplication, subtraction, division, taking a derivative or limit, and integration. Making sense of the meaning and contextual relevance of these operations is essential for covariational reasoning in physics.

_C. Parent Functions_ is characterized by reasoning about a handful of common functions, chosen because they are the most common functions used in introductory physics:

* linear (\(y\propto x\)),
* quadratic (\(y\propto x^{2}\)),
* sine / cosine (\(y\propto\sin x\)),
* inverse (\(y\propto 1/x\)),
* inverse square (\(y\propto 1/x^{2}\)),
* exponential (\(y\propto e^{x}\)), and
* logarithmic (\(y\propto\ln(x/x_{0})\)).

We adopt the mathematics language of "parent functions," commonly used when teaching functional transformations, to illustrate that we are referring to the functional relationship between the variables. Essential background knowledge involves a general familiarity with the behavior of parent functions, including:

* the ability to sketch a graph of a given parent function, or associate a graph with a parent function,
* the ability to describe the general behavior of the function, including concavity and end behavior, and
* familiarity with function transformations (stretching, translation, etc.) in order to use them in a wide variety of scenarios.

#### ii.1.2 PU II. Mathematization in Physics

Mathematization in Physics describes the foundational quantitative reasoning about physics quantities that is necessary for productive physics covariational reasoning. Introductory physics students may not have ample experience with mathematization from prior coursework.

_A. Mathematical Symbols_ is characterized by symbolizing values and physical constants (e.g., \(G\), \(\pi\)), operations, and physical quantities, which might be a variable, parameter, or general variable in a particular context. This also includes recognition, based on context, of a symbol as a representation of a given quantity.

_B. Constructing Quantities_ is characterized by using common operations to construct a quantity, or to make sense of how a quantity is constructed. Here, we focus on three aspects of constructing quantities.

1. Reasoning about the _mathematical structure_ of a quantity includes recognition of features of the representation of the quantity--for example, whether the quantity includes a direction or a sign. This category includes recognizing physical attributes of a given quantity (e.g., whether it can be positive or negative, or discrete or continuous) and understanding how those attributes will be represented symbolically or graphically.
2. Combining two or more quantities to create a new, _composite physical quantity_ is ubiquitous in physics. Most physics quantities are product or ratio combinations of the seven base quantities (length, time, amount of a substance, electric current, temperature, luminous intensity, mass). Graphical features such as slope (a ratio) and area under the curve (an accumulated, multiplicative quantity) are important composite physical quantities.

_C. Variable Quantities_ is characterized by a recognition of which quantities in an equation make sense to vary, how they vary, and which other quantities don't vary. Physical models typically involve many symbols, in which the literal symbols might represent constants, general variables, parameters, or varying quantities; the classification of a particular literal symbol can change from context to context--sometimes even within the same problem. A proceptual view of variable quantities includes being able to reason about which quantities are varying with respect to one another, and which represent parameters or constants. Variable Quantities also includes paying attention to the units of a function. For example, since a graph is a representation of how the dependent variable changes relative to changes in the independent variable, the points on the curve take on the units of the dependent variable, and the slope is a rate of change, which has units of the dependent variable over the independent variable.

### Physics Mental Actions

Physics Mental Actions (PMA) describe the explicit consideration of the change in one quantity as the result of the change in another quantity. Though the PMA are similar to the mental actions described by mathematics education researchers (see Tab. 1), they differ in a key way: a focus on _quantities_ as the objects of covariation.
This results in not only a superficial change (i.e., using the word "quantity" rather than the word "variable" in the descriptions) but also modifications that make them more consistent with the ways that physics experts use them with physics quantities. Expert-like reasoning about a quantitative relationship between quantities is generally not separable from reasoning about the physics quantities themselves [39, 60]. The Physics Mental Actions are often guided by what is physically reasonable [10]. For example, physics experts often rely on understanding of how a quantity can change (e.g., continuously or discontinuously), or whether a change in one quantity would, in the real world, cause the change in another. The PMA are not hierarchical in the sense that PMA 5 is "better" than PMA 1; however, with the exception of PMA 3, the PMA are listed in order of increasing specificity about how quantities are related. PMA 3 is not included in this hierarchy because it does not include reasoning about change; we include it in the framework nevertheless because it describes a way that physicists relate quantities.

In the covariational reasoning frameworks developed by mathematics education researchers, only one mental action (MA 4) is associated with consideration of discrete change [1, 2, 28]. All instances of discrete covariation are associated with MA 4, regardless of the size of the discrete "chunk." Different considerations of how quantities change in discrete chunks led to the development of two PMA related to discrete covariational reasoning [15]. PMA 4 is most similar to what physics education research has termed scaling [32, 47, 61], and typically involves large, often integer-valued chunks. Prior work suggests that physics experts rarely if ever consider smooth, continuous changes of multiple quantities simultaneously [10], as described by mathematics education researchers' MA 5; instead, physics experts engaging in PMA 5 are likely to consider small "chunks" of change (e.g., considering \(dx\) to be a very small \(\Delta x\)) [10, 62]. For this reason, we do not include a PMA that is analogous to MA 5, but think of PMA 5 as the "most continuous" instance of discrete covariation.

**PMA 1: Related Quantities** PMA 1 is characterized by a recognition that one quantity is related to another quantity. Some related behaviors include labeling axes of a graph, and a verbal acknowledgement that if one quantity changes the other will as well. This typically includes a choice of which quantity is the independent quantity and which is the dependent quantity, often guided by understanding a cause and effect relationship between the quantities. The recognition that the potential energy of a spring changes as a result of stretching the spring from its equilibrium length is an example of PMA 1.

**PMA 2: Trend of Change** PMA 2 is characterized by describing whether a quantity will increase or decrease as a result of another quantity increasing or decreasing. Some related behaviors include drawing arrows to indicate increases or decreases, drawing graphs that represent linear approximations, and verbalizing the trend of change. The recognition that the potential energy of a spring increases as the spring is stretched is an example of PMA 2.

**PMA 3: Coordination of Values** PMA 3 involves tracking the values of two quantities to create a discrete set of associated pairs. Related behaviors involve plotting points or creating a table of values.
This does not necessarily entail consideration of simultaneous change of both quantities, but rather considering multiple values of one quantity and determining the associated values of another quantity for those values. Determining the value of the potential energy of a spring for a discrete set of values of the amount that a spring is stretched from its equilibrium length is an example of PMA 3.

**PMA 4: Discrete Change** PMA 4 refers to reasoning around what happens to one quantity if another, related quantity changes by a substantial, discrete amount. It is most often characterized by what is sometimes called "scaling" in physics education research: considering how changing one quantity by a multiplicative factor affects another quantity (e.g., "if I triple x, what happens to y?"). In some instances, it is consideration of change by a substantial, additive amount. Some related behaviors include plugging in numbers and comparing the change, considering how the dependent quantity changes with a substantial change in the independent quantity, and verbalizing multiplicative changes. Recognizing that doubling the stretch of a spring results in the spring potential energy increasing by a factor of four is an example of PMA 4.

**PMA 5: Small Chunks of Change** PMA 5 is characterized by reasoning about the resulting change in one quantity due to small, discrete changes made to another, related quantity. This reasoning is grounded in examining what happens for _small_ pieces of change. Some related behaviors include "zooming in" to a graph by examining the slope for a small region [15], verbalizing an awareness that a change is small compared to the scale of the problem, and moving fluidly between representations of discrete change and derivative notation [62; 63]. Recognizing that the potential energy of a spring changes slowly near the equilibrium point, and more quickly further from the equilibrium point, is an example of PMA 5.

### Expert Behaviors

In this section, we describe a number of specific physics covariational reasoning behaviors that we call _Expert Behaviors_. The Expert Behaviors portion of the CoRP framework is not independent of either Proceptual Understanding or the Physics Mental Actions; rather, expert behaviors are dependent on a proceptual understanding of the underlying mathematics and the relevant physics quantities, and use of the Physics Mental Actions. The expert behaviors described below are based on reasoning and approaches seen in physics experts' reasoning while completing graphing tasks designed to elicit covariational reasoning [10].

#### iii.3.1 EB I. Reasoning Devices

A reasoning device is a tool or small piece of reasoning that is employed while reasoning covariationally [10]. Some of the reasoning devices are associated with the Physics Mental Actions. In this section, we define each of the CoRP framework reasoning devices. Then we give an example of how the reasoning devices could be used together to create a graphical representation using a context typical of an introductory physics back-of-chapter problem.

_A. Proxy Quantity_ describes when a quantity is substituted for another while covarying two quantities. Use of a proxy quantity may allow for easier covariational reasoning. Use of proxy quantities is fundamentally related to related quantities (PMA 1), the recognition that two quantities are related. A proxy quantity may be used to make a novel physics context into a more familiar context.
For example, for constant motion contexts, time is commonly used as a proxy quantity when the task specifies "total distance traveled" as one of the quantities to be covaried [10]. Considering how a quantity changes with respect to time is oft-practiced in physics; that is, time is very frequently the independent quantity involved when related quantities (PMA 1) is engaged. Using time as a proxy quantity in this way often allows for easier, more rapid covariational reasoning, or the application of a familiar model. Use of time as a proxy quantity is consistent with work by mathematics and chemistry education researchers [64; 65].

A proxy quantity may also be used because of how particular quantities are understood. For example, electric potential is typically understood as a function of position \(r\) as measured relative to the source (i.e., \(V=V(r)\)). When asked to covary potential \(V\) with a quantity other than \(r\), it is more familiar to use \(r\) as a proxy quantity [10]. This use of proxy quantity differs from the use of time as a proxy quantity in one important way: \(r\) is not necessarily directly proportional to the quantity it substitutes for. This, again, engages PMA 1, as some quantities are considered to be functions of another quantity, and are therefore necessarily related.

_B. Region of Consistent Behavior_ describes when a domain is separated into sections that could be modeled by the same function, or where the behavior of the relevant system is constant or consistent in some way. An expert may break up a domain into regions of consistent behavior in order to associate a single trend, function, or model with each section as appropriate, and may consider each region separately when constructing a graphical or symbolic representation. Sometimes, regions are determined by whether a quantity is increasing, decreasing, or constant; in these cases, this behavior is related to trend of change (PMA 2). For example, a graphical representation of a car speeding up steadily, then moving at a steady speed, and finally slowing down steadily between two stoplights would include three distinct regions on a graph of \(v\) vs \(t\).

_C. Physically Significant Points_ describes the identification of a point that holds physical meaning for a given context. An expert may choose and plot a small number of physically significant points to begin to construct a model. Examples of physically significant points are: a boundary between regions of consistent behavior, a bound of a quantity (a local or absolute maximum or minimum), or a point where a quantity changes the most or least rapidly. These points can be used to guide the construction of either a graphical or symbolic representation. Identification of physically significant points is an example of coordination of values (PMA 3).

_D. Neighborhood Analysis_ involves use of the Physics Mental Actions around physically significant points. This device is typically used in the construction of a graphical representation by considering the rate of change of the quantities around chosen points, and drawing a small line segment centered on the point to indicate the slope of the graph at that point. In other cases, neighborhood analysis can be used to check the appropriateness of a constructed representation. Experts may consider whether a constructed graphical representation models the behavior of the quantities correctly near physically significant points.
This reasoning could be extended to the construction and checking of a symbolic representation by considering how the derivative of an expression should behave at certain values; for example, whether the derivative should be positive, negative, or zero at a point. Neighborhood analysis involves reasoning about small chunks of change (PMA 5), as it involves covariation of quantities for small deviations near a physically significant point.

_E. Compiled Models_ are usually rapid, almost-automatic associations between a relevant parent function and a given physics context. Experience with physics problem-solving often includes having an almost-automatic association between a physics context and a function [10]. For example, a physics expert may associate: circular motion with a sinusoidal function; the trajectory of a projectile with a quadratic function; or the electric potential near a point charge with the function 1/\(r\). Use of a compiled model requires trivial use of PMA 1 (a recognition that the quantities are related), but typically does not rely on other Physics Mental Actions.

**Example Application:** To show how the reasoning devices can be used to create a graphical representation, we use the example shown in Fig. 2. The task asks for a graph of the gravitational potential energy of the car-Earth system for a toy car on a track. While this specific question has not been used as an interview task with physics experts, the tasks in the related study are quite similar in structure [10]. Thus, we are not claiming that the following is evidence of expert reasoning. We present what follows to exemplify use of the reasoning devices (EB IA-E) as was observed with similar tasks [10].

To begin this task, one could recognize that the potential energy of the system is directly proportional to the height of the car. Therefore, the height of the car can be used as a proxy quantity (EB IA) for gravitational potential energy. Next, one could break the motion of the task into several regions of consistent behavior (EB IB): the flat portion at the top of the hill, the hill portion, the flat portion of the track between the hill and loop, and the circular loop. This could be done in tandem with the identification and plotting of physically significant points (EB IC): the potential energy is at a maximum at the beginning of the hill, when the car has traveled zero distance; the potential energy is at a minimum at the end of the hill, when the car is about half-way through its journey. Similarly, one might note the beginning of the loop, the top of the loop, and the end of the loop (see Fig. 2).

One might then reason about what happens between the plotted points. During the hill portion, one may use neighborhood analysis (EB ID) to identify that the potential energy is changing most rapidly toward the middle of the hill, and indicate that with a steep, negative line segment about halfway between the points at the top and bottom of the hill. For the circular section, one may recognize that the height is related to time (proxy quantity, EB IA), and apply a compiled model (EB IE) in which the height varies sinusoidally with time for circular motion contexts. Finally, an expert may recognize that the height changes smoothly, and connect the points with a smooth curve.

Figure 2: Task and example graph provided here to exemplify use of the Expert Behavior reasoning devices. For this task, create a graph of the potential energy of the car-Earth system as a function of distance traveled for a toy car of mass \(m\).
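As a brief worked sketch of the compiled model invoked for the loop (added here for illustration; it assumes an idealized circular loop of radius \(r\), with \(s\) the distance traveled along the loop measured from its lowest point, neither of which appears in the original task statement): the angle traversed is \(\theta=s/r\), so the height above the bottom of the loop and the corresponding potential energy are \[h(s)=r\big(1-\cos(s/r)\big),\qquad U(s)=U_{\mathrm{bottom}}+mgr\big(1-\cos(s/r)\big),\] which vary sinusoidally with the distance traveled, consistent with the compiled model for circular-motion contexts described above.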
#### iii.2.2 EB II. Modeling Modes

In addition to the reasoning devices described above, we include approaches to modeling novel scenarios that have been observed in physics experts engaged in covariational reasoning tasks, referred to as _Modeling Modes_ [10]. Modeling Modes describe the ways that experts may construct a specific mathematical representation for a given physics context. For the purposes of this work, we define a mathematical representation as a symbolic expression, or a graph with or without an accompanying symbolic expression [66]. The Modeling Modes involve proceptual understanding of the relevant mathematical foundations and physics quantities, Physics Mental Actions, and Reasoning Devices. In the descriptions below, we use some examples that are familiar to the reader to help illustrate the reasoning modes.

_A. Function Knowing_ is a behavior that relies on PMA 1 (the recognition that two given quantities are related to each other), use of a Compiled Model (EB IE) that associates a parent function with a given context, and proceptual understanding of the relevant parent function (PU IC) itself to associate a quantitative representation to a given physics context. Function Knowing can be used to create a graphical representation by considering relevant parameters, plotting a few points based on those parameters, and connecting those points with the known function (e.g., for an object in circular motion at a steady speed, the initial conditions such as starting position and radius of the circle can be used with a familiar model for uniform circular motion). Relevant parameters can be similarly used to create a symbolic representation for the specific context. This type of "knowing" is typically guided by the physics content and draws on well-tested models of nature that are familiar to the expert. Function Knowing can be accessed in a wide range of physics contexts, based on well-established connections between the context and a function-based model, such as the connection between uniform circular motion and sine or cosine functions; potential near a point source and 1/\(r\); as well as many, many contexts with direct proportionality.

_B. Function Choosing_ is the behavior of using a combination of physically significant points (EB IC), trend of change (PMA 2), and compiled models (EB IE) to select one of several possible functions that might be fruitful in a particular context. This process is informed by physics content knowledge of the context, and a proceptual understanding of parent functions. For example, to generate a model of the relationship between the gravitational potential energy of a system of two objects and the distance separating the objects, one may initially identify a trend (PMA 2)--that is, that gravitational potential energy decreases as the distance between the objects decreases. Lacking additional information, assumptions about the context guide the choice of a compiled model (EB IE): assuming that the system consists of a relatively small object near the surface of a very massive planet may lead to the assumption that the change in potential energy is linear, whereas farther from the surface of the planet, the potential energy could be assumed to decrease as 1/\(r\). To create a graph, one may define symbolic values for initial and final positions and potential energies (EB IC), plot those points, and connect them with the function chosen based on any assumptions.
Similarly, to create a symbolic expression, one could symbolize the relevant quantities, and then use them with the chosen compiled model (EB IE)--in this case, an association between the expressions \(U=mgh\) and \(U=-\frac{GMm}{r}\) with gravitational potential energy. _C. Symbolic and Graphical Generation_ refers to behaviors that use consideration of covarying quantities to generate an explicit covariational relationship--either with a generalized symbolic representation or a qualitatively correct graph. If a graph is generated, it may (or may not) be used to identify an appropriate parent function that can be used as the basis of a refined symbolic expression. Symbolic generation is based on prior research in physics education. Sherin's work on symbolic forms focuses on how physics students interpret and create symbolic expressions [50]. In that work, he describes how students might use small pieces of reasoning about change to develop a generalized quantitative model. For example, one student is reported to state: "The coefficient of friction has two components. One that's a constant and one that varies inversely as the weight." The student then develops the expression: \(\mu=\mu_{1}+C\frac{\mu_{2}}{m}\)[50]. Notably, the product of symbolic generation is not necessarily a fully developed quantitative model. Rather, it is characterized by using symbols to express a covariational relationship between quantities. Symbolic and graphical generation typically occurs when there is no known quantitative model for a context [10]. One approach begins with the identification of physically significant points (EB IC) and neighborhood analysis (EB ID), which involves consideration of how quantities covary around the physically significant points (PMA 5). Small line segments representing the approximate rate of change of the height with respect to time around those points can be drawn and then connected together in ways deemed appropriate for the context. Consider the _Spherical Bottle_ task shown in Fig. 3, in which water is poured at a constant rate into a spherical bottle of radius \(R\). Variations of the bottle-filling task have been used to assess students' covariational reasoning [1, 67]. To create a graph of how the height \(h\) of the water in the bottle varies with time, one could consider three physically significant points (EB IC): a point near the base of the bottle, a point at the middle of the bottle, and a point near the top of the bottle. The height of the water at these points will be \(0,\ R\), and \(2R\), respectively. After plotting these points, neighborhood analysis (EB ID) with covariation of height and time (PMA 5) around them can be done. At the top and bottom, where the bottle is more narrow, the height will change quickly. At the middle, where the bottle is wider, the height will change more slowly. Based on this analysis, small line segments representing the rate of change of the height with time can be added around the identified points. The line segments should be steeper at the points at \(h=0\) and \(h=2R\) and less steep at \(h=R\). Given that the height should be changing smoothly, with no discontinuities or sharp points, the line segments can be connected with a smooth curve. An expert may (or may not) identify that the created graph looks like a cubic function, and create a symbolic expression that represents the graph. 
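The qualitative argument above can also be checked against a short worked version of the task; the constant pour rate \(Q\) is a symbol introduced here only for illustration. The volume of water filled to height \(h\) in a sphere of radius \(R\) is the volume of a spherical cap, so

\[V(h)=\pi h^{2}\left(R-\frac{h}{3}\right),\qquad\frac{dV}{dh}=\pi h\,(2R-h),\qquad\frac{dh}{dt}=\frac{Q}{\pi h\,(2R-h)}\,.\]

The rate of change of the height is smallest at \(h=R\), where the bottle is widest, and grows rapidly as \(h\to 0\) or \(h\to 2R\), which matches the steep-flat-steep curve obtained from the physically significant points and the small line segments drawn around them.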
## IV Relevance of the covariational reasoning framework In this section we demonstrate how the CoRP framework can be used to analyze student reasoning. In particular, the framework allows us to identify reasoning resources that students have about covariation at various levels of instruction, allowing for a glimpse into reasoning that might be in their _zone of proximal development_--the space between what a learner can do without assistance and what a learner can do with instruction or collaborative help [68]--at each stage. The analysis of students at various levels of physics can contribute to characterizing the novice-to-expert continuum. We begin with analyses of math students' covariational reasoning, as reported in mathematics education literature. These analyses are useful because many students develop covariational reasoning in math courses first, before coming to physics. As such, we suggest that algebra and calculus students are representative of students coming into introductory physics, and analysis of their covariational reasoning through the lens of the CoRP framework can help us identify the resources they bring with them from mathematics courses. These analyses also indicate opportunities for targeted instruction to develop physics students' physics covariational reasoning. Next, we analyze studies from the current literature of physics students at both introductory and upper-division undergraduate levels, to highlight some of the physics covariational reasoning used across the major. By looking at the reasoning of students through the lens of the CoRP framework, starting with students enrolled in math courses and progressing to upper-division physics students, we can characterize some trends in physics covariational reasoning development with progressively more physics instruction. ### Identifying resources of algebra and calculus students Mathematics education researchers Johnson, McClintock, and Hornbein report on a case study of high school algebra student "Ana" [64], who participates in a number of interviews about various covariational reasoning tasks. For a variation of a bottle-filling task (similar to the task shown in Fig. 3, above), Ana is shown an animation of water being poured into a spherical bottle with a cylindrical neck at the top, and is asked to create a graph to relate the amount of water in the bottle and the height of the water in the bottle. Initially, Ana creates a linear graph, saying "the height would still increase...and the volume would still increase." This is consistent with identifying a trend of change (PMA 2), as Ana is recognizing that the height increases as the volume increases. After some probing from the interviewer, Ana creates a piece-wise linear graph, saying, ...the water is filling up very fast, and then it's like slower, and then it's really fast. Really fast would happen here [indicating neck of bottle], and kind of fast here [indicating bottom of bottle], and slow here [indicating middle of spherical portion of bottle]. Through the lens of the CoRP framework, we characterize this statement as proto-expert-like: Ana is beginning to identify Regions of Consistent Behavior (EB IB), and is doing an early version of Neighborhood Analysis (EB ID) by considering differences in the rates at which the height changes in different regions in the bottle. However, rather than considering "instantaneous" rates of change around physically significant points (EB IC), Ana seems to consider average rates over larger regions. 
This is consistent with use of larger chunks of change (PMA 4), rather than the small chunks of change (PMA 5) that experts typically consider for neighborhood analysis. According to the sections of transcript provided by the researchers, Ana does not consider what the physical significance of a linear (or piecewise linear) graph would be and does not generally consider the meaning of the slope of the graphs she constructs.

Figure 3: Top: Bottle-filling task figure [1]. Bottom: A solution to the bottle-filling task, showing expert behaviors that may be used.

Mathematics education researchers Carlson, Jacobs, Coe, Larsen, and Hsu report on 20 college-level calculus students' performance on the same variation of the bottle-filling task (a spherical bottle with cylindrical neck) [1]. Carlson et al. report that only one student ("Student D") created a linear graph, and the student justifies their graph saying "as the volume comes up, the height would go up at a steady rate." While it is tempting to classify this reasoning as indicative of PMA 2 (Trend of Change), we believe that because the student makes a connection between the linearity of their graph and the identified "steady" rate that the reasoning is more consistent with application of a compiled model (EB IE) or function knowing (EB IIA). A compiled model (EB IE) is not associated with the Physics Mental Actions--that is, there is rarely explicit consideration of how a change in one quantity affects another quantity. Rather, use of a compiled model is typically built on familiarity with and understanding of a given context and function. In this case, use of a straight line to indicate a constant rate implies understanding of why a constant slope is associated with a constant rate. We note that linear functions are often a default assumption for unfamiliar contexts for both students and experts [9; 10; 39]. The remainder of the students in the study by Carlson et al. created non-linear graphs, suggesting an awareness that the height of the water is not directly proportional to the amount of water in the bottle. A majority of the remaining students created graphs that were increasing and entirely concave-up or concave-down. "Student B" explains their concave-down graph by saying, Okay, the more water, the higher the height would be...Right here [indicates bottom of bottle] the height will be zero and the volume is zero. As you go up, a little more height increases and the volume increases quite a bit, so the amount by which the height goes up is not as much. Once you get there [indicates halfway up the spherical part of the bottle], the height increases even slower...So, every time you have to put more and more volume in to get a greater height towards the middle of the bottle and once you get here [pointing to the top of the spherical portion], it would be linear, probably. So, it's always going up [tracing his finger along the concave-down graph], then it would be a line. Student B begins by indicating a trend of change (PMA 2) and by coordinating values (PMA 3) at a physically significant point. The student continues by engaging in a preliminary version of neighborhood analysis (EB ID). Expert-like neighborhood analysis involves identification of physically significant points, and use of PMA 5 in small regions around those points; here, Student B instead considers regions, starting at the bottom of the bottle and moving towards the top, and compares relative rates of change in those regions.
Unlike Ana, Student B does seem to distinguish between average and instantaneous rates of change. Student B associates the shape of their graph with a perceived decreasing rate of change, having to add "more and more volume" for a given change in height. This is suggestive of reasoning about small chunks of change (PMA 5). When reasoning about the cylindrical neck of the bottle, Student B's statement that "it would be linear, probably," is consistent with a compiled model relating cylinders and volume proportional to height. From this review of math students' covariational reasoning, we identify facility with linear functions as a reliable resource. All of the math students discussed here are able to reason using the spectrum of Physics Mental Actions for linear relationships. The calculus students displayed additional emerging resources that are important for covariational reasoning in physics: 1. Reasoning about changing rates of change. While we don't consider reasoning about rates of change for linear functions to be less important than non-linear functions, we do recognize that reasoning about non-constant rates of change is more challenging for students in courses in both mathematics and physics. 2. Interpretation of the slope of a constructed graph as a rate of change, and association of a linear function with a physical context. Both of these lines of reasoning indicate sensemaking about a physical context with mathematics. ### Identifying resources of physics students In this section, we analyze prior work in physics education research through the lens of the CoRP framework. Development of instruction targeted at improving students' physics covariational reasoning requires understanding what types of reasoning are in students' zones of proximal development. Little work has been done in physics education research to study physics students' physics covariational reasoning, and how it may progress with instruction. Therefore, recognizing changes in physics covariational reasoning as a result of physics instruction is more difficult than characterizing the development of covariational reasoning by analyzing math students' reasoning. However, the CoRP framework provides a lens for understanding the existing work in a new light. These analyses also suggest that a focus on students' physics covariational reasoning could be a productive area for future research. In physics, where quantities are often not easily visualized, making sense of the meaning of quantities and their covariational relationships becomes more difficult--and more important to learning. Sherin's symbolic forms frame work describes how physics students understand and construct physics equations [50]. Observations of advanced-introductory physics students informed Sherin's development of the symbolic forms framework, which describes multiple aspects of how students reason about symbolic expressions. Here, we look at some of the students described by Sherin to begin to characterize how introductory physics students reason covariationally when engaged in quantitative modeling using symbolic expressions. We note that Sherin emphasizes in his paper that students often needed prompting from the interviewer (a physics expert) in order to display some of the behaviors that the later categorized as symbolic forms. In one task, Sherin asked students to determine how a dropped object's terminal velocity depended on its mass. 
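For reference in the discussion that follows, a minimal worked version of this task can be set up by assuming a power-law drag force \(F_{\rm air}=bv^{n}\), where \(b\) and \(n\) are placeholder symbols not specified in the interview. At terminal velocity the net force on the object vanishes, so

\[mg=b\,v_{t}^{\,n}\qquad\Rightarrow\qquad v_{t}=\left(\frac{mg}{b}\right)^{1/n}\,,\]

giving, for example, \(v_{t}\propto m\) for linear drag (\(n=1\)) and \(v_{t}\propto\sqrt{m}\) for quadratic drag (\(n=2\)).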
In working on this task, all of the students recognized that the force of air resistance is dependent on the velocity of the object (PMA 1), and that the air resistance increases as the velocity of the object increases (PMA 2). Sherin reports that at least one student reasoned mechanistically about why this must be the case, saying that the force of air resistance "gets greater as the velocity increases because it's hitting more atoms...of air." Some students then engaged in an incomplete version of function choosing (EB IIB), trying to decide the functional relationship between velocity \(v\) and the force of air resistance \(F_{U}\). One student, "Jack," listed a number of possibilities, writing \[F_{u} \propto v\] \[\propto v^{2}\] \[\propto v^{3}\] and then saying: Somehow they're related to the velocity but we're not sure what that relationship is. All we can say is that \(F_{U}\) is either proportional to \(V\) or to \(V\) squared or to \(V\) cubed. While Jack is able to generate a number of possibilities, he is not able to choose one as being the most appropriate for the situation, nor does he state why those are the only possibilities. Using assumptions, facility with common operations (PU IB) and parent functions (PU IC), and familiarity with the physics content to choose an appropriate function for a given context are key components of function choosing. Jack (and others) did not eliminate functions or choose one option as being more reasonable than others. This suggests that function choosing is an emerging, rather than reliable, behavior for these students. Similarly, for a task involving a spring, students "Mike" and "Karl" struggled to remember whether the expression for the force exerted by a spring was \(F=kx\) or \(F=\frac{1}{2}kx^{2}\). Karl: Okay, now, qualitatively, both \(kx\) and half \(kx\) squared do come out to be the same answer because as [pause] looking at it qualitatively, both half--both half \(kx\) squared and \(kx\), um, you know, increase as \(x\) increases. Mike and Karl recognized that the force exerted by the spring depended on the stretching of the spring (PMA 1) and that it would increase as the spring was stretched (PMA 2), but could not use other information to help them decide which was more appropriate. We note, for example, that the students couldn't rely on solid understanding of force as a quantity (PU IIB) to determine which equation could be correct; \(\frac{1}{2}kx^{2}\), the expression for spring potential energy, is necessarily a scalar quantity and could not be the correct expression for a force, a vector quantity. A conceptual understanding of quantity is valuable and relevant for contexts involving more advanced physics. Physics education researchers Van den Eynde, Schermerhorn, Deprez, Goedhart, Thompson, and De Cock describe a case study of second-year physics and math students reasoning covariationally in the context of the heat equation [9]. In the Van den Eynde et al. study, students were prompted to generate graphs of the relationship between heat flow and time based on the information in the task (see Fig. 4). One pair of students,"Evan" and "David," appear to engage in function choosing (EB IIB): Evan: It [the graph] flattens out again, because that, that is just what things do under influence of the heat equation, but how would it look like at the end? Something like, eh, straight line or so maybe? I don't really know...What makes most sense?... 
David: Wait, the end should be kept at 50, so there should be some kind of heat source. Evan: Yes. David: Yes, and if there is a heat source, heat is again flowing over there, to the beginning of the rod (referring to the boundary condition at position 0). But the beginning of the rod is so cold that it stays zero and so it cancels each other out a bit and I think you will just get a straight line (10). Evan: Yes, I also have the feeling it will become a straight line, because ehm, if it is not going to be a straight line, what else?

Figure 4: Graphing task used with upper-division physics students in study by Van den Eynde, et al. The students are working on II.

In this exchange, the students begin by using neighborhood analysis (EB ID) and making sense of the behavior of the graph at the boundaries (PMA 5). They reason about the rate of change by discussing the physical nature of the quantities. They then connect the points by choosing a function based on the trend of the quantities and the physical context (function choosing, EB IIB). The students agree that a linear relationship is likely appropriate here. Like the math students discussed above, a linear relationship between quantities is a default assumption for these physics students. However, the physics students are continually relating their representation to the physical quantities and context. Later in their discussion, Evan continues to argue for why the function should be linear, based on the physical properties of the rod: Evan: Cause, yeah, if it wouldn't be a straight line, then it should be something else and I cannot imagine what it would, would be then. Because the rod in its whole is heat conductive in the same way everywhere. So, if this side is held perfectly at zero and that side is perfectly held at 50 (adds "0" to the starting point of the three graphs and "50" to the end point of the three graphs) and everything in between is heat conductive in the same way, it doesn't seem to me that it would be something else than a straight line... We note that the students here exhibit journeyman expertise--that of one between a novice and a true expert [69]--in rejecting non-linear models based on physical sense-making with quantity. Analysis of these results from physics education research for both introductory and upper-division students through the lens of the CoRP framework reveals the importance of mathematization and understanding of quantity for productive physics covariational reasoning. At the introductory level, students seem to display reasoning that is evidence of understanding of physics quantities as an emerging resource, reasoning about some--but not all--quantities across multiple contexts. We suggest that encouraging sensemaking about quantities could support development of physics covariational reasoning at this stage. The analysis of the upper-division students' work using the CoRP framework illuminates the importance of understanding of quantity as a foundation of physics covariational reasoning in more sophisticated contexts: the upper-division students' understanding of the physics context and relevant quantities allows the students to productively engage in the expert behavior of Function Choosing. In summary, in this section we have demonstrated how the CoRP framework facilitates recognizing several emerging resources that are important for helping calculus and physics students develop their mathematical reasoning in physics: 1.
Introductory students recognize that covarying quantities are related by function; an understanding of the quantities is helpful for reasoning about the functions involved. Students' familiarity with linear functions could help in developing reasoning skills about other functions, which are more challenging at this level. 2. Upper-division students are familiar with more physics quantities, and have a deeper understanding of some. Relying on student understanding of quantity might help further their skills at this level to choose between different symbolic models--both linear and non-linear. ## V Conclusion Physics covariational reasoning plays a central role in expert-like quantitative modeling and is a key aspect of what it means to "think like a physicist." Because of its focus on quantitative modeling, introductory-level physics provides a unique opportunity for the development of physics covariational reasoning for a large population of students. Therefore, reliable covariational reasoning with physical quantities is a desirable student learning outcome of introductory physics courses. This is especially true because quantitative reasoning developed by instruction in physics is transferable to "real-world" contexts, and provides a foundation for scientific literacy more generally. However, it can be challenging for instructors to recognize the ways in which their own reasoning patterns differ from the focus of the prerequisite mathematics their students have taken. The differences in mathematics and physics covariational reasoning was one significant motivation for the development of the CoRP framework. The use of covariational reasoning in physics contexts has not before been operationalized. The CoRP framework provides one operationalization. We expect that it will help instructors recognize the role (and importance) of covariational reasoning in quantitative modeling in physics, and better address its development as an important student learning outcome. In this paper, we have described three ways that the CoRP framework can be used to achieve this goal. First, the CoRP framework can make clear the ways in which physics covariational reasoning is distinct from the covariational reasoning described by mathematics education researchers and taught in mathematics courses. Frameworks developed by the mathematics education research community focus largely on direct consideration of how changes in one variable results in changes to another. While consideration of changing quantities plays an important role in physics covariational reasoning, the interconnected structure and facets of the CoRP framework attend to research that demonstrates the inherently blended nature of mathematical and physical reasoning [9; 70; 39; 71]. The foundation of physics covariational reasoning is the Conceptual Understanding of the underlying mathematics and physics mathematization. The Physics Mental Actions described in the CoRP framework are similar to the mental actions described by Carlson et al. [1], but also involve quantities, rather than variables which may be free of physical context. We argue that consideration of how quantities change with respect to each other cannot happen effectively without understanding of the quantities themselves. The expert behaviors in the CoRP framework also rely on physics content knowledge, and often guide use of the Physics Mental Actions. 
Students may come into introductory physics courses with experience with covariational reasoning in math contexts, but as shown in section IV, this does not guarantee facility with physics covariational reasoning. The characterization of physics covariational reasoning provided by the CoRP framework provides guidance for leveraging the experience that students have from mathematics courses. Familiarity with how covariational reasoning is used in physics, as described in the framework, can help instructors meet their students where they are, leading to more productive reasoning and quantitative modeling in physics contexts. Second, the CoRP framework can be used to analyze student reasoning in a variety of contexts and in a number of ways; here, we discuss two such ways. First, analysis of introductory student covariational reasoning can help instructors track changes in students' covariational reasoning that occur with instruction in physics, and can inform assessment, as it explicates facets of covariational reasoning. This allows covariational reasoning to be a truly _assessable_ learning objective of introductory physics courses. Also, analysis of student reasoning can aid education researchers and curriculum developers. As demonstrated in Section IV, analysis of students' reasoning through the lens of the CoRP framework allows for the identification of expert-like and proto-expert-like covariational reasoning, as well as aspects of covariational reasoning in students' zones of proximal development. Finally, the CoRP framework can guide the development of instructional interventions. The framework identifies perceptual understanding of the foundational mathematics and physics mathematization as a basis of physics covariational reasoning. Covariational reasoning in physics is not simply doing math with physics quantities--it requires deep understanding and facility with both the mathematics and the quantities themselves. By identifying the foundations of physics covariational reasoning, the CoRP framework provides a way to determine "essential skills" [72] that can be targeted with interventions or instruction. For example, while students may come into physics courses able to produce or interpret graphs in purely mathematical contexts, they may lack facility with the physics knowledge embedded in a graph of physics quantities. Being able to interpret the meaning of a graphical feature such as a slope or an area under a curve, or identify a quantity associated with a graphical feature, may aid physics learners in understanding graphical representations. Student difficulties with mathematics in a physics context have long been viewed as a problem of mathematical under-preparedness. While lack of adequate practice with algebraic manipulations characterizes some students' difficulties, physics has its own work to do in helping its students learn to reason mathematically in a physics context. The work described in this paper is situated in ongoing efforts to help build physics quantitative literacy for all physics students. We have developed the CoRP framework to help support the research and instructional communities in physics gain new knowledge and develop instructional interventions. We suggest the CoRP framework can help scaffold physics students' understanding of mathematical models through their development of covariational reasoning. ###### Acknowledgements. This work is supported by the National Science Foundation under grants No. DUE-1832836, DUE-1832880, DUE-1833050, DGE-1762114. 
The work described in this paper was performed while the first author held an NRC Research Associateship award at Air Force Research Laboratory.
2305.02991
Notes on Refactoring Exponential Macros in Common Lisp
I recently consulted for a very big Common Lisp project having more than one million lines of code (including comments). Let's call it "System X" in the following. System X suffered from extremely long compilation times; i.e., a full recompile took about 33:17 minutes on a 3.1 GHz MacBook Pro Intel Core i7 with SSD and 16 GBs of RAM, using ACL 10.1. It turns out that a number of macros were causing an exponential code blowup. With these macros refactored, the system then recompiled in 5:30 minutes - a speedup by a factor of ~ 6. In this experience report, I will first illuminate the problem, and then demonstrate two potential solutions in terms of macro refactoring techniques. These techniques can be applied in related scenarios.
Michael Wessel
2023-05-04T16:58:02Z
http://arxiv.org/abs/2305.02991v1
# Notes on Refactoring Exponential Macros in Common Lisp

Or: Multiple ,@Body Considered Harmful

###### Abstract.

I recently consulted for a very big Common Lisp project having more than one million lines of code (including comments). Let's call it "System X" in the following. System X suffered from extremely long compilation times; i.e., a full recompile took about 33:17 minutes on a 3.1 GHz MacBook Pro Intel Core i7 with SSD and 16 GBs of RAM, using ACL 10.1. It turns out that a number of macros were causing an exponential code blowup. With these macros refactored, the system then recompiled in 5:30 minutes \(-\) a speedup by a factor of \(\approx 6\). In this experience report, I will first illuminate the problem, and then demonstrate two potential solutions in terms of macro refactoring techniques. These techniques can be applied in related scenarios.

Common Lisp, Macros, Exponential Code Blowup, Macro Refactoring, Very Large Lisp Systems

## 1. Introduction

Macros and the ability to program language extensions in the language _itself_ is one of the most beloved and powerful features of many members of the Lisp family, and especially in Common Lisp (Wessel et al., 2015), which has been coined a _programmable programming language_ by John Foderaro. The availability of the full programming language at macro expansion / compile time makes Common Lisp an ideal implementation platform for Domain Specific Languages (Foderaro, 2017; D'Alessio et al., 2017), and always has been (i.e., Lisp was an early target platform for object-oriented programming concepts (D'Alessio et al., 2017)). Unlike macros in most other programming languages, Common Lisp allows macros to be defined in the same language. Thanks to its _homoiconicity_, it offers a unified "programs as data" representation and allows the construction, manipulation, and most importantly, _computation_ of macro expansions in the language _itself_. The full power of the language is always available - not only at runtime, but also at _macro-expansion ("compile") time_ (Brandt et al., 2017; D'Alessio et al., 2017; D'Alessio et al., 2017). As always, with great power comes great responsibility: macros can be a double-edged sword. This is especially true in languages like Common Lisp, where the main development mode is not the traditional "edit \(-\) full recompile \(-\) debug" cycle, but an interactive, dynamic one, based on incremental redefinition, evaluation, and compilation. Unintended consequences of changes to the code base, especially macros, can sometimes be left unnoticed for a longer time period if full recompiles of the system are delayed. This holds true especially in larger projects with bigger teams. Once compilation times exceed half an hour, full recompilation is avoided by the developers during daily development, and a build system will usually be entrusted to deliver new base images overnight, containing the changes of multiple developers. Of course, regressions will be recorded and monitored on a daily basis. But even if _build times_ and _the size of the fast load (FASL) files_ are reported by the build system, it might not be entirely clear which changes increased the build time \(-\) after all, the build system might just have had a bad night and was busy performing backups as well, and so on and so forth. Consequently, tracing back unintended system behavior to (no longer so recent) changes to the code base can become more difficult.
For this reason, incremental compilation of Common Lisp code can become a drawback. I advise that developers should not only check for unintended changes in the semantics and functional characteristics of the system caused by their code changes, but also in its non-functional characteristics (e.g., FASL sizes and build time). And especially for macros. I recently had the opportunity to work on "System X", which is a very large, multiple decades in-the-making Common Lisp system with over one million lines of code. System X suffered from extremely long compilation times: a full compile required 33:17 minutes on a 3.1 GHz MacBook Pro Intel Core i7 using an SSD and 16 GBs of RAM, using ACL 10.1. It turned out that _three badly engineered macros_ were causing an exponential code blowup. With these macros refactored, a full recompile is possible in 5:30 minutes \(-\) a speedup by a factor of \(\approx 6\). In this experience report, I will first illuminate the problem, and then demonstrate two possible solutions in terms of macro refactorings. The effectiveness of the refactoring methods is not only demonstrated by the 6-fold reduction in compilation time, but also in terms of FASL size reductions. The used methodology can be applied in related scenarios. I conclude with some advice.

## 2. The Problem

The problems of System X are easily illustrated with a few synthetic examples. Consider the context-establishing with-bad macro in Figure 1. Like many with- macros, it utilizes a _special variable_ (here, *a*) with dynamic scope to control the runtime behavior beyond its lexical scope. In the example, the binding of the dynamic variable *a* determines whether the with-bad macro's expansion prints x or (1+ x). This macro serves to illustrate the problem of _exponential macro expansion_. Frequently, with- macros are nested, which can obscure the magnitude of such problems from developers. For example, the macro might be part of a framework for website HTML generation and, as such, contain macros such as with-head, with-body, with-table, and so on. Not only will complex web pages contain many deeply nested occurrences of these macros, but it might also be the case that certain common design elements of such pages (common headers, footers, and navigation menus) have been aggregated into even higher-level macros, which are then being used in other macros, and so forth. It should be noted that Common Lisp does not contain a macroexpand-all recursive macroexpansion facility; only macroexpand-1 is offered. This provides a single level of macro expansion. However, third-party solutions are available. I used one of these packages to diagnose the problems in System X [1, 2]. Considering the macroexpansion of blowup containing four nested with-bad occurrences in Figure 1, we can clearly see that it is exponential in the size of the original definition, due to the duplicated ,@body forms. In general, given a nesting depth of \(n\), the size of the expanded macro code is \(2^{n}\). I even spotted with- macros with more than two ,@body forms "in the wild"; in general, a with- macro with \(m\) ,@body forms will expand to size \(m^{n}\) if nested \(n\) times. This should clearly be avoided. Sometimes, such an _exponential macro_ is easy to fix. In the case of Figure 1, it suffices to move the ,@body form into a local function definition (flet) and call the local function in the two places at runtime rather than duplicating the code.
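To make the pattern concrete, the following is a minimal sketch in the spirit of Figures 1 and 2 (which are not quoted verbatim here); the printing behavior and the special variable *a* follow the description above, while the exact listings in the figures may differ in detail.

```lisp
(defvar *a* nil)

;; Anti-pattern (cf. Figure 1): ,@body is spliced into both branches,
;; so n nested uses of WITH-BAD expand to 2^n copies of the body.
(defmacro with-bad ((x) &body body)
  `(if *a*
       (progn (print (1+ ,x)) ,@body)
       (progn (print ,x) ,@body)))

;; Fix (cf. Figure 2): a single ,@body inside a local function that is
;; called from both branches -- the expansion is now linear in the
;; nesting depth.
(defmacro with-good ((x) &body body)
  (let ((fn (gensym)))
    `(flet ((,fn () ,@body))
       (if *a*
           (progn (print (1+ ,x)) (,fn))
           (progn (print ,x) (,fn))))))

;; Four nested WITH-BAD forms duplicate the innermost body 2^4 = 16 times;
;; with WITH-GOOD, each level contributes only one copy.
(defmacro blowup ()
  '(with-bad (1) (with-bad (2) (with-bad (3) (with-bad (4) (princ "!"))))))
```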
The expansion size of the resulting with-good macro shown in Figure 2 is now linear in the nesting depth rather than exponential. Unfortunately, exponential macros are not always easily fixed. For example, say the macro argument x in with-good was used to establish lexical bindings for use _within_ the ,@body instead of just being an "input parameter" to the macro. In this case, a (let ((,x ...)) ... ,@body) would be used within the macro to establish a corresponding lexical scope for ,x. Moreover, the concrete binding to ,x might depend on complex runtime and compile time conditions. In particular, the value of ,x might depend on the runtime value of *a*, which is unknown at compile time / macroexpansion time, and hence, cannot be anticipated by means of code rewritings / transformations. It is thus important that the correct lexical contexts are established, for example, via the local function's lambda list. Of such "more difficult" nature was the exponential macro that I had to refactor in System X. Instead of revealing the details of this macro I will use the synthetic example from Figure 3 in the following. This macro has a similar complexity and serves to illustrate the problems and possible solutions.

Figure 1. Macro with exponential macro expansion

Figure 2. Macro with linear macro expansion

The idea behind with-bad-recording is to establish a context of dynamic scope for keeping track of "instructions" that are being recorded onto a stack; these instructions can be entries to a log file, an output recording presentation history, etc. The "hidden" special variable *recording-stack* (with dynamic scope) is used to keep track of the values on the stack. This special variable is not supposed to be visible to the user's code (it is "internal"); instead, accessor functions (or macros) such as (do-something-and-record x) are used to work with it. Moreover, for whatever reason, clients of with-bad-recording also like to know whether the current invocation is toplevel, or already part of a "nested" invocation at runtime; hence, a variable nested-p can be passed in which is then bound to nil or t, respectively. To decide this, another special variable *within-recording* had been (maybe redundantly) introduced by the original author of the macro. Again, this is an "internal" special variable which should not be visible to the user code, hence, nested-p is supplied. A use case is shown in test-bad-recording. Note that the runtime value of n-p (i.e., nested-p in the macro) is not knowable from the lexical definition, as test-bad-recording might occur nested within another with-bad-recording context at runtime. The use case shows that the stack holds (3 2 1) in the end. Clearly, this macro now has the potential for an exponential macro expansion, and test-bad-recording already suffers from this blowup. Can we fix this macro in the same way as in Figure 2?

## 3. First solution - refactoring with FLET

The first solution is a generalization of the idea we already discussed in Section 2 -- we replace each ,@body by a call to a single local function containing a single ,@body. We can use FLET or LABELS for that. A naive attempt at fixing with-bad-recording is shown in Figure 4. This macro is now clearly broken, as ,@body refers to ,nested-p = n-p, which is not visible in the outer flet -- hence the compiler warning that this variable is now unbound.
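A sketch of this kind of naive refactoring is given below; the name with-recording-naive is hypothetical and the listing is only meant to mirror what Figure 4 describes, assuming *recording-stack*, *within-recording*, and process-recordings are defined as above. The single ,@body now lives inside the local function, but the ,nested-p bindings established in the branches are not lexically visible there, which is exactly what the compiler warns about.

```lisp
;; Hypothetical sketch in the spirit of Figure 4: only one ,@body remains,
;; but ,nested-p is bound around the call sites, not around the body.
(defmacro with-recording-naive ((nested-p) &body body)
  (let ((fn (gensym)))
    `(flet ((,fn () ,@body))          ; ,@body cannot see ,nested-p here
       (if *within-recording*
           (let ((,nested-p t))       ; binding is outside the FLET body,
             (,fn))                   ; so it does not help
           (let* ((*recording-stack* nil)
                  (*within-recording* t)
                  (,nested-p nil))
             (,fn)
             (process-recordings))))))
```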
The obvious solution is hence to make ,nested-p = n-p an argument of the local function so that the required lexical variables for ,@body are established by the local function. This is shown in Figure 5. A further complication is introduced if the lexical variable is modified in one of the branches -- consider the variation with-bad-recording-v2 shown in Figure 6, where nested-p is replaced by control-p. The value of control-p influences the output, and it might be set from either within the user-supplied ,@body code, or from within the macro itself. Refactoring such a macro then becomes less mechanical, and more care is needed to ensure that the right lexical environments are established.

Figure 4. Replacing multiple ,@body with local function calls broke the macro -- see the compiler warning

Figure 3. The anti-pattern of an exponential with- macro as found in System X

    (defmacro with-recording-v2 ((nested-p) &body body)
      (let ((fn (gensym)))
        `(flet ((,fn (,nested-p)
                  (declare (ignorable ,nested-p))
                  ,@body))
           (if *within-recording*
               (,fn t)
               (let* ((*recording-stack* nil)
                      (*within-recording* t))
                 (,fn nil)
                 (process-recordings))))))

    (defmacro test-recording-v2 ()
      '(with-recording-v2 (n-p)
         (if n-p
             (do-something-and-record 11)
             (do-something-and-record 1))
         (with-recording-v2 (n-p)
           (do-something-and-record 2)
           (with-recording-v2 (n-p)
             (if n-p
                 (do-something-and-record 3)
                 (do-something-and-record 33))))))

    (test-recording-v2)

(defmacro with-recording-v3 ((control-p) &body body) (let ((fn (gensym)) `(flet ((,fn (,control-p branch-p) ,@body) (let* ((*recording-stack* nil) (*within-recording* t) (control-p nil)) (declare (ignorable ,control-p)) ,@body (declare (ignorable ,control-p))) ,@body (let* ((*recording-stack* nil) (*within-recording* t) (control-p nil)) (declare (ignorable ,control-p)) ,@body (process-recordings-v2 ,control-p)))) (defmacro test-bad-recording-v2 () '(with-bad-recording-v2 (control-p) (do-something-and-record 1) (with-bad-recording-v2 (control-p) (do-something-and-record 2) (with-bad-recording-v2 (control-p) (do-something-and-record 3))) (setf control-p nil))) ;; nil -> :foobar, t -> (3 2 1) (test-bad-recording-v2)
In particular, we maintain the branching structure of the original macro in order to establish the right bindings for the special variables, and to set up the correct lexical contexts by calling the local function accordingly. A potential drawback of the refactoring pattern just discussed is the introduction of additional local functions and the additional runtime overhead of additional function calls.2 Footnote 2: It might be possible to declare these local functions as inline though. More severely, the (full) macro expansion of the refactored macro now obfuscates the original structure of the macro -- it is "inside out" because we employed _functional composition_ to implement _program sequencing_; as can be seen in Figure 2, (princ 6) now textually precedes (princ 3), contrary to the original definition. In particular, the,$body's of the local functions are now "detached" from the original branching structure, making the macro expansion more difficult to understand. If these are serious concerns, the following alternative refactoring strategy can be applied. Figure 5. Using local function variables to establish lexical context — note that the dynamic context is still established by the original branching structure Figure 6. If lexical variables that are arguments to the macro are modified, such as control-p, then refactoring becomes more involved ## 4. Second solution - refactoring with progv In the following, we are _not_ using a local function that can be called from different branches of the rewritten macro. Instead we are trying to unify the original branching structure establishing different lexical and dynamic contexts into _one_ common structure. It turns out that establishing the right (conditional) bindings for the special variables is the biggest obstacle, and we will be using progv for this purpose. The following set of steps can be understood as a semantics-preserving code rewriting procedure / transformer. We will apply the following to the with-bad-recording macro to tame the exponential beast and rewrite it into a linear macro: **Step 1**: Macroexpand/ rewrite all branching special forms (unless, when, cond,...) into ifs (in our examples, this is already the case, so the step doesn't apply): (if <condition> (let ( <binding 11>... <binding 1n> ) ,@body) (let ( <binding 21>... <binding 2m> ) ,@body)) **Step 2**: Ensure that all lets in all branches refer to the same variables, and in the same order. If <binding ij> = (,var val) and var is a macro argument, then all branches already must contain a valid (,var val) binding. Otherwise, var would be unbound in (some branches of),@body (e.g., the macro was already defective in the first place). If var is a special variable instead, i.e., *var*, then, in case the branch _did not_ contain a <binding> = (*var* val), we introduce a "dummy" binding <binding> = (*var* *var*) for now. The idea is to express that we _intend_ to _not_ alter the binding of *var* dynamically. Note that this is _unproblematic_ where *var* is used as a "read only" variable, but problematic in cases such as with-bad-recording, where *recording- stack* is modified; see below for the solution. Hence, we now have the same number \(k\) of (var val) bindings in each let, with potentially (not necessarily) different val's; note that \(max(n,m)\leq k\leq n+m\): (if <condition> (let ( (<var! val11>)... (<vark val1k>) ) ,@body) (let ( ( (<var! val21>)... 
(<vark val2k>) ) ,@body)) **Step 3**: Next, we remove the different branches, establish all the bindings in a single let, and recover the effects of the <condition> by establishing different bindings within the let binding forms itself. Since we removed the different branches from the surrounding code by factoring in / moving the condition _into_ the let lambda lists, we have also eliminated the multiple ,@body occurrences: (let ( ( <var!> (if <condition> <val11> <val21>) ) ... ( <vark> (if <condition> <val1k> <val2k>) )) ,@body) **Step 4**: So far so good -- there is one problem though: this only works for dynamically scoped variables that are used in a "read only" fashion. As already mentioned, we have introduced a "dummy" binding <binding> = ( *recording-stack* *recording-stack* ) to express that we wish to leave the binding of *recording-stack* _untouched_. But we changed it by establishing a new binding frame -- we "shadowed" the previously established binding. With let/let*, there is no solution to this. The effect is illustrated in Figure 8 -- the refactored macro is clearly broken now, as illustrated with the example call (test-recording-v4). Instead of returning (3 2 1) like in the original, we are now only getting the first value that was pushed onto the stack: (1).

Figure 8. Shadowing *recording-stack* broke the macro

However, the code rewritings have brought us onto the right track. We only need to avoid shadowing in cases where we do not wish to alter a dynamic variable. Fortunately, there is a solution to this in Common Lisp, and one has to congratulate the designers of Common Lisp for anticipating such a scenario: progv can do the job as follows: (progv (when alters-*var* (list '*var*)) (list val) ...). If alters-*var* = T, the form is equivalent to (progv '(*var*) (list val) ...), hence establishing a new binding for *var*. Otherwise, if alters-*var* = NIL, then the form is equivalent to (progv nil (list val) ...), leaving *var* unchanged. Hence, the final step involves "splitting up" the single let (or let*), and reestablishing the "problematic" special bindings via progv instead, in the manner just described. Since this step is hard to templatize, let's look at the final rewritten example macro in Figure 9 instead. As can be seen from the test invocation, it behaves correctly, and its expansion is clearly linear. Even though context-establishing macros are usually not used for their return values, it is nevertheless advisable to accommodate for such, and so we did in Figure 9. Inspecting all use cases of the macro in the source code of a very large system such as System X to identify such use cases is more time consuming than to cater for such cases correctly from the beginning. Hence, the rewritten macro in Figure 9 also returns the same values as the original (utilizing multiple-value-list and values).

## 5. Effectiveness of the Techniques

We counted the number of macro function invocations ("macro expansion calls") that occurred during a full recompilation of System X and compared the results between the original and the PROGV-refactored versions.3 Footnote 3: Obviously, the FLET-based technique will yield the same results. For the original version, we counted \(12816+2431+2432=17679\) calls for our three critical exponential macros, compared to \(882+530+531=1466\) invocations for the refactored version, a ratio of \(17679/1466=12.05\).
Referring to the \(m^{n}\) notation from Section 2, we have \(m=2\) (two ,@body's), and can hence assume an average nesting depth of about \(\log_{2}12.05\approx 3.6=n\). The biggest FASL size reduction was observed for a file that shrank from 53 MBs to only 2 MBs -- a factor of 26.5! Since \(\log_{2}26.5\approx 4.7\) we can assume a more deeply nested use of the exponential macros there. These are rough estimates, but the numbers speak a clear language: for large Lisp systems, the impact of even moderately deeply nested (i.e., \(3\leq n\leq 5\)) exponential macros can be catastrophic in terms of compilation time and FASL sizes.

## 6. Limitations of the Techniques

Whereas these refactoring patterns should cover a large region of exponential with- cases in practice, they are far from offering a complete solution. An example of a macro that _cannot_ be refactored with the so-far discussed techniques is shown in Figure 10. Essentially, the problem is that the _else_ branch establishes a lexical context for ,control-p, but the _then_ branch doesn't, and that it is impossible to know at compile time which branch will be active. A solution akin to progv would be needed, but for _lexical_ variables.

Figure 10. A macro that cannot be refactored using the discussed techniques

Figure 9. The rewritten linear with- macro -- thanks to progv

## 7. Conclusion

I presented techniques for refactoring exponential macros into linear ones. From my experience with System X, I learned that three (not overly carefully designed) macros can suffice to severely (i.e., exponentially) affect compilation time and FASL size. It is not entirely fair to blame the original designers of these macros for causing so much trouble in the later life of System X. Each project starts small, and the macros were originally doing fine. Only later in System X's life cycle did the effects of exponential macro expansion degrade its compilation time (and corresponding FASL file sizes) drastically. Incremental development and compilation, overnight build systems, multiple developers, and a focus on _functional_ rather than _non-functional_ system characteristics were all factors that contributed to code that grew like a malignant cancer. The power of Common Lisp macros can be a double-edged sword and needs to be handled with diligence and delicacy. Fortunately, Common Lisp is also powerful enough to offer a cure to these problems, as we tried to illustrate in this Experience Report. We hope that our experience will help other developers to avoid such situations in their own projects. Could some of this rewriting process be automated? For sure, compilers could warn about potentially expensive macro expansions, or try to identify exponential expansion. A macroexpand-all as part of a Common Lisp IDE would certainly help as well. Interestingly, the code rewriting techniques described in Sections 3 and 4 seem straightforward enough that it _might be possible_ to automate them, at least for certain macro patterns (but the general problem might be undecidable). This could be interesting future research, and I would appreciate any pointers and feedback from the Lisp community -- surely, this problem is not new, yet I wasn't able to find papers that would cover this topic. I hope that this report will fill this gap, and also raise awareness among Common Lisp developers of such issues.
## Acknowledgments

First and foremost, I would like to thank the Principal Investigator (PI) of Project X for allowing me to consult for System X, and for triggering the investigation of long compilation times in System X. This PI also read a Draft and suggested substantiating the findings with hard numerical evidence. This led to the addition of Section 5. Next, I wish to thank the anonymous reviewers for pointing out omissions in the original Draft that made the transition from the local function call-based solution (as illustrated with the with-good macro) to the progv-based solution seem unmotivated. As pointed out by the reviewers, the original Draft did mention the technique, but erroneously gave the impression that it could not be applied to accommodate for lexical variables. This is now demonstrated in Section 3, which was missing from the original Draft. I would like to thank all reviewers for their comments and time. In my humble opinion, both refactoring methods have their own strengths and weaknesses, and I mentioned some of them at the end of Section 3. This work was funded by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under award number R01AI160719. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The NIH did not play any role in the design of the study; nor in collection, analysis, or interpretation of data; nor in writing the manuscript.
2302.07696
The effect of weak magnetic photon emission from quark-gluon plasma
We propose a novel effect that accounts for the photon emission from a quark-gluon plasma in the presence of a weak external magnetic field. Although the weak magnetic photon emission from quark-gluon plasma only leads to a small correction to the photon production rate, the induced photon spectrum can be highly azimuthally anisotropic, as a consequence of the coupled effect of the magnetic field and the longitudinal dynamics in the background medium. With respect to a realistic medium evolution containing a tilted fireball configuration, the direct photon elliptic flow from experiments is reproduced. In comparison to the experimental data of direct photon elliptic flow, in heavy-ion collisions the magnitude of the magnetic field before 1 fm/c can be extracted. For the top energy of RHIC collisions, right after the pre-equilibrium evolution, $|eB|$ is found no larger than a few percent of the pion mass square.
Jing-An Sun, Li Yan
2023-02-15T14:47:32Z
http://arxiv.org/abs/2302.07696v3
# The effect of weak magnetic photon emission from quark-gluon plasma ###### Abstract We propose a novel effect that accounts for the photon emission from a quark-gluon plasma in the presence of a weak external magnetic field. Although the weak magnetic photon emission from quark-gluon plasma only leads to a small correction to the photon production rate, the induced photon spectrum can be highly azimuthally anisotropic, as a consequence of the coupled effect of the magnetic field and the longitudinal dynamics in the background medium. With respect to a realistic medium evolution containing a tilted fireball configuration, the direct photon elliptic flow from experiments is reproduced. In comparison to the experimental data of direct photon elliptic flow, in heavy-ion collisions the magnitude of the magnetic field before 1 fm/c can be extracted. For the top energy of RHIC collisions, right after the pre-equilibrium evolution, \(|eB|\) is found no larger than a few percent of the pion mass square. **Introduction.--** The nature of high temperature quantum chromodynamics (QCD) is the major focus for the high-energy heavy-ion experiments carried out at Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC). In these facilities, quark-gluon plasma (QGP), a fluid with color degrees of freedom, has been created [1; 2]. Dynamical properties of QGP have been well studied in terms of the observed spectra of various particles. Quite remarkably, at the top energies of RHIC and the LHC, a large number of hadron observables were found compatible with the theoretical modeling of QGP using viscous hydrodynamics, even at a high precision level [3; 4]. Albeit its extreme success, hydrodynamical modeling cannot describe photon productions from heavy-ion experiments [5; 6; 7]. At the top RHIC energies, in the low \(p_{T}\) region, experimentally measured direct photon yields (_i.e._, photon yields excluding those from hadron decays) exceed the current theoretical predictions [8]. More importantly, in experiments the spectrum of direct photons can be as azimuthally anisotropic as pions, with in particular a large elliptic flow \(v_{2}^{2}\)[9; 10]. From the theoretical modeling, however, direct photons are expected more isotropic [11; 12; 13; 14]. This is a consequence that photon radiations from QGP are dominantly from the early stages [11], during which momentum anisotropy has not been fully developed. The discrepancy in both yields and elliptic flow is often referred to as the "direct photon puzzle" (cf. [5; 6]). In theoretical models, to incorporate a significant emission anisotropy for the direct photons is challenging. There have been many attempts. One such type of theories relies on the mechanism of increasing photon radiations from the later stages of the QGP system [13; 15]. The presence of an external magnetic field, on the other hand, provides an alternative solution. In high-energy heavy-ion collisions, as a consequence of the relativistic motion of ions, magnetic fields are generated with extremely strong field strength [16; 17; 18; 19], with \(|eB|/m_{\pi}^{2}\) reaches \(O(10)\) at the top energies of RHIC and \(O(10^{2})\) at the LHC, where \(m_{\pi}\) is the pion mass. 
Although the influence of strong magnetic fields has already driven a number of physical predictions of great interest [20; 21; 22], magnetic fields are expected to be weak by the time the system starts to evolve hydrodynamically, because the pre-equilibrium stage of the QGP, in which the magnetic field decays most drastically, is hardly conducting. For instance, at around 0.4 fm/c and in the center of the fireball, the residual strength of the magnetic field can drop to \(|eB|/m_{\pi}^{2}\approx 0.01\) in a non-central AuAu collision at the top RHIC energy. Nonetheless, after the pre-equilibrium stage, the detailed evolution of the magnetic fields in the QGP remains undetermined, owing to the lack of knowledge of the electrical properties of the QGP medium [23; 24; 25; 26; 27; 28; 29]. Regarding photon production, the strong magnetic field assumption has been considered [30; 31; 32; 33; 34], and it indeed gives rise to anisotropic emission. For instance, the synchrotron radiation induced by a strong magnetic field naturally presents an elliptic mode [33]. Note that the strong magnetic field assumption would modify the theoretical description dramatically. In particular, when \(|eB|\gg T^{2}\), magnetohydrodynamics should be taken into account, while when \(\sqrt{|eB|}\gg gT\), the effect of the magnetic field cannot be neglected in quark scatterings. With respect to the realistic QGP system, these conditions lead to a rough criterion: \(|eB|/m_{\pi}^{2}\sim O(1)\). In this Letter, we focus on the hydrodynamic stage of the QGP evolution, during which only a weak external magnetic field, \(|eB|/m_{\pi}^{2}\ll 1\), remains along with the medium. In this weak-field scenario, the bulk part of the hydrodynamical modeling is not affected, whereas photon production in the QGP receives a small correction due to the magnetic field. This small correction, which we refer to as the effect of weak magnetic photon emission, results in a large anisotropy in the direct photon spectrum. **Weak magnetic photon emission.--** Photons radiated from a thermalized QGP can be produced by \(2\to 2\) scattering processes among quarks and gluons (\(1+2\to 3+\gamma\)) [35]. In a kinetic theory approach, the production rate is [13; 14] \[\mathcal{R}^{\gamma} = \frac{1}{2(2\pi)^{3}}\sum_{i}\int\frac{d^{3}\mathbf{p}_{1}}{2E_{1}(2\pi)^{3}}\frac{d^{3}\mathbf{p}_{2}}{2E_{2}(2\pi)^{3}}\frac{d^{3}\mathbf{p}_{3}}{2E_{3}(2\pi)^{3}} \tag{1}\] \[\quad\times(2\pi)^{4}\delta^{4}(P_{1}+P_{2}-P_{3}-P)|\mathcal{M}_{i}|^{2}\] \[\quad\times f_{1}(P_{1})f_{2}(P_{2})[1\pm f_{3}(P_{3})]\] \[\approx \frac{40\alpha\alpha_{s}}{9\pi^{2}}\mathcal{L}f_{q}(P)I_{c}\,,\] where the summation is over the Compton and the quark-antiquark annihilation channels with respect to the scattering amplitudes \(|\mathcal{M}_{i}|^{2}\), and \(f_{1}\), \(f_{2}\) and \(f_{3}\) are the distribution functions of the quarks and gluons, correspondingly. The last expression in Eq. (1) gives the rate in the small angle approximation [36], with \(\mathcal{L}\) a Coulomb logarithm, and \(I_{c}=\int d^{3}\mathbf{p}/(2\pi)^{3}[f_{g}+f_{q}]/p\) effectively characterizing the conversion between a quark-antiquark pair and a gluon in the thermalized QGP [37].
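As a quick numerical cross-check of the small-angle expression, the conversion integral \(I_{c}\) can be evaluated directly. The sketch below assumes a single massless bosonic and fermionic degree of freedom in equilibrium at zero chemical potential (an illustrative assumption; flavor and degeneracy factors are carried by the prefactor in Eq. (1)), for which the quadrature reproduces the analytic value \(T^{2}/8\).

```python
import numpy as np
from scipy.integrate import quad

T = 0.3  # GeV, an illustrative temperature

# I_c = \int d^3p/(2*pi)^3 [f_g + f_q]/p for one massless Bose and one Fermi
# degree of freedom at zero chemical potential (illustrative assumption)
integrand = lambda p: p / (2.0 * np.pi**2) * (1.0 / np.expm1(p / T)
                                              + 1.0 / (np.exp(p / T) + 1.0))
I_c, _ = quad(integrand, 1e-9, 50.0 * T)

print(I_c, T**2 / 8.0)  # both approximately 0.01125 GeV^2
```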
In previous studies based on hydrodynamics, dissipative effects in the medium have been taken into account in the photon production [14]. These effects are introduced via viscous corrections to the quark and gluon distribution functions, \(\bar{f}=n_{\rm eq}+\delta f\), where \(n_{\rm eq}\) is the equilibrium distribution and the correction \(\delta f\) is linear in the shear or bulk viscosity. Analogously, a weak external electromagnetic field induces an additional correction to the quark distribution function, \(f_{q}=\bar{f}_{q}+f_{\rm EM}=n_{\rm eq}+\delta f_{q}+f_{\rm EM}\). At leading order in \(|eB|/T^{2}\), from a straightforward derivation in kinetic theory, one finds \[f_{\rm EM}=\frac{c}{8\alpha_{\rm EM}}\frac{\sigma_{\rm el}n_{\rm eq}(1-n_{\rm eq})}{T^{3}p\cdot u}eQ_{f}F^{\mu\nu}p_{\mu}u_{\nu}\,, \tag{2}\] where \(eQ_{f}\) indicates the electrical charge of the corresponding quark, \(\sigma_{\rm el}\) is the electrical conductivity and \(u_{\nu}\) is the flow four-velocity. Although Eq. (2) applies more generally to weak electric and magnetic fields, in this Letter we only focus on the magnetic field components, \(B_{i}=\epsilon_{ijk}F^{jk}\). Eq. (2) is consistent with the kinetic theory definition of the charge current, \(j^{i}_{\rm EM}=\sigma_{\rm el}E^{i}=\sum_{f}Q_{f}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}p^{0}}p^{i}f_{\rm EM}\), from which, depending on the number of quark flavors considered, the constant \(c\) can be determined. Accordingly, the photon production rate receives corrections due to the presence of a weak external electromagnetic field, \(\mathcal{R}^{\gamma}=\bar{\mathcal{R}}^{\gamma}+\mathcal{R}^{\gamma}_{\rm EM}\), with \(\mathcal{R}^{\gamma}_{\rm EM}\) linear in the field strength. The background rate \(\bar{\mathcal{R}}^{\gamma}\), which is entirely determined by \(\bar{f}\), has been applied previously to calculate photon production in heavy-ion collisions. After a space-time integral with respect to the medium evolution, it leads to the photon invariant spectrum, \[E_{p}\frac{d^{3}\bar{N}}{d^{3}\mathbf{p}}=\int_{V}\bar{\mathcal{R}}^{\gamma}(P,X)=\bar{v}_{0}(1+2\bar{v}_{2}\cos 2\phi_{p})\,, \tag{3}\] where \(X\) contains the space-time dependence in terms of the proper time \(\tau=\sqrt{t^{2}-z^{2}}\), the transverse coordinates \(x\), \(y\) and the space-time rapidity \(\eta_{s}=\mathrm{arctanh}(z/t)\). In this work, we take the beam axis along \(z\), and the \(x\)-\(z\) plane is the reaction plane. In Eq. (3), the Fourier decomposition of the invariant spectrum defines the direct photon yield \(\bar{v}_{0}\) and elliptic flow \(\bar{v}_{2}\) from the background, respectively. Similarly, one has for the corrections due to a weak magnetic field, \[E_{p}\frac{d^{3}N_{\rm EM}}{d^{3}\mathbf{p}}=\int_{V}\mathcal{R}^{\gamma}_{\rm EM}(P,X)=v_{0}^{\rm EM}(1+2v_{2}^{\rm EM}\cos 2\phi_{p})\,. \tag{4}\] Here, \(v_{0}^{\rm EM}\) and \(v_{2}^{\rm EM}\) should be understood as the additional photon yield and elliptic flow entirely associated with the corrections from the weak magnetic field. The final predictions for the direct photon emission are thereby \[v_{0}^{\gamma}=\bar{v}_{0}+v_{0}^{\rm EM}\,,\quad v_{2}^{\gamma}=\frac{\bar{v}_{2}\bar{v}_{0}+v_{2}^{\rm EM}v_{0}^{\rm EM}}{\bar{v}_{0}+v_{0}^{\rm EM}}\,. \tag{5}\] Let us now explain the effect of weak magnetic photon emission from the QGP. In the weak magnetic field scenario, the magnetic field is too weak to modify the perturbative QCD scattering processes, but it suffices to drive the medium slightly out of equilibrium.
The shift in the momentum distribution of the incoming quarks brings in an extra source of photon production on top of the \(2\to 2\) scatterings in both channels, which scales with temperature \(T\) as \(T^{4}\). Unlike the background contribution, where the photon elliptic flow is accumulated according to the space-time evolution of the azimuthal geometry of the medium, in the weak magnetic photon emission \(v_{2}^{\rm EM}\) is generated from two coupled effects: (1) a weak magnetic field which is oriented out of the reaction plane, and (2) the longitudinal dynamics of the background medium. In particular, for the elliptic emission one needs a rapidity-odd dipolar moment in the space-time geometry of the background medium. To show this, one first notices that the emitted photon spectrum is largely determined by the quark distribution function (cf. Eq. (1)), namely \(\mathcal{R}_{\rm EM}^{\gamma}\sim f_{\rm EM}\). Figure 1: A schematic demonstration of the weak magnetic photon emission: photon radiation with an elliptic anisotropy induced by a weak external magnetic field on top of a tilted QGP fireball. As shown in Fig. 1, with respect to a magnetic field out of the reaction plane, namely \(\vec{B}=B_{y}\hat{y}\), in the rate one expects a contribution from \(F^{\mu\nu}p_{\mu}u_{\nu}\sim B_{y}p_{x}u_{z}\propto\cos\phi_{p}\). Therefore, to realize an elliptic emission which scales as \(\sim\cos 2\phi_{p}\), an extra dipolar moment \(\cos\phi_{p}\) in the background quark distribution is required. Fortunately, in heavy-ion experiments this dipolar moment has already been confirmed. In fact, in terms of the observed rapidity-odd directed flow \(v_{1}^{\rm odd}\) [38; 39; 40] and rapidity-even dipolar flow \(v_{1}^{\rm even}\) [41; 42] of charged hadrons, there exist both odd and even dipolar moments in the evolving medium. Because the direct photons are measured in a symmetric rapidity window, we concentrate on the odd dipolar moment in this work. In Fig. 1, a schematic demonstration of the weak magnetic photon emission is shown with respect to a tilted fireball configuration, which contains a rapidity-odd dipolar moment in the background medium. In the Supplemental Material, the effect of the weak magnetic photon emission is verified in the case of Bjorken flow, where \(v_{2}^{\rm EM}=0.5\).
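The Bjorken-flow value quoted above can be made plausible with a short numerical sketch: if, after integrating over the medium, the field-induced rate behaves schematically as \(\cos\phi_{p}\) (from \(B_{y}p_{x}u_{z}\)) times a dipolar background moment \(\propto\cos\phi_{p}\), then projecting the resulting \(\cos^{2}\phi_{p}\) modulation onto the decomposition of Eq. (4) gives \(v_{2}^{\rm EM}=0.5\). This is a heuristic illustration only, not the derivation of the Supplemental Material.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
rate = np.cos(phi) ** 2                  # schematic: cos(phi) (field) x cos(phi) (dipole)

v0 = rate.mean()                         # angular average, cf. Eq. (4)
v2 = (rate * np.cos(2.0 * phi)).mean() / rate.mean()

print(round(v0, 3), round(v2, 3))        # 0.5 and 0.5: v2_EM = 0.5, the Bjorken-flow value
```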
**Hydrodynamical modeling with weak magnetic photon emission.--** In non-central heavy-ion collisions, the medium created by the colliding nuclei exhibits an asymmetric distribution in the longitudinal direction, due partly to the structure of the nucleus and partly to the effect of longitudinal fluctuations. Effectively, the asymmetry in the medium can be captured in terms of a tilted fireball, based on which hydrodynamical modeling reproduces the experimentally measured directed flow \(v_{1}^{\rm odd}\) of charged hadrons [40]. Following Ref. [40], we take the initial entropy density distribution \[s(\tau_{0},\vec{x}_{\perp},\eta_{s})\propto w(\eta_{s})[\chi N_{\rm coll}+(1-\chi)(N_{\rm part}^{+}w^{+}(\eta_{s})+N_{\rm part}^{-}w^{-}(\eta_{s}))]\,, \tag{6}\] where \(N_{\rm coll}\), \(N_{\rm part}^{+}\) and \(N_{\rm part}^{-}\) are the densities of binary collisions and of participants of the forward- and backward-going nuclei, respectively. As in the standard Glauber model, entropy production receives contributions from binary collisions and participants, with their relative weight determined by the constant \(\chi\). The longitudinal description in Eq. (6) is introduced via the functions \(w(\eta_{s})\) and \(w^{\pm}(\eta_{s})\). The symmetric longitudinal profile, \[w(\eta_{s})=\exp\left(-\theta(|\eta_{s}|-\eta_{M})\frac{(|\eta_{s}|-\eta_{M})^{2}}{2\sigma_{\eta}^{2}}\right) \tag{7}\] accounts for the longitudinal spectrum of charged hadrons, while \[w^{+}(\eta_{s})=\begin{cases}0\,,&\eta_{s}<-\eta_{T}\\ \frac{\eta_{T}+\eta_{s}}{2\eta_{T}}\,,&-\eta_{T}\leq\eta_{s}\leq\eta_{T}\\ 1\,,&\eta_{s}>\eta_{T}\end{cases} \tag{8}\] and \(w^{-}(\eta_{s})=w^{+}(-\eta_{s})\) give rise to the rapidity-odd component. For a given collision centrality, the spatial geometry of the distribution then relies entirely on the parameters \(\eta_{T}\), \(\eta_{M}\) and \(\sigma_{\eta}\), which we choose as in Ref. [40]. Note in particular that \(\eta_{T}\) determines the extent to which the fireball is tilted.
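For reference, the longitudinal weighting functions of Eqs. (7)-(8) can be written out directly; the parameter values below are placeholders rather than the centrality-dependent choices of Ref. [40].

```python
import numpy as np

def w_sym(eta_s, eta_M=2.5, sigma_eta=1.0):
    """Symmetric plateau profile of Eq. (7)."""
    x = np.abs(eta_s)
    return np.exp(-np.where(x > eta_M, (x - eta_M) ** 2 / (2.0 * sigma_eta**2), 0.0))

def w_plus(eta_s, eta_T=2.0):
    """Rapidity-odd participant weight of Eq. (8) for the forward-going nucleus."""
    return np.clip((eta_T + np.asarray(eta_s, dtype=float)) / (2.0 * eta_T), 0.0, 1.0)

def w_minus(eta_s, eta_T=2.0):
    return w_plus(-np.asarray(eta_s, dtype=float), eta_T)

eta = np.linspace(-4.0, 4.0, 9)
print(w_sym(eta))
print(w_plus(eta), w_minus(eta))
```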
With respect to the initial condition Eq. (6), we solve 3+1 dimensional viscous hydrodynamics using the state-of-the-art MUSIC program [43; 44], which has also been used for the calculation of the background direct photon spectrum, i.e., \(\bar{v}_{0}\) and \(\bar{v}_{2}\). To be consistent with the previous calculations in Ref. [13], we consider the weak magnetic photon emission from the QGP between the initial time \(\tau_{0}=0.4\) fm/c and an effective crossover temperature \(T_{c}=145\) MeV. The evolution of the magnetic field along with the QGP medium contributes the most significant theoretical uncertainty to the predicted direct photon spectrum. In vacuum, the fastest decay of the external magnetic field is expected, which is analytically described by the Lienard-Wiechert potential with respect to the moving nuclei [22]. Figure 2: Direct photon elliptic flow at RHIC at different centralities. Green dash-dotted lines are from hydrodynamical modeling without the effect of an external magnetic field [13]. Final results including also the weak magnetic photon emission are shown as colored bands. Experimental data are from Ref. [9]. However, in the case of a conducting QGP, the magnetic field in the medium could experience a much slower decay. In order to verify the validity of the condition of weak magnetic photon emission, we consider the worst-case scenario with a space-time profile of the external magnetic field based on vacuum decay, \(\Gamma(\tau,\eta_{s})\), and neglect the dependence on the transverse coordinates, \[eB_{y}(\tau,\eta_{s})=eB_{y}^{0}\;\Gamma(\tau,\eta_{s})\:. \tag{9}\] The profile is normalized such that the parameter \(eB_{y}^{0}\) identifies the field strength at the initial time \(\tau_{0}\) and at \(\eta_{s}=0\). It should be emphasized that within this assumption the effect of the external magnetic field is suppressed, and accordingly the parameter \(eB_{y}^{0}\) only provides an upper bound on the field strength at the initial time. We do not calculate the direct photon spectrum \(\bar{v}_{0}\) and elliptic flow \(\bar{v}_{2}\) from the background medium directly; instead, we extract them from the most up-to-date hydrodynamical modeling in Ref. [13], where a variety of sources of photon emission have already been included. For instance, prompt photons produced from the initial hard scatterings are obtained via a pQCD calculation at NNLO, while photons from thermal radiation of the QGP are calculated with respect to the \(2\to 2\) scattering amplitudes determined via pQCD at leading-log order [45]. To separately calculate \(v_{0}^{\rm EM}\) and \(v_{2}^{\rm EM}\), we consider the \(u\) and \(d\) quarks contributing to the photon emission. We take the small angle approximation for the photon production rate [36; 46], with respect to the magnetic-field-induced correction to the quark distribution function \(f_{\rm EM}\) in Eq. (2). To be consistent with the background calculations, we take the pQCD evaluation of the QGP electrical conductivity, \(\sigma_{\rm el}/T\approx 5.98\) [28; 45]. With respect to the QGP evolution characterized by hydrodynamical modeling for a tilted fireball condition, we find \(v_{2}^{\rm EM}\approx 0.6\), which is slightly larger than that from a simple Bjorken flow. Note that the value of \(v_{2}^{\rm EM}\) does not depend on the magnitude of the magnetic field. Once \(v_{0}^{\rm EM}\) and \(v_{2}^{\rm EM}\) are given, the yield and the elliptic flow of direct photons in heavy-ion collisions can be obtained according to Eq. (5). **Direct photon \(v_{2}^{\gamma}\).--** In Fig. 2, the final results on the direct photon elliptic flow from RHIC AuAu collisions at \(\sqrt{s_{NN}}=0.2\) TeV are shown for the corresponding three centrality classes. Compared to the background contributions (green lines), the elliptic flow of direct photons is enhanced by the weak magnetic photon emission. Moreover, with the value of \(eB_{y}^{0}\) properly chosen, the resulting model prediction reproduces the experimental data. This allows us to extract the value of \(eB_{y}^{0}\). We find that as centrality grows, the extracted value of \(eB_{y}^{0}\) systematically increases, from \(|eB_{y}^{0}|=0.02m_{\pi}^{2}\) at the 0-20% centrality class, to \(|eB_{y}^{0}|=0.04m_{\pi}^{2}\) at the 20-40% centrality class, and \(|eB_{y}^{0}|=0.08m_{\pi}^{2}\) at the 40-60% centrality class. All these values satisfy the weak magnetic field condition, \(|eB|/m_{\pi}^{2}\ll 1\). Weak magnetic photon emission leads to a minor increase in the direct photon yield, which, in the 20-40% centrality class, is about 10%. In Fig. 3, the direct photon elliptic flow is shown similarly for the PbPb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Compared to the RHIC data, the LHC measurements carry larger experimental uncertainties. Nevertheless, with the effect of weak magnetic photon emission, the agreement of the resulting elliptic flow with the data is improved significantly. Following the same strategy, we extract the value of \(eB_{y}^{0}\) in the centrality classes 0-20% and 20-40%, leading to \(|eB_{y}^{0}|=0.03m_{\pi}^{2}\) and \(|eB_{y}^{0}|=0.05m_{\pi}^{2}\), respectively. We also investigate the effect of the background dipolar moment by varying the parameter \(\eta_{T}\). As in Ref. [40], we take \(\eta_{T}\) approximately between 40% of \(y_{\rm beam}\) and \(y_{\rm beam}-2.5\), so that the tilted fireball captures the measured \(v_{1}^{\rm odd}\) of charged hadrons. As expected, and as shown by the colored bands in Fig. 2 and Fig. 3, the effect of weak magnetic photon emission is stronger for a larger dipolar moment.
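To make the enhancement mechanism of Eq. (5) concrete, the following sketch combines a background spectrum with a small weak-field contribution, using the ballpark numbers quoted above (an EM yield of roughly 10% of the background and \(v_{2}^{\rm EM}\approx 0.6\)); the background values themselves are placeholders, not the output of Ref. [13].

```python
def combine_v2(v0_bg, v2_bg, v0_em, v2_em):
    """Direct photon yield and elliptic flow from Eq. (5)."""
    v0 = v0_bg + v0_em
    v2 = (v2_bg * v0_bg + v2_em * v0_em) / v0
    return v0, v2

v0_bg, v2_bg = 1.0, 0.02          # placeholder background (arbitrary yield units)
v0_em, v2_em = 0.10 * v0_bg, 0.6  # ~10% extra yield and v2_EM ~ 0.6, as quoted above

print(combine_v2(v0_bg, v2_bg, v0_em, v2_em))
# (1.1, ~0.073): a small but highly anisotropic addition lifts v2 well above the background value
```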
Figure 3: Direct photon elliptic flow at LHC at different centralities. Green dash-dotted lines are from hydrodynamical modeling without the effect of an external magnetic field [13]. Final results including also the weak magnetic photon emission are shown as colored bands. Experimental data are from Ref. [10]. **Summary and discussion.--** We propose the weak magnetic photon emission as an extra source of photon production from the QGP. Although it only results in a small correction to the photon production rate, the resulting spectrum can be highly anisotropic, due to the coupled effect of a weak magnetic field and the non-trivial longitudinal dynamics of the background QGP. With respect to a tilted fireball and realistic 3+1 dimensional hydrodynamical simulations, a significant direct photon elliptic flow is indeed obtained with a weak external magnetic field, which verifies the effect of weak magnetic photon emission. With the updated hydrodynamical modeling including the weak magnetic photon emission, the field strength at the initial time and at \(\eta_{s}=0\) can be estimated. In this work, we find the correct centrality dependence of the extracted \(eB_{y}^{0}\), as it increases towards peripheral collisions. However, at RHIC energies, these magnitudes are several times larger compared to theoretical expectations. This overestimation, as mentioned previously, is partly due to the space-time profile of the external magnetic field we have adopted. In addition, the triangular moment, which was not considered here, can contribute to the photon elliptic flow as well. Weak magnetic photon emission can be generalized to higher-order flow harmonics of the direct photons. For instance, in a weak magnetic field, the longitudinally dependent elliptic moment in the QGP would generate a direct photon \(v_{3}^{\gamma}\), while the longitudinal dynamics of a triangular moment can contribute to \(v_{4}^{\gamma}\), etc. These non-trivial correlations between the longitudinal flow of charged hadrons and the spectrum of direct photons should be studied more systematically in future works, both theoretically and experimentally. **Acknowledgements.--** We are grateful for very helpful discussions with Charles Gale and Xu-Guang Huang. This work is supported in part by NSFC Grant No. 11975079.
2305.03457
Parallelization of frequency domain quantum gates: manipulation and distribution of frequency-entangled photon pairs generated by a 21 GHz silicon micro-resonator
Harnessing the frequency dimension in integrated photonics offers key advantages in terms of scalability, noise resilience, parallelization and compatibility with telecom multiplexing techniques. Integrated ring resonators have been used to generate frequency-entangled states through spontaneous four-wave mixing. However, state-of-the-art integrated resonators are limited by trade-offs in size, number of frequency modes and spectral separation. We have developed silicon ring resonators with a footprint below 0.05 mm² providing more than 70 frequency channels separated by 21 GHz. We exploit the narrow frequency separation to parallelize and independently control 34 single-qubit gates with off-the-shelf electro-optic devices. This allows us to fully characterize 17 frequency-bin maximally-entangled qubit pairs by performing quantum state tomography. We demonstrate for the first time a fully connected 5-user quantum network in the frequency domain. These results are a step towards a new generation of quantum circuits implemented with scalable silicon photonics technology, for applications in quantum computing and secure communications.
Antoine Henry, Dario Fioretto, Lorenzo M. Procopio, Stéphane Monfray, Frédéric Boeuf, Laurent Vivien, Eric Cassan, Carlos Ramos, Kamel Bencheikh, Isabelle Zaquine, Nadia Belabas
2023-05-05T12:00:34Z
http://arxiv.org/abs/2305.03457v1
Parallelization of frequency domain quantum gates: manipulation and distribution of frequency-entangled photon pairs generated by a 21 GHz silicon micro-resonator ###### Abstract Harnessing the frequency dimension in integrated photonics offers key advantages in terms of scalability, noise resilience, parallelization and compatibility with telecom multiplexing techniques. Integrated ring resonators have been used to generate frequency-entangled states through spontaneous four-wave mixing. However, state-of-the-art integrated resonators are limited by trade-offs in size, number of frequency modes and spectral separation. We have developed silicon ring resonators with a footprint below 0.05 mm\({}^{2}\) providing more than 70 frequency channels separated by 21 GHz. We exploit the narrow frequency separation to parallelize and independently control 34 single-qubit gates with off-the-shelf electro-optic devices. This allows us to fully characterize 17 frequency-bin maximally-entangled qubit pairs by performing quantum state tomography. We demonstrate for the first time a fully connected 5-user quantum network in the frequency domain. These results are a step towards a new generation of quantum circuits implemented with scalable silicon photonics technology, for applications in quantum computing and secure communications. ## 1 Introduction Frequency encoding provides a resource-efficient way to access a high-dimensional Hilbert space within a single spatial mode, opening the way for scalable quantum information processing. In this scheme, photons can be generated in a superposition of different frequency modes. The superpositions are very robust against phase noise in long-distance propagation. Photon pairs can be generated through non-linear interactions such as spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM). The frequency bins are created by external filtering of a wideband nonlinear source [1, 2, 3, 4], or by exploiting the inherently discrete frequencies of a resonator [5, 6, 7, 8]. The silicon-on-insulator (SOI) technology provides key advantages for the generation of correlated and entangled photon pairs, including scalability and the availability of a wide library of high-performance optical and optoelectronic devices. Indeed, silicon photonics has been identified as an enabling technology for quantum information [9]. In particular, it is possible to generate bi-photon frequency combs through spontaneous four-wave mixing (SFWM) in high-quality-factor microresonators [10, 11, 12, 13]. Moreover, in the telecom wavelength range, where SOI is transparent and efficient for SFWM, off-the-shelf filters, demultiplexers and modulators are available, and long-distance interaction between future quantum processors or communication nodes can be achieved using existing classical telecom infrastructure. The frequency bins can be manipulated using non-linear processes like optical frequency conversion [14, 15]. However, this approach may be hampered by limited configurability and by the optical noise from the pump required to generate the nonlinear phenomena. When generated at telecom wavelengths, near 1550 nm, frequency bins can be manipulated using off-the-shelf telecom devices such as electro-optic phase modulators (EOM) [16, 17, 18] and programmable filters (PF) [3, 5, 7, 19].
It has been shown that the combination of two EOMs and a PF allows arbitrary qubit transformations if the qubit mode spacing \(\Delta f\) is equal to the radio frequency (RF) \(\Omega\) driving the EOMs [20, 21]. The reconfigurability of such quantum frequency processors allows for various applications [22, 23, 24]. The limited bandwidth of standard telecom EOMs limits the maximum qubit frequency spacing, \(\Delta f\), to a few tens of gigahertz. In addition, a low \(\Delta f\) boosts the spectral efficiency, i.e., maximizes the achievable Hilbert-space dimension for a given available source bandwidth. Yet, achieving such a narrow frequency spacing with integrated resonators is a challenging task. Spectral separation of 40-50 GHz has been achieved using silicon nitride ring resonators with a radius near 500 µm, yielding a footprint exceeding 1 mm\({}^{2}\) [5, 6, 25]. Frequency separation of only \(\Delta f\approx 20\) GHz has recently been achieved by combining silicon rings with resonances separated by 200 GHz [7, 8]. This clever design nevertheless gives rise to variable qubit mode spacing and limits the number of achievable frequency modes. The generated frequency-entangled quantum states can be characterized with electro-optic modulators and programmable filters. A single modulator does not allow for unitary control of a photonic qubit, but it permits quantum state tomography of high-dimensional states [3, 5, 6, 7, 25]. Quantum state tomography can be performed with parallelizable unitary operations on entangled qubits if the spacing is smaller than the RF bandwidth. In [1], frequency-entangled qubits were generated from a periodically poled lithium niobate SPDC source filtered by a fibered etalon frequency comb with a spacing of 25 GHz, and quantum state tomography was performed with the [EOM-PF-EOM] configuration. The parallelization of two gates allowed the control of two independent frequency qubits. In this paper, we report the parallelization of 34 tunable electro-optic frequency-domain quantum gates, all implemented with a single [EOM-PF-EOM] configuration. To this end, we develop an SOI spiral ring resonator with a footprint below 0.05 mm\({}^{2}\) and a frequency channel separation \(\Delta f=21\) GHz. The rings are fabricated using STMicroelectronics' silicon photonics R&D and manufacturing platform based on 300-mm SOI wafers and 193-nm deep-ultraviolet (DUV) lithography, ensuring compatibility with large-scale production [26]. The narrow spectral separation allows photon pair generation through SFWM on more than 70 frequency modes over a 1.4 THz bandwidth. In addition, the 21 GHz spectral separation allows implementing parallel and arbitrary qubit transformations, based on the [EOM-PF-EOM] scheme implemented with commercially available electro-optic devices. We perform quantum state tomography on frequency-domain maximally entangled photons. Based on this approach, we perform a proof-of-concept demonstration of a fully connected quantum network, where 5 users each share a secure key with every other user, using frequency-bin entangled qubits generated by our broadband silicon photon pair source and controlled by 34 parallel tunable electro-optic quantum gates. ## 2 21 GHz Silicon On Insulator spiral micro-resonator Our photon pairs are generated through SFWM in an SOI micro-resonator (MR). The integrated photonic devices were fabricated using STMicroelectronics' silicon photonics R&D and manufacturing platform based on 300 mm SOI wafers.
The structures were defined with DUV lithography and transferred to the silicon layer with reactive ion etching [26]. A 2 mm-thick PMMA layer was deposited over the chip surface for protection. The thickness of the guiding silicon layer is 300 nm. A waveguide width of 700 nm is chosen to yield small anomalous dispersion near 1550 nm wavelength. The ring is shaped as a spiral to reduce the footprint of the device. The spiral waveguide length is set to 3.54 mm to yield a free spectral range (FSR) near 21 GHz, which determines the frequency channel separation (\(\Delta f\)). The waveguide bends follow a Bezier trajectory to minimize losses due to mode mismatch between straight and curved waveguides [27]. The spiral resonator has a size of 165 \(\mu\)m by 255 \(\mu\)m (\(<\)0.05 mm\({}^{2}\)). The sample temperature is tuned and stabilized by a Peltier module at 25\({}^{\circ}\)C \(\pm\) 0.01\({}^{\circ}\)C. Cleaved SMF28 fibers, set at a 15\({}^{\circ}\) incidence angle, are used to couple light in and out of the chip through single-etch grating couplers. The measured "fiber-to-fiber" insertion loss is 7.6 dB, i.e., 3.8 dB per coupler, including propagation loss. An experimental value of FSR = \(21.18\pm 0.85\) GHz is obtained by measuring the frequency spacing of adjacent resonances from 1526.7 nm to 1565 nm. The resonator modes exhibit a full width at half maximum (FWHM) of 600 MHz, leading to a quality factor \(Q\simeq 3\times 10^{5}\). We tune the optical pump frequency \(\omega_{p}\) on a resonator mode so that signal and idler photons are emitted on symmetric resonances at frequencies \(\frac{\omega_{p}}{2\pi}\pm n\cdot\) FSR (\(n\in\mathbb{N}\)). The measured FSR varies by less than 20 MHz over the whole measured SFWM spectrum, where the correlated pairs can be observed from 1526.7 to 1553 nm. The quantum state \(\ket{\phi}\) of the generated photon pairs is therefore \[\ket{\phi}=\frac{1}{\sqrt{N}}\sum_{n=1}^{N}e^{i\alpha_{n}}\ket{I_{n}}\ket{S_{n}}, \tag{1}\] where \(I_{n}\) and \(S_{n}\) stand for idler and signal frequencies with \(I_{n}=\frac{\omega_{p}}{2\pi}-n\cdot\)FSR and \(S_{n}=\frac{\omega_{p}}{2\pi}+n\cdot\)FSR, respectively. \(\alpha_{n}\) corresponds to the bi-photon residual spectral phase [6], within the 600 MHz linewidth of the resonances. The ring resonator intrinsically produces frequency-bin entangled qudits of dimension \(N\), where \(N\) is limited here by the bandwidth of the programmable filters (5 THz) and phase-matching conditions.
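As a consistency check on these numbers (a sketch only; the group index is inferred here and is not quoted in the text), the free spectral range of a resonator with round-trip length \(L\) is FSR \(=c/(n_{g}L)\) and the quality factor is \(Q=\nu_{0}/\Delta\nu\):

```python
c = 299_792_458.0        # m/s

L = 3.54e-3              # m, spiral round-trip length
FSR = 21.18e9            # Hz, measured free spectral range
n_g = c / (FSR * L)      # implied group index (inferred, not quoted in the text)

lam = 1540e-9            # m, pump wavelength
Q = (c / lam) / 600e6    # optical frequency divided by the 600 MHz FWHM

print(round(n_g, 2), f"{Q:.2e}")  # ~4.0 and ~3.2e5, consistent with Q ~ 3x10^5
```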
Figure 1: a. Setup for measurement of the Joint Spectral Intensity. BP: band-pass filter, NF: notch filter, PF: programmable filter, PC: polarization controller, SNSPDs: superconducting single-photon detectors. b. Joint spectral intensity measurement for every accessible signal-idler pair from \(n=3\) to \(n=83\). To characterize the spectral correlation, we measure the Joint Spectral Intensity of the biphoton state. We use the setup shown in Fig. 1.a. A bandpass filter (BP) is used to filter out the amplified spontaneous emission of the CW pump laser at 1540 nm by up to 40 dB. Two fibered Bragg notch filters with a bandwidth of 80 GHz are used at the output of the SOI MR to filter out 70 dB of the laser light, and additional filtering of the laser light is done with the programmable filter, ensuring more than 100 dB of pump rejection. The spectrally correlated photon pairs are spatially separated by the same programmable filter, sent to Superconducting Nanowire Single Photon Detectors (SNSPDs) with a 70 % quantum efficiency, and analyzed using the Swabian TimeTagger Ultra (TT). In this paper, the coincidences are taken in a 1 ns window. The dead time of the detectors is \(\approx\)20 ns, allowing maximum efficiency for the range of count rates observed in this work (on the order of 40 kHz). The combined time jitter of the SNSPDs and the TT is around 120 ps. In Fig. 1.b, we plot the number of coincidences between the signal and idler paths as a function of the selected resonance number \(n\) associated with the frequency pairs \(I_{n},S_{n}\). This spectral distribution of the photon pairs corresponds to the diagonal elements of the Joint Spectral Intensity. We deduce an internal brightness of \((15.87\pm 0.03)\times 10^{6}\) pairs/s, and measure a heralded \(g^{(2)}(0)=0.057\pm 0.007\), for an on-chip power of 0.75 mW. The low FSR of our resonator allows access to a larger number of resonances compared to previous works [5, 6, 25], providing possibilities for parallelizing operations over a larger number of qubits and prospects for processing higher-dimensional quantum states. The long cavity nonetheless leads to higher propagation losses. The overall decrease of the coincidences with increasing \(n\) is linked to the spectral transmission of the grating couplers.
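A minimal sketch of the coincidence counting used above is given below, with hypothetical timestamp arrays standing in for the time-tagger output; two detections are counted as coincident when they fall within the 1 ns window.

```python
import numpy as np

def count_coincidences(t_signal, t_idler, window=1e-9):
    """Count signal-idler detections whose arrival times fall within `window` (s) of each other."""
    t_idler = np.sort(t_idler)
    lo = np.searchsorted(t_idler, t_signal - window / 2.0, side="left")
    hi = np.searchsorted(t_idler, t_signal + window / 2.0, side="right")
    return int(np.sum(hi - lo))

rng = np.random.default_rng(0)
t_pairs = np.sort(rng.uniform(0.0, 1.0, 40_000))           # hypothetical pair emission times (1 s run)
t_sig = t_pairs + rng.normal(0.0, 60e-12, t_pairs.size)    # detector/time-tagger jitter, split per channel
t_idl = t_pairs + rng.normal(0.0, 60e-12, t_pairs.size)

print(count_coincidences(t_sig, t_idl))                    # ~40,000 true coincidences plus a few accidentals
```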
## 3 Frequency-domain Quantum State Tomography In this section, we demonstrate quantum state tomography of a two-dimensional maximally entangled frequency state produced by the 21 GHz SOI micro-resonator. A first programmable filter PF1 (see Fig. 2) selects two adjacent signal and idler mode pairs from the high-dimensional two-photon state of Eq. 1 coming out of the resonator to produce the biphoton maximally entangled state \(\frac{1}{\sqrt{2}}(|I_{n}S_{n}\rangle+|I_{n+1}S_{n+1}\rangle)\). To perform the tomography of this state, single-qubit rotations in the frequency domain are necessary. We implement those using EOMs and PFs as demonstrated in [21] (see Appendix A). The setup is shown in Fig. 2 and uses these devices sequentially in an [EOM-PF-EOM] configuration. RF driving of the EOM at \(\Omega\)=FSR is possible here because the low FSR of our micro-resonator is compatible with the 40 GHz bandwidth of the EOM. With this configuration, the [EOM-PF-EOM] device shown in Fig. 2 can achieve parallel independent manipulation of the signal and idler photons. Fig. 2 shows a simplified setup for the quantum state tomography. In the full setup, polarizers are present before each EOM. Before each detector, a polarizer is also added to control the input polarization, and a fibered notch filter around 900 nm is placed to suppress parasitic calibration light coming from the PF. Previously reported tomography in the spectral domain used a single EOM [5, 6, 7, 25]. Such a simple setup can be used when the frequency spacing between two modes is too large to match the RF driving frequency \(\Omega\) and then corresponds to a multiple of \(\Omega\). This setup has the advantage of low loss but does not allow independent unitary operations on parallel qubits. In [1], quantum state tomography was performed using two parallel tunable quantum gates with the same [EOM-PF-EOM] configuration. A Bayesian method based on measurements only in the \(\mathbb{Z}\) and \(\mathbb{X}\) bases was used to reconstruct the density matrix, and a fidelity of 0.92\(\pm\)0.01 to a \(|\Psi^{+}\rangle\) entangled state was obtained. Our logical qubits are defined as follows: \(|0_{n}^{X}\rangle=|X_{n}\rangle\) and \(|1_{n}^{X}\rangle=|X_{n+1}\rangle\), where \(X=S\), \(I\) refers to the logical signal or idler qubit (see Fig. 2). To perform the quantum state tomography, we need to project the two qubits onto four state vectors belonging to three different bases \(\mathbb{Z}=\left\{\left|0_{n}^{X}\right\rangle,\left|1_{n}^{X}\right\rangle\right\}\), \(\mathbb{X}=\left\{\left|+_{n}^{X}\right\rangle,\left|-_{n}^{X}\right\rangle\right\}\) and \(\mathbb{Y}=\left\{\left|{+i}_{n}^{X}\right\rangle,\left|{-i}_{n}^{X}\right\rangle\right\}\), where \(\left|\pm_{n}^{X}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0_{n}^{X}\right\rangle\pm\left|1_{n}^{X}\right\rangle\right)\) and \(\left|{\pm i}_{n}^{X}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0_{n}^{X}\right\rangle\pm i\left|1_{n}^{X}\right\rangle\right)\). The quantum gate (see Fig. 2) allows us to choose between the \(\mathbb{Z}\) and \(\mathbb{X}\) bases. To access the \(\mathbb{Y}\) basis, we use PF1 to apply a relative phase shift \(\phi_{i}\) (\(\phi_{s}\)) between the modes of the idler (signal) qubit. \(C_{a,b}\) denotes the number of coincidences corresponding to the projections onto vectors \(\left|a\right\rangle\) and \(\left|b\right\rangle\) from the three different bases \(\mathbb{X}\), \(\mathbb{Y}\) and \(\mathbb{Z}\). Table 1 shows the recorded coincidences for the 16 projections performed on the two-qubit state for n = 34. Fig. 3 shows the reconstructed density matrix. Comparing it to the density matrix of a maximally entangled state \(\left|\phi^{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|I_{34},S_{34}\right\rangle+\left|I_{35},S_{35}\right\rangle\right)\), we obtain a fidelity \(\mathcal{F}=0.961\pm 0.007\). The errors are calculated using Monte-Carlo methods. \begin{table} \begin{tabular}{|c|c||c|c|} \hline Projections & Coinc. & Projections & Coinc. \\ \hline \hline \(C_{0,0}\) & 1548 & \(C_{+,0}\) & 716 \\ \(C_{0,1}\) & 36 & \(C_{+,1}\) & 767 \\ \(C_{0,+}\) & 622 & \(C_{+,+}\) & 1275 \\ \(C_{0,+i}\) & 663 & \(C_{+,+i}\) & 608 \\ \hline \(C_{1,0}\) & 22 & \(C_{+i,0}\) & 837 \\ \(C_{1,1}\) & 1553 & \(C_{+i,1}\) & 695 \\ \(C_{1,+}\) & 692 & \(C_{+i,+}\) & 723 \\ \(C_{1,+i}\) & 664 & \(C_{+i,+i}\) & 42 \\ \hline \end{tabular} \end{table} Table 1: Coincidences for the two-photon projections \(C_{a,b}\), integrated for 125 seconds, in a coincidence window of 1 ns. Figure 2: Setup for the quantum state tomography. PF: programmable filters, EOM: electro-optic phase modulators. Insets show the action of the programmable filters on the frequency modes. PF1 is used both as an amplitude filter to select the four modes of the two qubits, and as a phase gate implementing phases \(\phi_{i}\) and \(\phi_{s}\) on the frequency modes \(I_{n}\) and \(S_{n}\). The boxed device implements an Identity or Hadamard gate on the qubits. All the projections required for the tomography are accessible with these two gates. PF3 selects two modes \(I_{p}\) and \(S_{q}\), where \(p,q\in\{n,n+1\}\).
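A minimal linear-inversion sketch of this reconstruction, using the counts of Table 1, is shown below. The analysis in the text may differ in detail (e.g., maximum-likelihood constraints and the Monte-Carlo error estimate), and no positivity constraint is enforced here, so the resulting fidelity is only expected to be close to the quoted value.

```python
import numpy as np

# Single-qubit analysis states used for the 16 projections
k0, k1 = np.array([1, 0], complex), np.array([0, 1], complex)
states = {"0": k0, "1": k1,
          "+": (k0 + k1) / np.sqrt(2), "+i": (k0 + 1j * k1) / np.sqrt(2)}

# Coincidence counts from Table 1 (each setting integrated for 125 s)
counts = {("0", "0"): 1548, ("0", "1"): 36,   ("0", "+"): 622,  ("0", "+i"): 663,
          ("1", "0"): 22,   ("1", "1"): 1553, ("1", "+"): 692,  ("1", "+i"): 664,
          ("+", "0"): 716,  ("+", "1"): 767,  ("+", "+"): 1275, ("+", "+i"): 608,
          ("+i", "0"): 837, ("+i", "1"): 695, ("+i", "+"): 723, ("+i", "+i"): 42}

# Hermitian operator basis: tensor products of Pauli matrices
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
basis = [np.kron(a, b) for a in paulis for b in paulis]

# Linear inversion: solve Tr(X * proj_k) = C_k for X proportional to rho
A = np.zeros((16, 16))
b = np.zeros(16)
for r, (ia, ib) in enumerate(counts):
    v = np.kron(states[ia], states[ib])
    proj = np.outer(v, v.conj())
    b[r] = counts[(ia, ib)]
    A[r] = [np.real(np.trace(B @ proj)) for B in basis]

x = np.linalg.solve(A, b)
rho = sum(xm * B for xm, B in zip(x, basis))
rho /= np.real(np.trace(rho))

phi_plus = (np.kron(k0, k0) + np.kron(k1, k1)) / np.sqrt(2)
fidelity = np.real(phi_plus.conj() @ rho @ phi_plus)
print(round(fidelity, 3))  # expected to be close to the 0.961 quoted in the text
```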
Having checked the fidelity of the produced frequency-entangled state, we take advantage of the broad bandwidth of our source and the versatility of the [EOM-PF-EOM] setup to parallelize the measurement of such high-quality entangled states by performing 34 quantum gates on 17 frequency-entangled qubit pairs, in a first implementation of a frequency-encoded quantum communication protocol. ## 4 Frequency-bin entangled photons for fully connected networks In this section, we show a proof of principle of a fully connected network of up to 5 users in which every user can share a secure key with every other user, using frequency-bin entangled qubits generated by our broadband photon pair source. In previous implementations or proof-of-principle experiments based on time-bin or polarization-based entangled sources [28, 29, 30, 31], the main challenge was phase or polarization stabilization. In addition to the previously mentioned assets, the frequency degree of freedom that we harness here has the advantage of requiring no phase stabilization for superposition analysis. In this setup, polarization control is nevertheless necessary when entering the modulators. Recent advances have proposed schemes using polarization-diversity electro-optic modulators (EOMs) to address this challenge [32], which is a relevant additional asset for real-world applications. As we encode frequency qubits on distinct pairs of adjacent frequencies, one can use a PF as a demultiplexer to spectrally separate and distribute the pairs to the respective users to create a network. For each mode pair, compensation for the bi-photon residual phase is required in order to produce the desired \(|\phi^{+}\rangle\) states [6]. To validate our scheme, we first show that the parallelization of the quantum gate enables measurement of entanglement for qubits separated by two guard modes, allowing for a measured crosstalk \(\leq 10^{-3}\) between adjacent quantum gates (see Appendix A). Fig. 4 shows the fidelity for the accessible frequency-bin entangled pairs. We start from the \(n=10\) resonance from the pump frequency, to avoid any residual pump laser leakage. The fidelity is higher than 0.8 for 14 pairs. The few lower values are related to the dips observed in the coincidence spectrum shown in Fig. 1 around the 50th and 70th resonances from the pump. Important metrics for quantum networks are the key rate and the qubit error rate. We deduce these parameters from our coincidence measurements, using the method proposed in [33]. We use the coincidences in the \(\mathbb{Z}\) basis (\(C_{0,0}\), \(C_{0,1}\), \(C_{1,0}\), \(C_{1,1}\)) and the \(\mathbb{X}\) basis (\(C_{+,+}\), \(C_{+,-}\), \(C_{-,+}\) and \(C_{-,-}\)) to compute the raw coincidence rate, QBER and sifted key rate (see Appendix B). Fig. 5.a shows, for each accessible pair, the number of raw coincidences \(R_{raw}\) and the qubit error rate QBER. Figure 3: Numerical reconstruction of the experimental density matrix of a two-qubit frequency-bin entangled state generated by the SOI resonator + PF1. a.: real part, and b.: imaginary part. By comparing it with Fig. 4, we see that, as expected, the qubits with a lower raw key rate (and higher QBER) are also the ones with the lower fidelity to an entangled state. A given pair achieves secure quantum communication only if the corresponding QBER is lower than the positive key rate threshold of 11% [34]. Among the 17 accessible pairs, only 12 satisfy this condition. A sifted key rate in bit/s can also be derived from the raw rate \(R_{raw}\) using the method in [29], as depicted in Fig. 5.b.
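The sketch below indicates how such quantities can be computed from the Z- and X-basis coincidences; the exact accounting of Refs. [29, 33] is not reproduced, so this is only an asymptotic BBM92-style estimate, and the X-basis counts used in the example are hypothetical.

```python
import numpy as np

def h2(p):
    """Binary entropy function."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def qber_and_rates(cz, cx, t_int):
    """cz = (C00, C01, C10, C11), cx = (C++, C+-, C-+, C--), t_int in seconds.
    For a phi+ state, the anti-correlated outcomes are errors in both bases."""
    qber = 0.5 * ((cz[1] + cz[2]) / sum(cz) + (cx[1] + cx[2]) / sum(cx))
    r_raw = (sum(cz) + sum(cx)) / t_int                   # raw coincidence rate
    r_sift = 0.5 * r_raw                                  # both users keep matching-basis events
    r_secure = r_sift * max(0.0, 1.0 - 2.0 * h2(qber))    # asymptotic bound, vanishing near QBER ~ 11%
    return qber, r_sift, r_secure

# Z-basis counts from Table 1 and hypothetical X-basis counts, 125 s integration
print(qber_and_rates((1548, 36, 22, 1553), (1200, 60, 70, 1180), 125.0))
```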
The inset in Fig. 5.b shows how the distribution of 10 photon pairs is used to create a node-free quantum network of 5 users. In this proof-of-principle experiment, the achieved key rates (0.5 to 2.5 bits/second) are limited by the performance of the source, as well as by the insertion loss of the devices (14 dB for the 3 PFs and the 2 EOMs). Integrating these components on the same chip would help to limit losses and improve the overall insertion loss of the [EOM-PF-EOM] configuration to 73% [35]. Several methods have been proposed to achieve the integration of elements allowing the manipulation of frequency-bin qubits [36, 37]. It is also worth noting that the number of accessible pairs could be increased by setting the pump frequency at the center of the PF bandwidth. Figure 4: Fidelity to a maximally entangled state for several frequency-bin entangled photon pairs. The x axis corresponds to the spectral distance of the frequency qubit from the pump frequency. Figure 5: a.: Raw coincidences (bars) and qubit error rate (QBER) (dots) between two users, and b.: sifted key rate, calculated using the method in [29], as a function of \(n\), the spectral detuning from the pump. ## 5 Conclusion In this paper, we introduce an SOI micro-resonator for producing photon pairs at telecom wavelengths through SFWM and encoding frequency-bin qubits on adjacent pairs of resonances. Due to the broadband emission of the source and the 21 GHz FSR, we can generate up to 17 pairs of frequency-entangled qubits. We manipulate these qubits using quantum gates based on a single set of devices composed of one programmable filter between two electro-optic phase modulators, enabling independent control of each qubit. This demonstrates the potential of these reprogrammable quantum gates for quantum information applications. We demonstrate quantum state tomography with these quantum gates on parallel qubits, and assess a fidelity to a \(|\phi^{+}\rangle\) entangled state of over 0.8 for 14 of them. Finally, we demonstrate a local proof-of-concept of a fully connected network and compute the key rate and QBER of 17 photon pairs, taking into account the threshold for attacks on the quantum channels and error correction. We can distribute 10 photon pairs and create a fully connected network of up to 5 users. To scale up this approach, it is possible to encode and manipulate higher-dimensional quantum states with minimal additional resources using this setup, for example, by adding a single additional harmonic to the fundamental RF tone that drives the EOMs. Moreover, on-chip integration of electro-optic devices and programmable filters is a promising route to reduce the losses (currently around 8.5 dB for our quantum gate implementation) [35]. On-chip integration has been proposed, for instance, using a lithium niobate resonator driven by RF signals [36, 37]. Our work thus offers perspectives for scalable frequency-domain architectures for high-dimensional and resource-efficient quantum communications. ## Acknowledgments This work has been supported by Region Ile-de-France in the framework of DIM SIRTEQ. Lorenzo M. Procopio acknowledges the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 800306.
2310.07487
Cognate Transformer for Automated Phonological Reconstruction and Cognate Reflex Prediction
Phonological reconstruction is one of the central problems in historical linguistics, where a proto-word of an ancestral language is determined from the observed cognate words of daughter languages. Computational approaches to historical linguistics attempt to automate the task by learning models on available linguistic data. Several ideas and techniques drawn from computational biology have been successfully applied in the area of computational historical linguistics. Following these lines, we adapt MSA Transformer, a protein language model, to the problem of automated phonological reconstruction. MSA Transformer trains on multiple sequence alignments as input and is, thus, apt for application on aligned cognate words. We hence name our model Cognate Transformer. We also apply the model to another associated task, namely, cognate reflex prediction, where a reflex word in a daughter language is predicted based on cognate words from other daughter languages. We show that our model outperforms the existing models on both tasks, especially when it is pre-trained on the masked word prediction task.
V. S. D. S. Mahesh Akavarapu, Arnab Bhattacharya
2023-10-11T13:34:22Z
http://arxiv.org/abs/2310.07487v2
# Cognate Transformer for Automated Phonological Reconstruction and Cognate Reflex Prediction ###### Abstract _Phonological reconstruction_ is one of the central problems in historical linguistics, where a proto-word of an ancestral language is determined from the observed cognate words of daughter languages. Computational approaches to historical linguistics attempt to automate the task by learning models on available linguistic data. Several ideas and techniques drawn from computational biology have been successfully applied in the area of _computational historical linguistics_. Following these lines, we adapt MSA Transformer, a protein language model, to the problem of _automated phonological reconstruction_. MSA Transformer trains on multiple sequence alignments as input and is, thus, apt for application on aligned cognate words. We hence name our model _Cognate Transformer_. We also apply the model to another associated task, namely, _cognate reflex prediction_, where a reflex word in a daughter language is predicted based on cognate words from other daughter languages. We show that our model outperforms the existing models on both tasks, especially when it is pre-trained on the masked word prediction task. ## 1 Introduction _Phonological reconstruction_ of a word in an ancestral proto-language from the observed cognate words, i.e., words of supposed common origin, in the descendant languages is one of the central problems in _historical linguistics_, a discipline that studies the diachronic evolution of languages (Campbell, 2013). For example, the cognate words French _enfant_, Spanish _infantes_ and Italian _infanti_ all trace to the proto-form _infantes_ in Latin meaning 'children', which is an attested language in this case. In most cases, the proto-language is not attested and has to be reconstructed instead. The process of arriving at such a phonological reconstruction usually involves multiple steps, including gathering potential cognate words, identifying systematic sound correspondences, and finally reconstructing the proto-phonemes. This procedure is known as the 'comparative method' (Ringe and Eska, 2013), which is traditionally carried out manually. Several _automated phonological reconstruction_ algorithms have emerged in the last decade. Some of these are drawn from or inspired by computational biology, for example, Bouchard-Cote et al. (2013). In general, computational historical linguistics draws techniques such as sequence alignment and phylogenetic inference from computational biology, in addition to the techniques known from historical linguistics and computational linguistics or natural language processing (Jager, 2019). On similar lines, we adapt the MSA transformer, introduced in Rao et al. (2021) for modeling multiple sequence alignments (MSAs) of protein sequences, to the problem of phonological reconstruction, which takes as input a cognate word set in the form of an MSA. Henceforth, we name the model introduced here _Cognate Transformer_. We also apply our model to the _cognate reflex prediction_ task, where an unknown, i.e., masked, reflex in a daughter language is to be predicted based on the attested reflexes in other daughter languages (List et al., 2022). For instance, in the previous example, if we mask French _enfant_, the task would involve arriving at the word form correctly based on Spanish _infantes_ and Italian _infanti_. One can notice that this task can serve as a pre-training objective for the proto-language reconstruction task described previously.
Hence, we also pre-train the Cognate Transformer on the cognate reflex prediction task. Further, most of the existing models are fitted on a per-language-family basis, i.e., on one dataset at a time consisting of a single language family. Thus, the utility of either transfer learning or simultaneous fitting across several language families has not yet been demonstrated. This is desirable even from the linguistic perspective, since it is well known that sound changes are phonologically systematic and, thus, similar sound changes often operate across different language families (Campbell, 2013). For instance, the sound change involving palatalization of a velar consonant, say /k/ > /tʃ/, can be observed in the case of Latin _caelum_ /kailum/ to Italian _cielo_ /tʃɛːlo/, as well as in the supposed cognate pair _cold_ versus _chill_, which is a remnant of historical palatalization in Old English.1 Hence, owing to the presence of commonalities across language families in terms of sound change phenomena, training models simultaneously across multiple language families should be expected to yield better results than training on data from a single language family at a time. This is well reflected in our present work. Footnote 1: For International Phonetic Alphabet (IPA) notation, see [https://en.wikipedia.org/wiki/International_Phonetic_Alphabet](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). ### Problem Statements There are two tasks at hand as mentioned before, namely, _cognate reflex prediction_ and _proto-language reconstruction_. An input instance of the cognate reflex prediction task consists of a bunch of cognate words from one or more related languages with one language marked as unknown; the expected output would be the cognate reflex in that particular language which is marked unknown. An example from the Romance languages is: _Input:_ [French] 50njevb, [Portuguese]?, [Italian] 45i neepr0 _Output:_ [Portuguese] 5u nipiro The input for the proto-language reconstruction task consists of cognate words in the daughter languages, and the expected output is the corresponding word in the ancestral (proto-) language. We model this as a special case of the cognate reflex prediction problem where the proto-language is always marked as unknown. For instance, in the above example, Latin would be marked as unknown: _Input:_ [Latin]?, [French] 50njevb, [Portuguese] 5u nipiro _Output:_ [Latin] ju:nipiro ### Contributions Our contributions are summarized as follows. We have designed a new architecture, Cognate Transformer, and have demonstrated its efficiency when applied to two problems, namely, proto-language reconstruction and cognate reflex prediction, where it performs comparably to the existing methods. We have further demonstrated the use of pre-training in proto-language reconstruction, where the pre-trained Cognate Transformer outperforms all the existing methods. The rest of the paper is organized as follows. Existing methodologies are outlined in §2. The workflow of Cognate Transformer is elaborated in §3. Details of experimentation, including dataset information, model hyperparameters, and evaluation metrics, are mentioned in §4. Results along with discussions and error analysis are stated in §5. ## 2 Related Work Several methods to date exist for proto-language reconstruction, as mentioned previously. We mention a notable few. Bouchard-Cote et al.
(2013) employ a probabilistic model of sound change given the language family's phylogeny, which is even able to perform unsupervised reconstruction on an Austronesian dataset. Ciobanu and Dinu (2018) performed proto-word reconstruction on a Romance dataset using conditional random fields (CRF) followed by an ensemble of classifiers. Meloni et al. (2021) employ a GRU-attention-based neural machine translation (NMT) model on a Romance dataset. List et al. (2022) present datasets of several families and employ an SVM on trimmed alignments. The problem of cognate reflex prediction was part of the SIGTYP 2022 shared task (List et al., 2022), where the winning team (Kirov et al., 2022) models it as an image inpainting problem and employs a convolutional neural network (CNN). Other high-performing models include a transformer model by the same team, a support vector machine (SVM) based baseline, and a Bayesian phylogenetic-inference-based model by Jager (2022). Other previous approaches include a sequence-to-sequence LSTM with attention, i.e., a standard NMT-based approach (Lewis et al., 2020), and a mixture-of-NMT-experts-based approach (Nishimura et al., 2020). The architecture of the MSA transformer is part of the Evoformer used in AlphaFold2 (Jumper et al., 2021), a protein structure predictor. Pre-training of the MSA transformer was demonstrated by Rao et al. (2021). Handling MSAs as input by using 2D convolutions or GRUs was demonstrated by Mirabello and Wallner (2019) and Kandathil et al. (2022). ## 3 Methodology In this section, the overall workflow is described. The input phoneme sequences are first aligned (§3.1), the resulting alignments are trimmed (§3.2), and then finally passed into the MSA transformer with a token classification head (§3.3). In the training phase, the output sequence is also aligned, while in the testing phase trimming is not performed. The first two steps are the same as described in List et al. (2022) and are briefly described next. ### Multiple Sequence Alignment The phenomena of sound change in spoken languages and of genetic mutations are similar. As a result, multiple sequence alignment and the methods surrounding it are as naturally relevant here as they are in biology. The phonemes of each language in a single cognate set are aligned based on the sound classes to which they belong. An example of an alignment is given in Table 1. We use the implementation imported from the library lingpy (List and Forkel, 2021), which uses the sound-class-based phonetic alignment described in List (2012). In this algorithm, the weights in pairwise alignments following Needleman and Wunsch (1970) are defined based on the sound classes into which the phonemes fall. Multiple sequences are aligned progressively following the tree determined by UPGMA (Sokal and Michener, 1975). ### Trimming Alignments In the example given in Table 1, one can observe that during testing, the final gap (hyphen) in the input languages (i.e., excluding Latin) will not be present. Since the task is essentially a token classification, the model will not predict the final token 'm' of Latin. To avoid this, alignments are trimmed as illustrated in Table 2 for the same example. This problem is discussed in detail in (List et al., 2022), and the solution presented there has been adopted here. In particular, given the sequences to be trimmed, if in a site all tokens are gaps except in one language, then that phoneme is prepended to the following phoneme with a separator and that specific site is removed. For the last site, the lone phoneme is appended to the penultimate site.
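A minimal sketch of this trimming rule (including the final-site handling) is given below; the gap symbol and the separator "." are illustrative assumptions, and the reference implementation remains that of List et al. (2022).

```python
def trim_alignment(rows, gap="-", sep="."):
    """rows: list of aligned token lists (one per language), all of equal length."""
    out = [[] for _ in rows]
    pending = [""] * len(rows)          # phonemes waiting to be prepended to the next kept site
    n_sites = len(rows[0])
    for j in range(n_sites):
        col = [row[j] for row in rows]
        filled = [i for i, tok in enumerate(col) if tok != gap]
        if len(filled) == 1:            # all gaps except in one language: drop the site
            i = filled[0]
            if j < n_sites - 1:
                pending[i] += col[i] + sep              # prepend to the following site
            else:
                out[i][-1] = out[i][-1] + sep + col[i]  # last site: append to the penultimate one
            continue
        for i, tok in enumerate(col):
            out[i].append(pending[i] + tok)
            pending[i] = ""
    return out

aligned = [["a", "b", "-", "c"],
           ["a", "-", "d", "c"],
           ["a", "b", "-", "-"]]
print(trim_alignment(aligned))
# [['a', 'b', 'c'], ['a', '-', 'd.c'], ['a', 'b', '-']] -- the lone 'd' is merged into the next site
```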
Following (List et al., 2022), trimming is skipped for testing, as it has been observed to cause a decrease in performance. The reason for this is mentioned in (Blum and List, 2023). Briefly, gaps in the daughter languages can point to a potential phoneme in the proto-language. During training, however, they are redundant and can be trimmed since the proto-language is part of the alignment. ### MSA Transformer The MSA Transformer, proposed in (Rao et al., 2021), handles two-dimensional inputs with separate row and column attentions (each with multiple heads), in contrast with the usual attention heads found in standard transformer architectures (Vaswani et al., 2017). It uses learned positional embeddings only across rows, since a group of rows does not make up any sequential data. The outputs of row attentions and column attentions are summed up before passing into a fully connected linear layer (see Figure 1). MSA Transformer, despite its name, is not an encoder-decoder transformer but rather only an encoder like BERT (Devlin et al., 2018), except with the ability to handle 2D input (see Figure 2). ### Workflow The aligned input sequences thus trimmed are passed into the MSA Transformer as tokens. A single input instance to an MSA Transformer is a 2D array of tokens. The overall architecture of the Cognate Transformer is illustrated in Figure 2. \begin{table} \begin{tabular}{|l|c c c c c c c c c c|} \hline \hline [French] & 5 & o & n & j & e & v & - & n & - & - \\ [Italian] & d5 & i & n & - & e & p & - & r & o & - \\ [Spanish] & x & u & n & - & i & p & e & r & o & - \\ [Latin] & j & u: & n & - & i & p & e & r & o & m \\ \hline \hline \end{tabular} \end{table} Table 1: Aligned phoneme sequences Due to trimming, several phonemes can be joined together as one token. Hence, with trimming, the total number of tokens, i.e., the vocabulary size, can be above 1000 or even 2000 depending on the training dataset, while without trimming the vocabulary size would essentially be close to the total number of possible phonemes, which is only a few hundred. Meloni et al. (2021) incorporate the information regarding the language of a word through a language embedding concatenated to the character/token embedding. We instead treat _language information_ as a separate token attached to the beginning of the phoneme sequence. The use of language embeddings in transformer-based models was initially present in the multi-language model XLM (Conneau and Lample, 2019). It was, however, discontinued in later versions (Conneau et al., 2020). We have similarly decided to remove the language embedding and instead use a special token denoting the language, as it is less complex in implementation. Other special tokens used include the usual [CLS] to mark the beginning, [SEP] to mark the ending of a word, [PAD] for padding, and [MASK] to replace '?' in the unknown word (see Table 2) or the word to be predicted. Thus, the input batch, padded appropriately, is passed on to the MSA Transformer. The normal output of an MSA Transformer is a 2D array of embeddings per instance. To this, we add an additional layer that sums over columns to give a 1D array of embeddings per instance as output.
In other words, if the overall dimensions of the MSA transformer output were (batch_size \(\times\) num_languages \(\times\) msa_length \(\times\) hidden_size) then, for our case, the final dimensions after summing up along columns are (batch_size \(\times\) msa_length \(\times\) hidden_size). To this, we add a normalizer layer followed by a classifier, i.e., a linear layer followed by cross-entropy loss. This is illustrated in Figure 2. ### Pre-training The described model can support pre-training in a form similar to masked language modeling where a word from a cognate set is entirely masked but the language token remains unmasked corresponding to the language that is to be predicted. In other words, _pre-training_ is the same as training for cognate prediction task. For proto-language reconstruction, however, pre-training can be done. As a result, we pre-train Cognate Transfomer on the data of the cognate reflex prediction task. It is further _fine-tuned_ on the proto-language reconstruction task. We have used the publicly available implementa Figure 2: Cognate Transformer architecture: an input instance is passed into an MSA transformer, where the resultant embeddings are summed and normalized along columns, which are then finally passed into a classifier. tion of MSA transformer by the authors2, on top of which we added the layers required for the Cognate Transformer architecture. We have used tokenization, training, and related modules from Hugging-Face library Wolf et al. (2020). The entire code is made publicly available3. Footnote 2: [https://github.com/facebookresearch/esm](https://github.com/facebookresearch/esm) Footnote 3: [https://github.com/mahesh-ak/CognateTransformer](https://github.com/mahesh-ak/CognateTransformer) ## 4 Experimental Setup ### Datasets We use the SIGTYP 2022 dataset List et al. (2022) for the cognate reflex prediction task. It consists of two different subsets, namely, training and surprise, i.e., evaluation data from several language families. The statistics for this dataset is provided in Table 3. Surprise data was divided into different test proportions of 0.1, 0.2, 0.3, 0.4, and 0.5 for evaluation. Among these, we only report for the test proportions 0.1, 0.3, and 0.5. For the proto-language reconstruction task, the dataset provided by List et al. (2022) is used. It consists of data from 6 language families, namely, Bai, Burmish, Karen, Lalo, Purus, and Romance whose statistics are listed in Table 4. This is divided into test proportion 0.1 by List et al. (2022). We further test for proportions 0.5 and 0.8. For pre-training the Cognate Transformer for this task, we use the entire training data of both the tasks with words from proto-languages removed. ### Model Hyperparameters We have tested two variations of the proposed Cognate Transformer architecture, namely _CogTran-tiny_ and _CogTran-small_. CogTran-tiny has hidden size 128, intermediate size 256, 2 attention heads, and 2 layers with overall 1 million parameters. CogTran-small has hidden size 256, intermediate size 512, 4 attention heads, and 4 layers with overall 4.4 million parameters. Both models have a vocabulary size of about 2,300. For pre-training, only CogTran-small is used, since it consistently outperforms CogTran-tiny. 
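As a concrete illustration of the pre-training scheme described above (masking an entire word while keeping its language token visible), the following is a minimal sketch; the token conventions and the toy data are illustrative assumptions, not the actual preprocessing code.

```python
import random

def make_pretraining_instance(aligned_cognates: dict):
    """aligned_cognates maps a language name to its aligned token sequence."""
    target_lang = random.choice(list(aligned_cognates))
    rows, labels = [], None
    for lang, tokens in aligned_cognates.items():
        if lang == target_lang:
            # Mask the whole word; the language token stays unmasked so the
            # model knows which language it has to reconstruct.
            rows.append([f"[{lang}]"] + ["[MASK]"] * len(tokens))
            labels = [f"[{lang}]"] + list(tokens)
        else:
            rows.append([f"[{lang}]"] + list(tokens))
    return rows, labels

# Toy example with made-up aligned tokens:
inputs, targets = make_pretraining_instance({
    "LangA": ["p", "a", "-", "t", "e"],
    "LangB": ["p", "-", "o", "t", "e"],
})
```

Pre-training in this form is the same objective as training for cognate reflex prediction, which is why the same data pipeline can serve both stages.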
The training is carried out with 48 epochs for pre-training, with 9 epochs for finetuning in the proto-language reconstruction task, 24 epochs for non-pre-trained in the same task, and 32 epochs for cognate-reflex prediction task, using Adam optimizer with weight decay Loshchilov and Hutter (2017) as implemented in HuggingFace transformers library Wolf et al. (2020) with learning rate 1e-3 and batch size of 64. For finetuning the pre-trained model, the batch size is 48. ### Evaluation We use the metrics _average edit distance (ED)_, _average normalized edit distance (NED)_, and _B-Cubed F1 score (BC)_ following List et al. (2022) for evaluating the models. Edit distance is the well-known Levenshtein distance Levenshtein (1965), both with or without normalization by the lengths of the source and target strings being compared. B-Cubed F1 score Amigo et al. (2009) was applied to phoneme sequences by List (2019), where similarity is measured between aligned predicted and gold sequences. B-Cubed F1 score measures \begin{table} \begin{tabular}{l r r r} \hline \hline **Family** & **Lngs.** & **Words** & **Cogs.** \\ \hline Bai & 10 & 459 & 3866 \\ Burmish & 9 & 269 & 1711 \\ Karen & 11 & 365 & 3231 \\ Lalo (Yi) & 8 & 1251 & 7815 \\ Purus & 4 & 199 & 693 \\ Romance & 6 & 4147 & 18806 \\ \hline \hline & & **6690** & **36122** \\ \hline \hline \end{tabular} \end{table} Table 4: Dataset for Proto-language reconstruction task \begin{table} \begin{tabular}{l r r r} \hline \hline **Family** & **Lngs.** & **Words** & **Cogs.** \\ \hline **Training data** & & & \\ \hline Tshanglic & 8 & 2063 & 403 \\ Bai & 9 & 5773 & 969 \\ Sino-Tibetan & 7 & 1426 & 248 \\ Sui & 16 & 10139 & 1048 \\ Uto-Aztecan & 9 & 771 & 118 \\ Afro-Asiatic & 19 & 2583 & 340 \\ Dogon & 16 & 4405 & 971 \\ Japonic & 10 & 1802 & 278 \\ Indo-European & 4 & 1320 & 512 \\ Burmish & 7 & 2501 & 576 \\ \hline \hline & & **32783** & **5463** \\ \hline \hline **Surprise data** & & & \\ \hline Atlantic-Congo & 10 & 1218 & 388 \\ Hui & 19 & 9750 & 518 \\ Chapacuran & 10 & 939 & 187 \\ Western Kho-Bwa & 8 & 5214 & 915 \\ Berta & 4 & 600 & 204 \\ Palaung & 16 & 1911 & 196 \\ Burmish & 9 & 2202 & 467 \\ Indo-European & 5 & 565 & 212 \\ Karen & 8 & 2363 & 379 \\ Bai & 10 & 4356 & 658 \\ \hline \hline & & **29118** & **4124** \\ \hline \hline \end{tabular} \end{table} Table 3: Dataset for reflex prediction task the similarity in the structures and, hence, in the presence of systematic errors, carries less penalty than edit distance. As (normalized) edit distance is a distance measure, the lower the distance, the better the model. On the other hand, for B-Cubed F1 it is opposite, i.e., the higher the score, the better the model. We import the metric functions from the LingRex package (List and Forkel, 2022). ### Methods for Comparison The results of the cognate reflex prediction task are compared directly against those of the top performing model in the SIGTYP 2022 task - Kirov et al. (2022). Here, direct comparison between the models is possible since the datasets including the test divisions are the same. However, for the proto-language reconstruction task, the previous state-of-the-art model (Meloni et al., 2021) reports only on the Romance dataset with test proportion 0.12 and the baseline SVM model (List et al., 2022) with additional features such as position, prosodic structure, etc., marked as SVM+PosStr is tested only with test proportion 0.1. 
However, the code is openly provided for the SVM-based model and, hence, results were generated for other test proportions 0.5 and 0.8 as well. To compare the results of proto-language reconstruction with the NMT model given by Meloni et al. (2021) for which the code is not publicly available, we build a best-effort appropriate model identical to the one described there with 128 units Bidirectional GRU encoder followed by same sized GRU decoder followed by attention and linear layer with dimension 256 followed by a classifier. The input is encoded as a 96-dimensional embedding for each token concatenated with 32-dimensional language embedding. The training parameters are the same as previously stated in SS4.2 except that the number of epochs trained is 32 and the batch size is 16. For the Romance data part, the results obtained are ED 1.287 and NED 0.157 whereas those reported by Meloni et al. (2021) for Romance data (IPA) with almost similar test proportion (0.12) are ED 1.331 and NED 0.119. Thus, the edit distances match whereas normalized ones do not. We speculate that the NED reported by Meloni et al. (2021) could be erroneous due to possible inclusion of delimiter while calculating the length of the strings, since by (mis)considering delimiters, we obtain a similar NED 0.121 for the model we train. This can be confirmed by observing the ED-to-NED proportions of the corresponding scores obtained by the SVM-based model for the Romance dataset: ED 1.579 and NED 0.190, which we generate using the code made available by List et al. (2022). Alternatively, the disparity in NED could also be attributed to differences in the sizes of the dataset used for training. However, it is unclear how agreement in ED score could have been then possible. Due to absence of both appropriate model and data, we assume that the NMT model we have built is a good reproduction of that built by Meloni et al. (2021). All models compared in the proto-language reconstruction task are 10-fold cross-validated. ## 5 Results In this section, we present and discuss in detail the results of our Cognate Transformer and other state-of-the-art models on the two tasks. ### Cognate Reflex Prediction The results of the cognate reflex prediction task are summarized in Table 5. The edit distance (ED), normalized edit distance (NED), and B-Cubed F1 (BC) scores are provided for Cognate Transformer across the test proportions 0.1, 0.3, and 0.5 along with the best performing model of the SIGTYP 2022 (List et al., 2022) task, namely, the CNN inpainting (Kirov et al., 2022). CogTran-small consistently outperforms the previous best models across all test proportions. In particular, the difference in scores between Cognate transformer and the CNN inpainting model becomes prominent with increasing test proportion. Hence, it can be concluded here that Cognate Transformer is more robust than other models. The language family wise results for the best performing model, CogTran-small, are provided in Appendix A. ### Proto-Language Reconstruction The results of the proto-language reconstruction task are summarized in Table 6 with the same evaluation metrics along with comparisons with other previously high performing models, namely, SVM with extra features by List et al. (2022) and NMT (GRU-attention) based by Meloni et al. (2021) for the test proportions 0.1, 0.5, and 0.8. Previously, there were no comparisons between SVM-based and NMT-based models. Here, we find that the SVM-based model performs consistently better than the NMT-based model. 
In other words, the GRU-Attention-based NMT model does not appear to scale well in harder situations, i.e., for higher test proportions when compared with the other models. While CogTran-small achieves results similar to the SVM-based models, pre-training makes a difference. The pre-trained Cognate transformer outperforms all the other models in all test proportions. Although the increase in the proportion 0.1 is not much significant, paired t-test between best performing model and the next best model i.e. CogTran-small Pretrained and SVM-based yield significance of \(p<0.01\) in low-resource proportions i.e. 0.5 and 0.8. The language family wise results and standard deviations for the best performing model, CogTran-small Pretrained are provided in Appendix B. Note that SVM-based model was also part of SIGTYP 2022 (List et al., 2022) where it lags well behind CNN inpainting model. Hence, cognate transformer generalizes well across tasks hence gains from architecture are obvious. ### Error Analysis To analyze errors, we consider the pre-trained and finetuned CogTran-small on the proto-language reconstruction task for the easiest and hardest test proportions 0.1 and 0.8 over fixed data (without cross-validation). Figure 3 shows the 30 most common sound exchange errors by the models. An example of sound exchange error, u/a means either 'a' is predicted in place of 'u' or vice versa. To make this plot, we first gather the frequencies of sound exchanges for the various language families in data by comparing the aligned predicted and gold reconstructions. These frequencies are normalized for each proto-language or language family and finally combined and normalized again. Normalization at the language family level is important since few language families show more tendencies for certain types of errors than others. Since data is not equally available from all families, a language family with \begin{table} \begin{tabular}{l l c c c} \hline \hline **Test proportion** & **Method** & **ED** & **NED** & **BC** \\ \hline \multirow{4}{*}{0.1} & CogTran-tiny & 0.8081 & 0.1760 & 0.7946 \\ & CogTran-small & 0.7772 & 0.1683 & 0.7968 \\ & CogTran-small Pretrained & **0.7459** & **0.1595** & **0.8081** \\ & SVM + PosStr (List et al., 2022) & 0.7612 & 0.1633 & 0.8080 \\ & NMT GRU + Attn. (Meloni et al., 2021) & 1.0296 & 0.1909 & 0.7560 \\ \hline \multirow{4}{*}{0.5} & CogTran-tiny & 0.9013 & 0.1966 & 0.7279 \\ & CogTran-small & 0.8750 & 0.1899 & 0.7330 \\ \cline{1-1} & CogTran-small Pretrained & **0.8177** & **0.1760** & **0.7534** \\ \cline{1-1} & SVM + PosStr (List et al., 2022) & 0.8455 & 0.1839 & 0.7425 \\ \cline{1-1} & NMT GRU + Attn. (Meloni et al., 2021) & 1.2585 & 0.2362 & 0.6733 \\ \hline \multirow{4}{*}{0.8} & CogTran-tiny & 1.1043 & 0.2455 & 0.6781 \\ & CogTran-small & 1.0697 & 0.2359 & 0.6817 \\ \cline{1-1} & CogTran-small Pretrained & **0.9754** & **0.2142** & **0.7132** \\ \cline{1-1} & SVM + PosStr (List et al., 2022) & 1.0630 & 0.2391 & 0.6800 \\ \cline{1-1} & NMT GRU + Attn. (Meloni et al., 2021) & 1.8640 & 0.3546 & 0.5538 \\ \hline \hline \end{tabular} \end{table} Table 6: Proto-language reconstruction results. 
\begin{table} \begin{tabular}{c l c c c} \hline \hline **Test proportion** & **Method** & **ED** & **NED** & **BC** \\ \hline \multirow{4}{*}{0.1} & CogTran-tiny & 1.0901 & 0.2997 & 0.7521 \\ & CogTran-small & **0.8966** & **0.2421** & **0.7823** \\ & Mockingbird - Inpaint (Kirov et al., 2022) & 0.9201 & 0.2431 & 0.7673 \\ \hline \multirow{4}{*}{0.3} & CogTran-tiny & 1.3223 & 0.3497 & 0.6612 \\ & CogTran-small & **1.1235** & **0.2919** & **0.6954** \\ & Mockingbird - Inpaint (Kirov et al., 2022) & 1.1762 & 0.2899 & 0.6717 \\ \hline \multirow{4}{*}{0.5} & CogTran-tiny & 1.4521 & 0.3873 & 0.6257 \\ & CogTran-small & **1.2786** & **0.3332** & **0.6477** \\ \cline{1-1} & Mockingbird - Inpaint (Kirov et al., 2022) & 1.4170 & 0.3518 & 0.6050 \\ \hline \hline \end{tabular} \end{table} Table 5: Cognate reflex prediction results. more data influences the outcome. For example, among the datasets used for the task, the Romance dataset comprises half of them. We observe that Romance data shows more vowel-length-related errors as also observed by Meloni et al. (2021) and, thus, proportion of such errors is inflated. Hence, normalization is carried out at the language family level to prevent such biases. We normalize per family by dividing the frequency of a particular error type in a family by the total number of errors in that family. Normalized frequencies thus obtained per error type per family are combined by adding up across families and then normalized again. The most frequent sound exchange errors are plotted in Figure 3 which make up respectively, for test proportions 0.1 and 0.8, about 71% and 60% of total such errors. One can observe from the plot that the most common vowel errors are the exchange of short vowels /u/ and /i/ with a neutral vowel /a/, vowel raising-lowering, i.e., exchange of /i/ \(\sim\) /e/, /u/ \(\sim\) /o/, diphthong-monophthong exchanges /ai/ \(\sim\) /i/, tense-laxed exchanges, i.e., \(\nicefrac{{1}}{{e}}/\)\(\sim\) /e/ and \(\nicefrac{{1}}{{3}}/\)\(\sim\) /o/. Vowel length confusions, i.e., /i/i/ \(\sim\nicefrac{{1}}{{1}}/\), /e/: /e/: /a/: /a/ \(\sim\nicefrac{{1}}{{a}}/\), /o/: /o/, /u/: /\(\sim\nicefrac{{1}}{{0}}/\) also make up a significant portion. Overall, vowel/consonant length errors make up to about 10% sound exchange errors each in both cases. Among consonant errors, one can observe voiced-unvoiced or glottalized-unglottalized consonant exchanges like /p/ \(\sim\) /b/, /\(\nicefrac{{1}}{{k}}/\)\(\sim\) /g/, aspiration errors, i.e., /p\({}^{h}\)\(\sim\) /p/, /\(\nicefrac{{1}}{{h}}/\)\(\sim\) /t/, change of place of articulation like /p/ \(\sim\) /n/, /s/ \(\sim\) /h/, etc. Tone exchange errors like /i/ \(\sim\) /i/ also exist which contribute to about 10% in each of the cases. Affricatives exchange error /t/f/ \(\sim\) /ts/ appears prominently in the case of test proportion 0.1. Overall, these are the most general kinds of errors; however, exact types of errors are observed to be dependent on the language family. Hence, although most general ones are universally observed, significant differences can be expected based on the particular datasets. ### Zero-shot Attempt Previously, we discussed the results of proto-language reconstruction for various test proportions. Among these, the highest proportion considered, i.e., 0.8, can be thought of as a _few-shot_ learning case, since for some of the language families like Purus and Burmish, the number of training instances, i.e., cognate sets is less than 50. 
We next consider the pre-trained model for the same task without any finetuning; in other words, we consider the _zero-shot_ case. The scores achieved by such a model are 2.6477 ED, 0.5758 NED, and 0.5499 BC, which means that more than 40% of a word on average in generated reconstructions are correct. An example input instance and its corresponding output and gold data from the Romance dataset: _Input:_ [Latin]?, [French] p e s p i k y i t e, [Italian] p e r s p i k u i t a _Output:_ [French] p e r s p i k y i t a _Gold:_ [Latin] p e r s p i k u i t a: t.e.m Figure 3: Top-30 most common sound exchange errors out of over 400 errors for pre-trained CogTran-small on proto-language reconstruction task with test proportions 0.1 (top) and 0.8 (bottom). In the above example, the output language token is incorrect. Since the proto-languages (in this case, Latin) have been excluded entirely in pre-training, this can be expected. One can also observe that the output word completely agrees with neither Italian nor French, although the inclination is more toward the former (with a ED of 1). A similar observation was made by Meloni et al. (2021) where the network attended most to Italian since it is conservative than most other Romance languages. ### Learned Sound Changes Here, we consider the finetuned pre-trained model on the proto-language reconstruction task to observe the learned sound changes by the network in the hardest scenario, i.e., with test proportion 0.8. The following example reveals an instance where palatalization appearing in Romance languages is correctly reconstructed to a non-palatal consonant: _Input:_ [Latin]?, [French] s j \(\varepsilon\), [Spanish] o j e _Output:_ [Latin] k \(\varepsilon\) We now consider _metathesis_, a non-trivial complex sound change where positions of phonemes are interchanged. The following example is from the training set which the network learns correctly and demonstrates the metathesis _-bil_- > _-ble_-. _Input:_ [Latin]?, [French] \(\tilde{\epsilon}\) p e e s e p t i b l, [Spanish] i m p e e p: t i b l e _Output:_ [Latin] i m p e r k e p t i b l e Following is an example from the test set where the model confuses a complex metathesis pattern occurring in Hispano-Romance, _-bil_- > _-lb_-. _Input:_ [Latin]?, [Spanish] s i l p a r, [French] s y b l e, [Portuguese] s i l v a a l _Output:_ [Latin] s y b l w a: r e _Gold:_ [Latin] s i: b l a: r e Even the model finetuned on test proportion 0.1 does not get this example correct. Its output is _Output:_ [Latin] s y b l o a: r e Hence, metathesis can be seen as a hard sound change to be learned by this model. This is not surprising since metathesis or site exchange does not naturally fit into the sequence alignment approach which fundamentally only models insertions and deletions at any site. Thus, it is worthwhile to investigate more on this aspect by training the network on language families that exhibit systematic metathesis to understand its behavior. ## 6 Conclusions In this paper, we adapted MSA transformer for two phonological reconstruction tasks, namely, cognate reflex prediction and proto-language reconstruction. Our novel architecture, called Cognate Transformer, performs either comparable to or better than the previous methods across various test-train proportions consistently. Specifically, the pre-trained model outperforms the previous methods by a significant margin even at high test-train proportions, i.e., with very less trainable data reflecting a more realistic scenario. 
To the best of our knowledge, this work demonstrates the utility of transfer learning when applied to historical linguistics for the first time. In this paper, the data is in IPA representation, but this is not necessary as long as words can be aligned with properly defined sound classes in the respective orthographic representations. Thus, relaxing the IPA input constraint can increase the amount of trainable data and pre-training with more data would most likely improve the performance of not only the problem of automated phonological reconstruction but can be demonstrated in the future for an important related task, namely automated cognate word detection. Further, more standard ways of pre-training such as masking only a couple of tokens across all languages instead of a complete word of a single language can be adapted in future. ## Limitations In the task of proto-language reconstruction, it can be seen from the results (Table 6) that CogTransall i.e. the plain Cognate Transformer model without pre-training slightly underperforms the SVM-based model at low test proportions. Only the pre-trained model performs well in this scenario. Further, it has already been mentioned in SS5.5 that metathesis sound change is not being captured correctly by the network which requires further investigation. Overall, very few languages and language families are included in the data used. Thus, it is desirable to create such datasets for other languages with at least cognacy information to improve the unsupervised training firstly, which can be then employed in supervised training successfully with fewer training examples. ## Ethics Statement The data and some modules of code used in this work are obtained from publicly available sources. As stated in SS4.4, the code for the model defined by Meloni et al. (2021) was not publicly available, hence we implemented it our own. Thereby the results produced by our implementation may slightly differ from those that would be produced by the original model. Otherwise, there are no foreseen ethical concerns nor conflicts of interest.
2305.17422
Understanding Emotion Valence is a Joint Deep Learning Task
The valence analysis of speakers' utterances or written posts helps to understand the activation and variations of the emotional state throughout the conversation. More recently, the concept of Emotion Carriers (EC) has been introduced to explain the emotion felt by the speaker and its manifestations. In this work, we investigate the natural inter-dependency of valence and ECs via a multi-task learning approach. We experiment with Pre-trained Language Models (PLM) for single-task, two-step, and joint settings for the valence and EC prediction tasks. We compare and evaluate the performance of generative (GPT-2) and discriminative (BERT) architectures in each setting. We observed that providing the ground truth label of one task improves the prediction performance of the models in the other task. We further observed that the discriminative model achieves the best trade-off of valence and EC prediction tasks in the joint prediction setting. As a result, we attain a single model that performs both tasks, thus, saving computation resources at training and inference times.
Gabriel Roccabruna, Seyed Mahed Mousavi, Giuseppe Riccardi
2023-05-27T09:07:18Z
http://arxiv.org/abs/2305.17422v2
# Understanding Emotion Valence is a Joint Deep Learning Task ###### Abstract The valence analysis of speakers' utterances or written posts helps to understand the activation and variations of the emotional state throughout the conversation. More recently, the concept of Emotion Carriers (EC) has been introduced to explain the emotion felt by the speaker and its manifestations. In this work, we investigate the natural inter-dependency of valence and ECs via a multi-task learning approach. We experiment with Pre-trained Language Models (PLM) for single-task, two-step, and joint settings for the valence and EC prediction tasks. We compare and evaluate the performance of generative (GPT-2) and discriminative (BERT) architectures in each setting. We observed that providing the ground truth label of one task improves the prediction performance of the models in the other task. We further observed that the discriminative model achieves the best trade-off of valence and EC prediction tasks in the joint prediction setting. As a result, we attain a single model that performs both tasks, thus, saving computation resources at training and inference times. ## 1 Introduction Speakers express their emotions in the language in different modalities (e.g. speech and/or text) and interaction contexts (e.g. dyadic interactions or social media posts). A type of document imbued with emotions conveyed through the recollection of personal events experienced by the speaker is the personal narrative. Personal Narratives (PN) have been recently studied to promote healthier mental health by modelling the patients' life events and monitoring emotional states (Mousavi et al., 2021; Danieli et al., 2022). Monitoring the narrators' emotional states in PNs is achieved through valence analysis and the identification of related emotion carriers. Valence analysis addresses the identification of emotion levels ranging from pleasantness to unpleasantness generated by an event or a stimulus (Russell, 1980). The valence can be manifested explicitly via emotion-laden words, such as _Death_ or _Birthday_ in the PN. Besides emotion-laden words, valence can also be manifested implicitly through Emotion Carriers (EC), i.e. persons, objects or actions, that may not represent any emotion at the surface level (such as "the office" or "Wednesday"). Figure 1 shows an example of a sentence consisting of two Functional Units (FU1, FU2; the minimal span of text expressing a dialogue act (Bunt et al., 2012)) by Mousavi et al. (2022) with the emotion-laden words and the ECs in each unit. Recent studies show that ECs yield a detailed and understandable representation of the emotional state by highlighting the source of the valence such as "colleagues", "a vacation" or "a stroll along the river" (Tammewar et al., 2020; Mousavi et al., 2022). The two elements of valence and EC are inter-dependant since valence represents the intensity of the experienced emotions while the ECs are the means through which emotions are expressed and conveyed throughout the PN. Consequently, when narrators recount an event that activated their emotional state, the intensity of such emotion is manifested as valence while the expression of the emotion is through the recollection of the event and/or the participants characterising the activation of the emotional state. In this work, we explore the natural inter-dependency of valence and the related ECs in PNs. Figure 1: Example of two Functional Units (FU1, FU2) by Mousavi et al. (2022). 
In each unit, the emotion-laden words convey an explicit emotion while the emotion carriers are implicit manifestations of emotions even though they represent neutral emotion at the surface level. This inter-dependency is characterised by the relations between the presence or absence of ECs and neutral or non-neutral valence. Namely, the presence of ECs in a FU implies a non-neutral valence, while neutral valence for a FU implies the absence of ECs. Moreover, the polarity of the valence might be related to the presence of domain-specific ECs. For instance, the ECs "the office" or "boss" might appear more frequently with a negative valence as opposed to "vacation" or "children". We investigate the contribution of this inter-dependency in the prediction of the valence and the related ECs in a Multi-Task Learning (MTL) approach. MTL is to train a single model on multiple related tasks to achieve inductive transfer between the tasks, which is to leverage additional information sources while learning the current task. Inductive transfer enhances generalisation by introducing an additional source of inductive bias used by the learner to prefer a hypothesis over the other hypothesis Caruana (1998). We experiment with two MTL approaches i.e. joint and two-step prediction. While in the joint approach, the labels of the valence and EC prediction tasks are predicted simultaneously, in the two-step approach, the prediction of one label is used as the context for the prediction of the other task. We investigate whether this inter-dependency can be learned by Pre-trained Language Models (PLM). PLMs have prevailed over the other deep neural models in sentiment analysis Mao et al. (2021); Roccabruna et al. (2022), and they have been effectively used as a backbone of MTL architectures achieving state-of-the-art performance in intent classification and slot filling Chen et al. (2019); Qin et al. (2020), dialogue state tracking Hosseini-Asl et al. (2020); Su et al. (2021) and aspect-based sentiment analysis Mao et al. (2021); Jing et al. (2021). We experiment with discriminative AlBERTo (BERT) as well as generative GePpeTto (GPT-2) models. In particular, for the discriminative model, Figure 2: The joint and two-step settings applied to discriminative (AlBERTo) and generative (GePpeTto) PLMs in three sections: A) fine-tuning AlBERTo with the joint prediction; B) the two-step prediction applied to AlBERTo (the first task is valence prediction and the second task is EC prediction); C) the prompts designed to fine-tune GePpeTto for valence prediction (prompt 1), EC prediction (prompt 2), and two-step approach (prompt 3). we combine the two architectural solutions for valence and EC prediction tasks proposed by Mousavi et al. (2022). While for the generative model, we design two prompts for valence and EC prediction and one prompt for the two-step prediction setting. In these experiments, we evaluate both models for each MTL setting, where the baseline is the performance of the model fine-tuned on every task separately. Moreover, we compute the upper bound in the two-step prediction setting by replacing the first prediction with the ground truth. This upper bound also represents the level of inter-dependency between the two tasks. In this work, we use a corpus of PNs in Italian with valence and ECs annotated at the functional unit level. 
The contributions of this paper can be summarized as follows: * We study the inter-dependency of the valence and related Emotion Carriers in the corresponding prediction tasks; * We fine-tune two PLMs and experiment with multi-task learning settings for valence and Emotion Carrier predictions; * We evaluate and compare the performance of discriminative and generative models in the task of valence and Emotion Carrier prediction. ## 2 Related Works **Valence & Sentiment Analysis** The values of valence have been studied both in a continuous space (Ong et al., 2019; Kossaifi et al., 2017), and discrete space with a Likert scale (Tammewar et al., 2022; Mousavi et al., 2022), ranging from negative (unpleasant) to positive (pleasant). Using the discrete approach, valence can be assessed with different levels of granularity as the narrative level and functional unit level. A Functional Unit (FU) is the minimal span of text expressing a dialogue act (Bunt et al., 2012; Roccabruna et al., 2020). Narrative-level valence analysis provides a general yet flat perspective of the narrators' emotional state (Schuller et al., 2018), meanwhile, the sentence-level and FU-level analysis provide a detailed perspective as it highlights the variations and fluctuations of the valence throughout the narrative (Mousavi et al., 2022). A common practice in developing models for emotion analysis is to model valence analysis as sentiment analysis by mapping the valence values into three sentiment classes, i.e. _positive_, _negative_, and _neutral_(Roccabruna et al., 2022; Mousavi et al., 2022). However, valence differs from sentiment as the latter identifies the polarity of attitudes or beliefs, such as _hating_ or _liking_, towards a person (e.g. a politician) or an object (e.g. product or a movie) (Scherer et al., 2000). Meanwhile, valence represents the level of emotions in such as _anger_ or _happiness_. **Emotion Carrier and Valence:** Emotion carriers are closely related to emotional valence as they explain the valence. Mousavi et al. (2022) studied the correlation between the sequence tokens and the predicted valence (sentiment). The authors observed that the model focuses more on emotion-laden words (explicit), whereas humans identify the emotion carriers to explain the valence. **Multi-task Learning:** Multi-Task Learning (MTL) has been used for affective computing in aspect-based sentiment analysis (Schmitt et al., 2018; Mao et al., 2021; Jing et al., 2021), and emotion classification and emotion cause extraction (Turcan et al., 2021). MTL has been studied using discriminative models (BERT) for entity-relation extraction(Xue et al., 2019), as well as generative models (GPT-2) for task-oriented dialogues (Hosseini-Asl et al., 2020), dialogue state tracking (Zhao et al., 2021), and task-oriented response generation (Su et al., 2022). ## 3 Approach The inter-dependency of the valence and ECs results in the co-occurrence of both elements in the same utterance as the valence represents the level of emotion and the ECs are expressions through which the emotion is conveyed. In other words, the neutral valence of an utterance implies the absence of ECs, while the presence of ECs indicates a valence polarity for the same utterance. Furthermore, the ECs in an utterance can provide insights into the polarity of the utterance valence since certain ECs are more often associated with negative emotions (such as deadline) or positive ones (such as graduation). 
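One way to state this co-occurrence constraint programmatically, for instance as a sanity check on annotations or model outputs, is sketched below; the function and label names are illustrative and are not part of the models described later.

```python
def is_consistent(valence: str, emotion_carriers: list) -> bool:
    """The presence of any EC implies a non-neutral valence; equivalently,
    a neutral functional unit must contain no ECs."""
    if emotion_carriers:
        return valence in ("positive", "negative")
    return True

assert is_consistent("neutral", [])
assert is_consistent("negative", ["the office"])
assert not is_consistent("neutral", ["a vacation"])
```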
We investigate the natural inter-dependency of valence and ECs via the MTL approach, which allows the models to leverage additional information learned from other tasks while learning the current task, improving the generalization. We experiment with joint training, where the two labels of valence and ECs are predicted simultaneously, and two-step prediction, where one of the labels is predicted and used by the model to condition and guide the prediction of the second label. ### Dataset We use a corpus of written PNs collected and annotated by Mousavi et al. (2022). The dataset consists of 481 narratives from 45 subjects, who were employees with stress, with valence and EC annotations at the Functional Unit (FU). Out of 4273 FUs in the narratives, 40% are annotated by polarity (13% positive and 27% negative) and the related ECs, while the rest are annotated as neutral and do not contain any EC. Considering both neutral and non-neutral FUs, 18.5% of the span candidates are annotated as ECs (over 10763 span candidates). While the number of ECs considering only non-neutral FUs is 44.7% over 4452 span candidates. We use the official splits of the dataset, provided by the authors, consisting of train (80%), validation (10%) and test (10%) set, stratified on the polarity distribution. To measure the number of ECs specific for a valence polarity, we started by computing the intersection of ECs set annotated in FUs with positive and negative valence. We observed that only 4% of the ECs (14.8% from the positive and 6% from the negative sets) are present in FUs with both polarities and can convey both positive and negative emotions. That is, the majority of ECs (the remaining 94%) are indicators and carriers of only one valence polarity. Table 1 presents a representative sample of the ECs extracted from the three sets. ### Multi-Task Learning We experiment with Multi-Task Learning (MTL) approach to exploit the dependency between the valence and EC prediction tasks. We compare the performance of the models for EC and valence predictions as task-specific models, as well as two-step and joint-prediction models, and evaluate the performance. **Single-Task Prediction** In the single-task prediction, the models are trained and optimized for each task separately. This modality is a baseline to compare model performance in other MTL settings such as two-step and joint prediction. **Two-step Prediction** Inspired by Kulhanek et al. (2021) and Hosseini-Asl et al. (2020), we experiment with the two-step prediction setting. In this setting, the model predicts the discrete label for the first task as the first step, and as the second step, this prediction is concatenated to the input sequence following a prompt structure to predict the label for the second task via the same model. Afterwards, the loss values of the two tasks are summed or aggregated with a linear interpolation before back-propagating it. In this setting, we experiment with alternating the order for the two tasks (Valence \(\rightarrow\) EC, vs. EC \(\rightarrow\) Valence). The motivations behind this setting are that 1) the contribution of one task over the other task is explicit, enhancing the understanding of the interdependency between the two tasks; 2) this approach can potentially reduce the gap in the performance between two interdependent tasks by conditioning the prediction of the second task with the prediction of the first task. In this, the best-performing task is placed in the first step. 
To provide evidence of this, we experimented by replacing the first step with an oracle providing the ground truth. The two-step setting is similar to the pipeline setting Zhang and Weiss (2016) and Stack-Propagation framework Zhang and Weiss (2016); Qin et al. (2019). The two-step prediction is similar to the pipeline setting because the discrete output of a task is explicitly used in the prediction of another task, but in the pipeline setting, two different models are utilised instead of one. Moreover, the two-step prediction is close to the Stack-Propagation framework as the back-propagation of the loss updates the weights of the model used to predict the first task as well as the second task. However, the label guiding the model's prediction of the second task is not differentiable as in the Stack-Propagation. **Joint Prediction** The joint prediction setting is commonly used in MTL Cerisara et al. (2018); Jing et al. (2021) where a single model predicts labels for the different tasks simultaneously. Thus, \begin{table} \begin{tabular}{c|c|c} **Positive** & **Negative** & **Intersection** \\ \hline _“perfume”_ & “to tackle” & “work” \\ _“vacations”_ & “administration” & “home” \\ _”yoga”_ & “dentist” & “lunch” \\ _“a stroll”_ & “be late” & “today” \\ _“freedom”_ & “charged with” & “feeling” \\ \end{tabular} \end{table} Table 1: Examples of Emotion Carriers (EC) in positive and negative functional units of the dataset used (English translations). “Intersection” consists of the ECs that are present in both positive and negative FUs. the prediction of one task does not explicitly contribute to the prediction of another task. The loss is computed and back-propagated as in the two-step prediction approach. ## 4 Models We experiment with discriminative (BERT) and generative (GPT-2) models and investigate the performance of the two models for the joint and two-step prediction of valence and ECs. The joint and two-step prediction settings along with the two architectures are depicted in Figure 2. All the hyperparameters and model settings are reported in Appendix 7 to achieve the reproducibility of the results. ### Prediction tasks In the discriminative model, we formalize the valence and Emotion Carrier prediction tasks as text classification tasks by following the formalization of Mousavi et al. (2022). The valence prediction task is formally defined as calculating the probability for a given functional unit as \(p(valY_{i}|FU_{i})\), where label \(valY_{i}\in\{positive,negative,neutral\}\) and \(FU_{i}=\{w_{1},w_{2},..,w_{n}\}\) as a sequence \(i\) tokens \(w\). Meanwhile, the EC prediction task is to predict for each EC candidate span, which is an automatically extracted verb or noun chunk, in a FU if it is an EC or not. That is \(p(ecY_{j}|EC\text{-}candidate_{j},FU_{i})\) where the \(FU\) provides context information for the prediction, \(EC\text{-}candidate_{j}=\{w_{k},..,w_{l}\}\in FU\) and \(ecY_{j}\in\{yes,no\}\). In the generative model, inspired by Hosseini-Asl et al. (2020), we model the two tasks as causal language modelling tasks, in which the model is tasked to learn the joint probability over a sequence. For valence prediction, the sequence used to train the model is formally defined as \(x_{i}=[FU_{i};valY_{i}]\), i.e. the concatenation of \(FU_{i}\) and \(valY_{i}\), where the functional unit \(FU_{i}\) is the context for the model in the prediction of \(valY_{i}\in\{positive,negative,neutral\}\). 
While the training sequence for the EC prediction task is \(x_{i}=[FU_{i};EC\text{-}candidate_{i};ecY_{i}]\) where \(FU_{i}\) is the functional, \(EC\text{-}candidate_{i}\) is the complete list of the EC-candidate spans of \(FU_{i}\), and \(ecY_{i}\) is the list of the EC decision labels, i.e. \(\{yes,no\}\), corresponding to the list EC-candidate span. In this sequence, both \(FU_{i}\) and \(EC\text{-}candidate_{i}\) is the context for the model in the prediction of \(ecY_{i}\). ### Discriminative Discriminative models based on PLMs have been effectively used for text classification tasks Lei et al. (2019), however, such models may need additional architectural components, such as conditional random fields and/or additional feed-forward layers Shang et al. (2021), to tackle a specific task. _Architecture_ Our discriminative model is based on the same architectural components for valence and EC predictions proposed by Mousavi et al. (2022). This architecture is composed of a PLM and a set of feed-forward layers used to make the prediction. The PLM is based on AlBERTo which is BERT-based with 110M parameters pre-trained on a corpus in the Italian language collected from Twitter Polignano et al. (2019). The PLM takes as input a FU with special tokens [CLS] and [SEP], added at the head and the tail of the FU, and returns a sequence of hidden states. The valence is predicted from the hidden state of the [CLS] token by first applying a feed-forward layer with softmax to compute the probabilities over the classes of the valence (_positive_, _negative_ and _neutral_). For the EC prediction, an EC-candidate span is represented by a set of hidden states corresponding to the tokens of the span. Furthermore, the hidden state of the [CLS] token is concatenated to the hidden states of the EC span to give context information contained in the FU. These hidden states are passed through a max-pooling layer, to get the vector representation of the EC candidate, and a feed-forward layer with softmax to yield the prediction on the two classes (_yes_ and _no_). _Joint_ Regarding the join prediction setting, as depicted in Figure 2 part _A_), the valence and ECs are predicted in one step. In this, the shared part of the model between the two tasks is the PLM AlBERTo only. _Two-step_ The model in the two-step prediction setting, Figure 2 part _B_), has the same shared parts of the joint model, but the prediction of valence and ECs are done in two steps. The prediction of the first task is computed on a FU, while the prediction for the second task is computed on the concatenation of the FU with the label predicted in the first task. The prompt that concatenates the prediction when the first task is valence prediction is: \[FU_{i}=\{w_{1},w_{2},..,w_{n}\}\] \[FU_{i}\oplus\text{valence:}\oplus Prediction_{i}\] where \(i\) is a functional unit of the dataset, \(\oplus\) is the concatenation by white space, _valence_: is plain text, \(Prediction_{i}=(0|1|2)\), and \(\{0,1,2\}\) are the labels _negative_, _positive_ and _neutral_ respectively. When the first task is EC prediction the prompt is \[EC_{j}=\{w_{k},..,w_{l}\}\in FU_{i}\] \[FU_{i}\oplus\textit{EC:}\oplus\{EC_{1},..,EC_{N}\}\] where _EC_: is plain text, and \(\{EC_{1},..,EC_{N}\}\) is the list of EC spans of the detected ECs in the \(FU_{i}\). 
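A minimal sketch of how these two-step inputs can be assembled as strings, following the whitespace-concatenation (⊕) notation above; the exact formatting is an illustrative assumption rather than the authors' code.

```python
VALENCE_TO_ID = {"negative": 0, "positive": 1, "neutral": 2}

def input_with_valence(fu: str, predicted_valence: str) -> str:
    # FU ⊕ "valence:" ⊕ prediction, where the prediction is one of {0, 1, 2}
    return f"{fu} valence: {VALENCE_TO_ID[predicted_valence]}"

def input_with_ecs(fu: str, detected_ec_spans: list) -> str:
    # FU ⊕ "EC:" ⊕ list of EC spans detected in the first step
    return f"{fu} EC: {' '.join(detected_ec_spans)}"

# Example with an English gloss of a functional unit:
second_step = input_with_valence("I finally took a stroll along the river", "positive")
# -> "I finally took a stroll along the river valence: 1"
```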
Furthermore, to reduce the training time and stabilise the performance, we experiment with the teacher forcing technique (Lamb et al., 2016) that substitutes the prediction of the first task with the ground truth with a certain probability (to be selected as a hyperparameter). _Loss function_ In both joint and two-step prediction settings, the loss function is the cross entropy and the loss values of the two tasks are combined with a linear interpolation: \[loss_{total}=\lambda*(loss_{valence})+(1-\lambda)*loss_{EC}\] where \(\lambda\) is a hyperparameter with a range from 0 to 1. ### Generative We used GePpeTto (De Mattei et al., 2020) an auto-regressive model based on GPT-2 architecture which is pre-trained for the Italian language with 117M parameters. For valence, EC and two-step predictions, we have designed three prompts by following the formalization of the two tasks. _Prompt design_ The prompt for valence prediction is composed of two segments, where the first segment is the \(FU_{i}\) and the second segment is the valence label \(valY_{i}\) to predict preceded by a special token. This prompt is depicted in Figure 2 part _C) prompt 1_, where \(Target=(0|1|2)\) indicating _negative_, _positive_ and _neutral_ respectively. The prompt used for EC prediction is organised into three segments: a) \(FU_{i}\); b) \(EC\)-\(candidate_{i}\) spans separated by a special token and; c) the list of labels corresponding to each EC candidate \(ecY_{i}\) separated by another different special token. This prompt is shown in Figure 2 part _C) prompt 2_ where \(Target=(y|n)\). A difference with the discriminative model is that in EC prediction the predicted label of one EC candidate is used as context to predict the next EC candidates due to the fact that the model is auto-regressive. _Joint_ In the joint prediction setting, we fine-tune a single model on valence and EC predictions using the two corresponding distinct prompts appearing in the same training batch. Thus, the prediction of one task does not occur in the context of the other task. _Two-step_ For the two-step prediction, we designed a specific prompt by combining the prompts for valence and EC predictions, which is composed of the 4 segments: a) \(FU_{i}\), b) \(EC\)-\(candidate_{i}\) spans, c) valence \(valY_{i}\) and d) ECs \(ecY_{i}\) targets, Figure 2 part _C) prompt 3_. The first two segments are the \(FU_{i}\) and \(EC\)-\(candidate_{i}\). The other two segments are the target labels of the valence and EC predictions tasks. Thus, in the first step, the model predicts, based on \(FU_{i}\) and \(EC\)-\(candidate_{i}\), the labels of the first task that are used as context in the second step to predict the other task. Moreover, alternating the order of the last two segments results in two prediction settings Valence \(\rightarrow\) EC and EC \(\rightarrow\) Valence. _Generation strategy_ At inference time, the generation of the target is guided by forcing the special tokens, i.e. they are not predicted by the model, into the generated sequence and limiting the possible output labels by considering the probabilities of the tokens in our searching space i.e. \(\{0,1,2\}\) for valence prediction and \(\{y,n\}\) for the EC prediction. Moreover, for the EC prediction task, we force the same number of special tokens of the EC candidates to get one output label for each EC candidate, relieving the model from the complexity of counting the EC candidates. _Loss function_ The generative model is trained as a language model, i.e. 
the model is tasked to predict the next most probable word given a sequence of words. In this, the loss function is the cross entropy with the objective of minimizing the perplexity on the training set. ## 5 Experiments Table 2 presents the macro F1-scores achieved by the AlBERTo and GePpeTto models with the single-task, two-step and joint prediction settings. The results on valence prediction achieved via the single-task discriminative model are on par with those reported by Mousavi et al. (2022), while the results achieved on EC prediction are not comparable with Mousavi et al. (2022), since our training set consists of all FUs, whereas the authors train their models only on the FUs with a valence polarity. Regarding the single-task setting, we observe that AlBERTo outperforms GePpeTto for EC prediction, while GePpeTto outperforms AlBERTo on the valence prediction task. Regarding the two-step prediction setting \(\text{Val}\rightarrow\text{EC}\), we observe a slight worsening in the performance of EC prediction for the AlBERTo model and a drop in the valence prediction score for the generative model compared to the single-task setting. Nevertheless, domain adaptation improves the performance to achieve close results on valence prediction and boosts the performance on EC prediction. Domain adaptation is performed by initially fine-tuning only on the first task and further fine-tuning on both tasks with the two-step approach. Regarding the reverse order of predictions, i.e. \(\text{EC}\rightarrow\text{Val}\)., the models do not manage to outperform the single-task alternatives on valence prediction, with the degradation being more significant for the generative model. On the contrary, the models exhibit a better performance for EC prediction in the two-step setting compared to single-task models, where GePpeTto with domain adaptation achieves its best performance across all settings. Additionally, we have computed the upper bound for the two-step prediction by substituting the prediction of the first step with the corresponding ground truth and fine-tuning the model only on the second step. The results show a solid contribution of one task in predicting the other task by outperforming the models in all the other prediction settings. Furthermore, GePpeTto and AlBERTo achieve the highest performance in valence prediction and EC prediction, respectively. The results of the discriminative model in the joint prediction setting are on par with and better than the other two settings for valence and EC predictions, respectively. The performance of the generative model is worse than in the two-step prediction for both tasks, but slightly better than the single-task setting in EC prediction. In particular, AlBERTo achieves the highest macro F1-score on the EC prediction task compared to the other settings and the generative model. ## 6 Discussion The inter-dependency between the valence prediction and EC prediction tasks is quantified in terms of performance by the upper bound computed by substituting the first-step prediction with the ground truth. We observe that the performance of both tasks is enhanced. Furthermore, we observe that the proposed MTL approaches, i.e. two-step and joint prediction, are effective in exploiting such inter-dependency. In particular, the two-step prediction boosts the performance of the generative model, while joint prediction improves the performance of the discriminative model. Moreover, we observe that these improvements affect mainly the EC prediction task.
This is because, compared to valence prediction, the EC prediction task is objectively more challenging for the models due to the unbalanced distribution and the sparsity of the ECs (some ECs are personal w.r.t the narrator). Thus, the predictions of the EC tasks are too noisy to be exploited by the models to improve the performance of valence prediction. Indeed, the worst performance for valence prediction is achieved by both models in the two-step setting in which the first task is the EC prediction (i.e. \(\text{EC}\rightarrow\text{Val}\).). Regarding the comparison between discrimina \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{6}{c}{**Valence Prediction**} \\ \hline \hline **Model** & **Single Task** & \multicolumn{3}{c}{**Two-Step**} & **Joint** \\ \cline{2-6} & & Val. \(\rightarrow\text{EC}\) & EC \(\rightarrow\text{Val}\). & w. ground truth & \\ \hline AlBERTo & 76.0 & 76.0 & 75.7 & 81.2 & 76.0 \\ \hline GePpeTto & 77.1 & 74.7 & 65.1 & 86.5 & 75.6 \\ + domain adapt. & - & 77.0 & 70.6 & - & - \\ \hline \hline \multicolumn{6}{c}{**EC Prediction**} \\ \hline \hline AlBERTo & 63.7 & 63.4 & 64.8 & 74.9 & 65.0 \\ \hline GePpeTto & 57.8 & 58.3 & 58.2 & 65.4 & 59.5 \\ + domain adapt. & - & 59.5 & 60.7 & - & - \\ \hline \hline \end{tabular} \end{table} Table 2: The macro-F1 scores (average over 10 runs) in percentage of the Valence Prediction and Emotion Carrier (EC) Prediction tasks. The scores are achieved using discriminative (AlBERTo) and generative (GePpeTto) PLMs with single-task, two-step and joint prediction settings. Single task and two-step with (w.) ground truth are respectively the baseline and the upper-bound for the joint and the two-step settings. tive and generative models, the best trade-off between valence and EC prediction tasks is achieved by AlBERTo, although GePpeTto fine-tuned with the single-task setting achieves the best performance on valence prediction. ## 7 Conclusions In this work, we studied the inter-dependency between valence and ECs in personal narratives. For valence and EC prediction tasks, we compared task-specific models with two MTL settings, namely joint and two-step prediction. We experimented with discriminative and generative PLMs. The results indicate that PLMs fine-tuned with MTL settings achieve improved performance by exploiting the inter-dependency between valence and EC prediction tasks. In particular, the two-step setting is more effective for the generative model, while the joint setting best fits the discriminative model. Furthermore, the generative model outperforms the discriminative model on the valence prediction task, while the discriminative model achieves better results on EC prediction and the best trade-off between the valence and EC prediction tasks. Consecutively, one discriminative model performs the two tasks, reducing the demand for computational resources at training and inference time and, therefore, lowering carbon emissions in the environment. ## Limitations The dataset used in this work is in Italian and the PLMs are pre-trained for the Italian language. The performance of the models and the results may be influenced by language-specific properties.
2304.00277
Energy Consumption Optimization in Radio Access Networks (ECO-RAN)
In recent years, mobile network operators are showing interest in reducing energy consumption. Toward this goal, in cooperation with the Danish company 2Operate we have developed a stochastic simulation environment for mobile networks. Our simulator interacts with historical data from 2Operate and allows us to turn network cells on and off, replay traffic loads, etc. We have developed an optimization tool which is based on stochastic and distributed controllers computed by Uppaal. We have conducted experiments in our simulation tool. Experiments show that there is a potential to save up to 10% of energy. We observe that for larger networks, there exists a larger potential for saving energy. Our simulator and Uppaal controllers have been constructed in accordance with the 2Operate data and infrastructure. However, a main difference is that current equipment does not support updating schedulers on an hourly basis. Nevertheless, new equipment, e.g. new Huawei equipment, does support changing schedulers on an hourly basis. Therefore, integrating our solution into the production server of 2Operate is possible. However, rigorous testing in the production system is required.
Anders Mariegaard, Kim G. Larsen, Marco Muniz, Thomas Dyhre Nielsen
2023-04-01T09:46:11Z
http://arxiv.org/abs/2304.00277v1
# Energy Consumption Optimization in Radio Access Networks (ECO-RAN)+ ###### Abstract In recent years, mobile network operators are showing interest in reducing energy consumption. Toward this goal, in cooperation with the Danish company 2Operate we have developed a stochastic simulation environment for mobile networks. Our simulator interacts with historical data from 2Operate and allows us to turn network cells on and off, replay traffic loads, etc. We have developed an optimization tool which is based on stochastic and distributed controllers computed by Uppaal. We have conducted experiments in our simulation tool. Experiments show that there is a potential to save up to 10% of energy. We observe that for larger networks, there exists a larger potential for saving energy. Our simulator and Uppaal controllers have been constructed in accordance with the 2Operate data and infrastructure. However, a main difference is that current equipment does not support updating schedulers on an hourly basis. Nevertheless, new equipment, e.g. new Huawei equipment, does support changing schedulers on an hourly basis. Therefore, integrating our solution into the production server of 2Operate is possible. However, rigorous testing in the production system is required. ## 1 Introduction In accordance with the enormous expansion of mobile networks in Denmark and the rest of the world, the number of mobile masts providing coverage has exploded, and with the upcoming expansion of 5G, there will be even more mobile radio devices that require power. In recent years, it has been in the interest of the mobile operators to bring the power consumption down, and the first steps have already been taken. These measures are based on semi-automatic procedures and on strong assumptions, e.g. that everyone follows the same patterns. A more fully automated approach to the problem, based on artificial intelligence, is desirable and expected to be able to further reduce power consumption. Furthermore, in connection with the sales activities, both in and outside Europe, it has been made clear that the mobile operators are increasingly concerned about mobile network power consumption, now and especially in the future. The background for this is that electricity consumption in the mobile network will increase significantly with the introduction of 5G and several 4G frequency layers. There is therefore both a considerable financial gain from minimizing power consumption and a growing interest in contributing to the Danish climate action, where the goal is for Denmark to be energy-renewable in 2050. There are thus already good market leadership advantages for companies that can demonstrate that they are actively making an effort to achieve this goal. Together with one of our partners, 2Operate, we carried out feasibility studies which show that some Nokia-specific functions in Nokia's operating system can force the radio units to switch off at certain times, e.g. at 01:00-06:00. That is, it is possible to synthesize and implement schedulers which power cells on and off. Conservative calculations show that the savings potential will be EUR 100 - EUR 300 annually per mobile mast per company. In Denmark, the four mobile operators together have approx. 10,000 locations with their equipment. This gives a total savings potential of 1 - 3 million euros annually, or up to 50 million kWh. Worldwide, this potential will be many times higher.
## 2 Preliminaries ### Mobile Networks In this work we consider some geographical area in which there is a number of _base stations_. Base stations have a number of _cells_ and every _cell_ operates on some _frequency_. The geographical area is discretized using _pixels_. Cells provide coverage to pixels and each pixel has a traffic demand. Figure 1 shows a map with the building of the Department of Computer Science at Aalborg University. Base stations are in pink, every base station contains some cells, and pixels correspond to the grid elements. Figure 1: Base station with three sectors. Sector 1 consists of one 800 cell and one 1800 cell. #### 2.1.1 Frequency Layers Each base station usually consists of a number of cells broadcasting on different frequencies. Lower frequencies are for coverage while higher frequencies are for capacity. For 4G, the 800 MHz frequency layer is considered the coverage layer and must not be turned off in the current setup. There is room for optimization at the higher frequency layers as the needed capacity fluctuates a lot during a typical day. The 4G (LTE) frequencies are: E: 800 MHz, V: 900 MHz, T: 1800 MHz, A: 2100 MHz, L: 2600 MHz. #### 2.1.2 Power Saving The goal is to shut down capacity layers when they are not needed, e.g. during the night. A current constraint of the system is to maintain coverage for all pixels. In the current system, to ensure this constraint the 800 MHz layer cannot be shut down. #### 2.1.3 Historical Data The company 2Solve has relevant historical data, e.g. the traffic demands for every base station. There is also information about the signal strength for every pixel and for every frequency layer. Our simulations will be based on this existing historical data. ## 3 Computing Near Optimal Strategies ### Optimal Controlling The main mathematical formalism for our modeling and subsequent optimization is a stochastic hybrid game. For details we refer the reader to [4]. At a high level the game is between a controller and the environment. In our concrete scenario the stochastic game \(\mathcal{G}\) corresponds to a communication network in which the environment consists of a number of cells (\(numCells\)), pixels (\(numPixels\)) and a traffic demand per pixel. The controller consists of modes ON or OFF for every cell. Given a time horizon \(H\), e.g. of one day, a _control strategy_ \(\sigma^{H}\) for horizon \(H\) determines if a given cell is ON or OFF. The stochasticity of the system comes from the traffic demands, which can be represented as probability distributions over the pixels. Note that given a stochastic hybrid game \(\mathcal{G}\) and a control strategy \(\sigma^{H}\), the game under the strategy \(\mathcal{G}_{numCells}\upharpoonright\sigma^{H}\) is a stochastic process which induces a measure on the possible executions of the system. Definition 1 (Optimal Controlling): Given a stochastic hybrid game \(\mathcal{G}_{numCells}\), synthesize a strategy \(\sigma^{H}\) which minimizes the expected reward \[\sigma^{H}=\mathsf{argmin}_{\sigma}\mathbb{E}_{\sigma,H}^{\mathcal{G}_{numCells}}(\mathsf{reward})\] where reward accumulates the energy usage and a penalty for the lack of coverage \[\mathsf{reward}=\int_{0}^{H}\mathsf{penalty}(t)+\mathsf{energy}(t)\ dt\] 
_with_ \[\textsf{penalty}(t) =\sum_{i=1}^{numPixels}\textsf{penalty}(t,i)\] \[\textsf{penalty}(t,i) =\begin{cases}0&\text{if }\textsf{contribution}(t,i)-\textsf{demand}(t,i)>0\\ 1000&\text{otherwise}\end{cases}\] _and_ \[\textsf{energy}(t) =\sum_{i=1}^{numCells}\textsf{energy}(t,i)\] \[\textsf{energy}(t,i) =\begin{cases}0&\text{if cell $i$ is off}\\ \text{cell power}+\text{cost per mb}&\text{otherwise}\end{cases}\] _\(\textsf{contribution}(t,i)\) indicates the cell's contribution to pixel \(i\) at time \(t\); similarly, \(\textsf{demand}(t,i)\) indicates the demand for pixel \(i\) at time \(t\)._ In this project we aim to control real-world communication networks with hundreds of cells and millions of pixels. Therefore computing the strategy \(\sigma^{H}\) is intractable. Instead we will compute near-optimal strategies using diverse techniques. ### Online Strategy Synthesis For this case study our goal is to compute a strategy (controller) \(\sigma^{H}\) to minimize energy consumption for a long horizon \(H\). As the number of choices for the controller grows exponentially in the horizon, computing the strategy for a long horizon \(H\) is intractable. To overcome this problem we resort to the _Online Strategy Synthesis_ [4] methodology, where we periodically compute an online strategy \(\sigma^{h}\) for a short horizon \(h<H\). By composing the online strategies \(\sigma^{h}\) we can control the system for the horizon \(H\). The composed strategy is less optimal than the optimal strategy \(\sigma^{H}\), but it can be computed effectively. Figure 2 shows the online strategy synthesis approach for \(n\) _cells_, a horizon \(H\) of 1 day, a control decision every 60 min, and a short horizon \(h\) of 180 minutes. For \(n\) _cells_, the offline controller \(\sigma^{H}\) and the online controller \(\sigma^{h}\) have \(2^{720n}\) vs. \(2^{3n}\) decisions, respectively. Thus computing a near-optimal online controller \(\sigma^{h}\) is clearly more applicable. The methodology has successfully been applied to multiple case studies involving cyber-physical systems such as intelligent traffic lights [3], floor heating systems [4], rerouting [2], etc. ### Distributed Online Synthesis In this project we aim to control large scenarios with hundreds of cells and millions of pixels. Therefore, directly applying online strategy synthesis is not scalable. To overcome this difficulty, we apply Distributed Online Synthesis as in [4]. Given a geographical area with hundreds of cells, we partition it into sub-areas which contain at most 8 cells. Then we can compute an online strategy for every partition and merge the strategies to control the full network. ### Methodology The real world consists of a number of base stations, cells, pixels, frequency layers, etc., where the goal is to provide a _controller_ that switches cells ON or OFF to save energy. There exist a number of tools which can be used to simulate the behavior of mobile networks. Figure 3 illustrates our methodology. The real world (or a simulation) requires a controller or _strategy_ \(\sigma^{H}\) for minimizing energy consumption over a long horizon \(H\), e.g. 3 months. Since the number of choices for the controller grows exponentially in the horizon \(H\), computing a "global" strategy \(\sigma^{H}\) is not possible. Instead we periodically monitor the system and compute a near-optimal strategy \(\sigma^{h}\) for a short horizon, e.g. 3 hours. In this work we will use Uppaal Stratego to compute the online strategies \(\sigma^{h}\). Figure 3: System Architecture. Figure 2: Online Strategy Synthesis for \(n\) _cells_, a horizon \(H\) of 1 day and controlling every 60 min. Short horizon \(h\) of 180 minutes. 
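To make the objective in Definition 1 concrete, the sketch below shows one way the reward could be evaluated over a discretized horizon. This is our own illustration, not the 2Operate simulator or the Uppaal model: the penalty constant 1000 and the "cell power + cost per mb" shape come from the definitions above, while the data-access callables, the traffic-dependent energy term, and the fixed ON/OFF assignment over the horizon are simplifying assumptions.

```python
from typing import Callable, Dict, List

PENALTY = 1000.0  # penalty for an uncovered pixel, as in Definition 1

def penalty_at(t: float, pixels: List[int],
               contribution: Callable[[float, int], float],
               demand: Callable[[float, int], float]) -> float:
    """Sum of per-pixel penalties: 0 if the coverage contribution exceeds the demand, 1000 otherwise."""
    return sum(0.0 if contribution(t, i) - demand(t, i) > 0 else PENALTY for i in pixels)

def energy_at(t: float, cells: List[int], is_on: Dict[int, bool],
              cell_power: Dict[int, float],
              traffic_mb: Callable[[float, int], float],
              cost_per_mb: float) -> float:
    """Sum of per-cell energy terms: 0 for a cell that is off, otherwise base power plus a traffic cost."""
    return sum(0.0 if not is_on[c]
               else cell_power[c] + cost_per_mb * traffic_mb(t, c)
               for c in cells)

def reward(horizon_min: float, step_min: float, pixels: List[int], cells: List[int],
           is_on: Dict[int, bool], cell_power: Dict[int, float],
           contribution: Callable[[float, int], float],
           demand: Callable[[float, int], float],
           traffic_mb: Callable[[float, int], float],
           cost_per_mb: float) -> float:
    """Riemann-sum approximation of the integral of penalty(t) + energy(t) over [0, H]."""
    total, t = 0.0, 0.0
    while t < horizon_min:
        total += (penalty_at(t, pixels, contribution, demand)
                  + energy_at(t, cells, is_on, cell_power, traffic_mb, cost_per_mb)) * step_min
        t += step_min
    return total
```

In the actual setup the ON/OFF assignment would be revised by the controller every 60 minutes rather than held fixed, and the contribution and demand values would be replayed from historical data.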
## 4 Experimental Evaluation ### Simulation Tool We use a simulation tool written in Python to replay and simulate historical data. From the historical data we can observe the coverage contributions of every cell to every pixel. Then we can use this information to reproduce the effects of turning a given cell ON or OFF. In this way we can compute the values required by Definition 1. The traffic demand is based on historical data with the additional assumption that the demand is uniformly distributed over all pixels affected by the cell. Figure 4 shows the overall architecture of the network simulator. Figure 4: Simulator architecture. ### Uppaal Stratego Controller The main contribution of our work is to synthesize a near-optimal strategy in accordance with Definition 1. Toward this goal we use the tool Uppaal Stratego [1]. The tool is developed at Aalborg University and is used to facilitate the generation and optimization of strategies for abstract games with stochastic and real-time aspects. The tool uses simulation-based statistical machine learning methods. Figure 5 shows the Uppaal Stratego model for a stochastic hybrid game \(\mathcal{G}_{numCells}\). Solid arrows correspond to controllable actions whereas dashed arrows correspond to environment actions. At every simulation step and for every cell, the controller has the choice to set the given cell ON or OFF, indicated by the command actions[cellId]=1 or actions[cellId]=0. Once the actions on cells have been chosen, the environment executes its actions; this is done by calling an external C library with the command do_sim_step(stepSize,eps). This function returns a real value which is then accumulated in the variable reward. These steps are executed until the short time horizon \(h\) has been reached. Uppaal Stratego will perform a number of simulations and use machine learning techniques to find the controllable actions which optimize the variable reward. Once the learning is complete, Uppaal Stratego returns the near-optimal strategy, which is then implemented in the simulator (or the real world). ### Simulation Scenarios and Controllers As a proof of concept we have chosen to perform a simulation of 1 day in the following two geographical locations in Aalborg, Denmark: * City Syd with 39 cells and 2687 pixels * Norre Tranders with 107 cells and 6138 pixels In our experiments we have used the following controllers: * _ALLON_ all cells always ON * Uppaal Stratego as described in Section 4.2 Table 1 shows the results of the different controllers in the different scenarios. The columns energy, penalty and reward correspond to Definition 1. The values in the energy column are computed using a linear function on historical data and a constant for the cost per megabit. Figure 5: Uppaal Stratego Controller. We observe that while having no penalty, the Uppaal Stratego controller is able to save about 10% energy on a single day. Concerning the Uppaal Stratego controller, the computation time for strategies is about 22 and 31 hours for City Syd and Frydendal - Norre Tranders on 16 cores. This means that, given sufficient hardware resources, the scenarios could be controlled in real time. This is because using Online Strategy Synthesis (c.f. 
Section 3.2) will give a time window of up to 60 minutes to compute the next near-optimal strategy. ## 5 Conclusion Large mobile networks can profit from energy savings. This can be achieved by computing schedulers which turn cells off or on while maintaining some optimality criteria. In this work we have modeled a given portion of the mobile network as a stochastic game, applied different methodologies and finally used the tool Uppaal Stratego to synthesize near-optimal strategies which minimize energy consumption while maintaining coverage. We have implemented a simulator which replays historical data. We have performed simulations for two large geographical areas in Aalborg, Denmark. Our initial results are encouraging, showing energy savings of up to 10% and demonstrating the scalability of our approach. Future Work Currently we have distributed controllers which do not communicate with each other. It would be interesting to study cooperative distributed controllers in this context. Our traffic demand model is quite abstract and could be refined if more data becomes available. In particular, a forecasting model for the traffic development could be of interest. In the same manner, our optimization function is quite simple; one could consider optimizing different KPIs.
2304.07518
Uniqueness of solution to boundary value problems for time-fractional wave equations
We consider an initial boundary value problem in a bounded domain $\Omega$ over a time interval $(0, T)$ for a time-fractional wave equation where the order of the fractional time derivative is between $1$ and $2$ and the spatial elliptic operator has time-independent coefficients and is not necessarily symmetric. We prove that if for arbitrarily chosen subdomain $\omega\subset \Omega$ and $T>0$, a solution to the problem vanishes in $\omega \times (0,T)$, then $u=0$ in $\Omega \times (0, T)$. The uniqueness does not require any geometric condition on $\omega$.
Paola Loreti, Daniela Sforza, Masahiro Yamamoto
2023-04-15T10:03:55Z
http://arxiv.org/abs/2304.07518v1
# Uniqueness of solution to boundary value problems for time-fractional wave equations ###### Abstract. We consider an initial boundary value problem in a bounded domain \(\Omega\) over a time interval \((0,T)\) for a time-fractional wave equation where the order of the fractional time derivative is between \(1\) and \(2\) and the spatial elliptic operator has time-independent coefficients and is not necessarily symmetric. We prove that if for arbitrarily chosen subdomain \(\omega\subset\Omega\) and \(T>0\), a solution to the problem vanishes in \(\omega\times(0,T)\), then \(u=0\) in \(\Omega\times(0,T)\). The uniqueness does not require any geometric condition on \(\omega\). **Key words.** fractional wave equation, uniqueness, inverse source problem, uniqueness **AMS subject classifications.** 35R30, 35R11 \({}^{1}\) Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sapienza Universita di Roma, Via Antonio Scarpa 16, 00161, Roma, Italy e-mail: [email protected] \({}^{2}\) Graduate School of Mathematical Sciences, The University of Tokyo, Komaba, Meguro, Tokyo 153-8914, Japan \({}^{3}\) Honorary Member of Academy of Romanian Scientists, Ilfov, nr. 3, Bucuresti, Romania \({}^{4}\) Correspondence member of Accademia Peloritana dei Pericolanti, Palazzo Universita, Piazza S. Pugliatti 1 98122 Messina Italy e-mail: [email protected] Here we assume that \(a_{ij}=a_{ji}\in C^{1}(\overline{\Omega})\), \(b_{j},c\in C(\overline{\Omega})\) for \(1\leq i,j\leq d\) and \(-A\) is uniformly elliptic. By \(H^{\alpha}(0,T)\), we denote the Sobolev-Slobodecki space (e.g., Adams [1]). Next, for \(1<\alpha<2\), we define a Banach space \(H_{\alpha}(0,T)\) as follows: \[H_{\alpha}(0,T):=\begin{cases}&\left\{v\in H^{\alpha}(0,T);\,v(0)=0\right\}, \quad 1<\alpha<\frac{3}{2},\\ &\left\{v\in H^{\frac{3}{2}}(0,T);\,\int_{0}^{T}t^{-1}\left|\frac{dv}{dt}(t) \right|^{2}dt<\infty\right\},\quad\alpha=\frac{3}{2},\\ &\left\{v\in H^{\alpha}(0,T);\,v(0)=\frac{dv}{dt}(0)=0\right\},\quad\frac{3}{ 2}<\alpha<2\end{cases}\] with the norms \(\|v\|_{H_{\alpha}(0,T)}=\|v\|_{H^{\alpha}(0,T)}\) for \(\alpha\neq\frac{3}{2}\) and \(\|v\|_{H^{\frac{3}{2}}(0,T)}=\left(\|v\|_{H^{\frac{3}{2}}(0,T)}^{2}+\int_{0}^{ T}t^{-1}\left|\frac{dv}{dt}(t)\right|^{2}dt\right)^{\frac{1}{2}}\). We define the Riemann-Liouville fractional integral operator \(J^{\alpha}\) by \(J^{\alpha}v(t):=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}v(s)ds\) for \(v\in L^{2}(0,T)\). Then we can prove (e.g., Yamamoto [12]) that \(J^{\alpha}:L^{2}(0,T)\longrightarrow H_{\alpha}(0,T)\) is isomorphism for \(1<\alpha<2\) and see also Gorenflo, Luchko and Yamamoto [3], Kubica, Ryszewska and Yamamoto [8]. We define \(\partial_{t}^{\alpha}:=(J^{\alpha})^{-1}\) with the domain \(\mathcal{D}(\partial_{t}^{\alpha})=H_{\alpha}(0,T)\), and we can verify (e.g., [12]) that \(\partial_{t}^{\alpha}v=d_{t}^{\alpha}v\) if \(v,\frac{dv}{dt},\frac{d^{2}v}{dt^{2}}\in L^{1}(0,T)\) and \(v(0)=\frac{dv}{dt}(0)=0\). Thus we can interpret that \(\partial_{t}^{\alpha}\) is an extension of \(d_{t}^{\alpha}\), and moreover the formulation with \(\partial_{t}^{\alpha}u\) admits us to prove that for \(a\in H_{0}^{1}(\Omega)\) and \(b\in L^{2}(\Omega)\), there exists a unique solution \(u\in H^{1}(0,T;L^{2}(\Omega))\cap L^{\infty}(0,T;H_{0}^{1}(\Omega))\) to (1.1) such that \[u-a-bt\in H_{\alpha}(0,T;H^{-1}(\Omega)) \tag{1.3}\] (e.g., Huang and Yamamoto [4]). Here by \(H^{-1}(\Omega)\), we denote the dual of \(H_{0}^{1}(\Omega)\), identifying \(L^{2}(\Omega)\) with itself. 
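As a quick illustration of these definitions (our own sanity check, not part of the original argument), the Riemann-Liouville integral of a monomial can be computed in closed form, which also shows why the image of \(J^{\alpha}\) has vanishing traces at \(t=0\) when \(1<\alpha<2\):

```latex
% Sanity check (not from the article): J^alpha applied to a monomial, for beta > -1.
J^{\alpha}t^{\beta}
  = \frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{\beta}\,ds
  = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,t^{\alpha+\beta}.
% In particular J^{\alpha}1 = t^{\alpha}/\Gamma(\alpha+1), which vanishes at t=0
% together with its first derivative when \alpha > 1, consistent with
% J^{\alpha}: L^{2}(0,T) \to H_{\alpha}(0,T); and by the definition
% \partial_{t}^{\alpha} = (J^{\alpha})^{-1} we have
% \partial_{t}^{\alpha}\bigl(t^{\alpha}/\Gamma(\alpha+1)\bigr) = 1.
```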
Throughout this article, we consider solutions to (1.1) within this class. The main subject of this article is **Uniqueness**. _Let \(\omega\) be an arbitrarily chosen subdomain of \(\Omega\). Can we conclude that \(a=b=0\) in \(\Omega\) if the solution \(u\) to (1.1) satisfies \(u=0\) in \(\omega\times(0,T)\)?_ Although we do not discuss it here, the uniqueness is closely related to the approximate controllability through duality. The main purpose is to establish an affirmative answer: **Theorem 1**.: _Let \(a\in H_{0}^{1}(\Omega)\) and \(b\in L^{2}(\Omega)\). If a solution \(u\) to (1.1) satisfies \(u=0\) in \(\omega\times(0,T)\), then \(u=0\) in \(\Omega\times(0,T)\)._ We emphasize that we do not require any geometric condition on \(\omega\), which is essentially different from the case of the wave equation \(\alpha=2\). **Remark.** In this article, for the initial boundary value problem, we adopt the formulation (1.1). The classical formulation is based on the equation \(d_{t}^{\alpha}u(x,t)=-Au(x,t)\). However, especially for the non-symmetric case (1.2) with \(a\in H_{0}^{1}(\Omega)\) and \(b\in L^{2}(\Omega)\), it is not direct to verify that \(\partial_{t}^{2}u(x,\cdot)\in L^{1}(0,T)\) for fixed \(x\in\Omega\), which would justify the definition of \(d_{t}^{\alpha}u\) with \(1<\alpha<2\). Thus we adopt formulation (1.1). There are other possible approaches to the well-posedness; here we restrict ourselves to Zacher [13]. In the case of a symmetric operator, see Loreti and Sforza [10]. Furthermore we can prove **Lemma 1**.: _Let \(u(x,t)\) be the solution to (1.1) with \(a\in H^{1}_{0}(\Omega)\) and \(b\in L^{2}(\Omega)\). (i) The solution \(u(x,z)\) to (1.1) is holomorphically extended to \(Re\ z>0\) for arbitrarily fixed \(x\in\Omega\). (ii) There exists a constant \(C>0\) such that \(\|u(\cdot,t)\|_{L^{2}(\Omega)}\leq Ce^{Ct}(\|a\|_{H^{1}_{0}(\Omega)}+\|b\|_{L^{2}(\Omega)})\) for all \(a\in H^{1}_{0}(\Omega)\), \(b\in L^{2}(\Omega)\) and all \(t>0\)._ The proof is similar, for example, to Li, Imanuvilov and Yamamoto [9], and we omit the details of the proof. There are existing works on the uniqueness of solutions from extra data; here we refer only to the most closely related articles, without intending a comprehensive list of references. * **Case \(0<\alpha<1\)**. We can refer to many works; see, for example, Sakamoto and Yamamoto [11], Jiang, Li, Liu and Yamamoto [6], Jiang, Li, Pauron and Yamamoto [7]. * **Case \(1<\alpha<2\)**. We cannot find many works and refer to [6]. Moreover, except for [6] and [7], all of these works discuss a symmetric \(A\), that is, \(b_{j}=0\) for \(1\leq j\leq d\), and rely on the eigenfunction expansion, which requires the symmetry. The non-symmetric term \(\sum_{j=1}^{d}b_{j}\partial_{j}u\) represents an advection and is physically meaningful. For non-symmetric \(A\), we cannot utilize eigenfunction expansions. The work [6] treats the non-symmetric case for \(0<\alpha<1\) and transforms the problem to the corresponding parabolic equation through the Laplace transform, and the method in [1] can work for \(1<\alpha<2\), but then the problem is reduced to a wave equation, and for the uniqueness we have to assume some geometric condition on \(\omega\). Here, for the case \(1<\alpha<2\), we modify the argument of [7] and eliminate the geometric condition on \(\omega\) needed in [6]. 
We can prove the uniqueness with the extra data \(\partial_{\nu_{A}}u|_{\gamma\times(0,T)}\) for an arbitrarily chosen sub-boundary \(\gamma\), and \((u(\cdot,t),\mu_{k})_{L^{2}(\Omega)}\), \(1\leq k\leq m\), with weight functions \(\mu_{1},...,\mu_{m}\) satisfying suitable conditions. We can argue similarly when we replace the zero Dirichlet boundary condition by other boundary conditions in (1.1). This article is composed of three sections. In Section 2, we show some lemmata for the proof of Theorem 1. In Section 3, we complete the proof of Theorem 1. ## 2. Preliminaries ### Laplace transform of \(\partial^{\alpha}_{t}u\) We set \((Lw)(p):=\widehat{w}(p):=\int_{0}^{\infty}e^{-pt}w(t)dt\), provided that the integral converges for \(\mathop{\rm Re}\,p>p_{0}\), where \(p_{0}\) is some positive constant. Then **Lemma 2**.: _Let \(w(x,\cdot)-a(x)-b(x)t\in H_{\alpha}(0,T)\) for any \(T>0\) and almost all \(x\in\Omega\), and \(|w(x,\cdot)|e^{-pt}\in L^{1}(0,\infty)\) for \(p>p_{0}\) and almost all \(x\in\Omega\). Then \(L(\partial_{t}^{\alpha}(w-a-bt))(x,p)=p^{\alpha}(Lw)(x,p)-p^{\alpha-1}a(x)-p^{\alpha-2}b(x),\quad\mathop{\rm Re}\,p>p_{0},\,x\in\Omega\)._ The formula itself is well known in the form \(\widehat{d_{t}^{\alpha}u}(x,p)=p^{\alpha}\widehat{u}(x,p)-p^{\alpha-1}u(x,0)-p^{\alpha-2}\partial_{t}u(x,0)\) for \(\mathop{\rm Re}\,p>p_{0}\) and \(x\in\Omega\), but this requires the justification of the traces \(u(x,0)\) and \(\partial_{t}u(x,0)\) in a suitable sense, which is not straightforward. **Proof.** First we remark: let \(w\in H_{\alpha}(0,T)\) for any \(T>0\) and \(|w|e^{-pt}\in L^{1}(0,\infty)\) for \(p>p_{0}\). Then \[L(\partial_{t}^{\alpha}w)(p)=p^{\alpha}\widehat{w}(p)\quad\text{for }\mathop{\rm Re}\,p>p_{0}. \tag{2.1}\] The proof is found in Kubica, Ryszewska and Yamamoto [8] and Yamamoto [12]. Now we will complete the proof of Lemma 2. Henceforth, by \(\langle\cdot,\cdot\rangle\) we denote the duality pairing \(\langle u,\varphi\rangle=\,_{H^{-1}(\Omega)}\langle u,\varphi\rangle_{H_{0}^{1}(\Omega)}\) for \(u\in H^{-1}(\Omega)\) and \(\varphi\in H_{0}^{1}(\Omega)\). We arbitrarily choose \(\psi\in C_{0}^{\infty}(\Omega)\). Setting \(u_{\psi}(t):=\langle u(\cdot,t)-a-bt,\ \psi\rangle\), by (1.3) and Lemma 1 (ii), since \(\partial_{t}^{\alpha}u_{\psi}(t)=\langle\partial_{t}^{\alpha}(u(\cdot,t)-a-bt),\,\psi\rangle\) for \(t>0\), we see that \(u_{\psi}\in H_{\alpha}(0,T)\) for any \(T>0\) and we can find a constant \(C>0\) such that \(|u_{\psi}(t)|\leq Ce^{Ct}\) for all \(t>0\). Therefore we can apply (2.1), so that \[L(\langle\partial_{t}^{\alpha}(u-a-bt),\,\psi\rangle)(p)= L(\partial_{t}^{\alpha}u_{\psi})(p)=p^{\alpha}L(u_{\psi})(p)\] \[= p^{\alpha}L(\langle u(\cdot,t),\psi\rangle-\langle a,\psi\rangle-\langle bt,\psi\rangle)(p)=p^{\alpha}(L(\langle u(\cdot,t),\psi\rangle)(p)-L(\langle a,\psi\rangle)(p)-L(t\langle b,\psi\rangle)(p))\] \[= p^{\alpha}\langle Lu(\cdot,p)-p^{-1}a-p^{-2}b,\,\psi\rangle\quad\text{for }\mathop{\rm Re}\,p>p_{0}.\] Therefore \(\langle L(\partial_{t}^{\alpha}(u-a-bt)),\,\psi\rangle(p)=\langle p^{\alpha}Lu(\cdot,p)-p^{\alpha-1}a-p^{\alpha-2}b,\,\psi\rangle\) for \(\mathop{\rm Re}\,p>p_{0}\) and all \(\psi\in C_{0}^{\infty}(\Omega)\). Thus the proof of Lemma 2 is complete. \(\blacksquare\) **2.2. Spectral properties of \(A\)** By equipping the operator \(A\) in (1.2) with the domain \(\mathcal{D}(A)=H^{2}(\Omega)\cap H_{0}^{1}(\Omega)\), we define an operator in \(L^{2}(\Omega)\), which is denoted by the same notation \(A\). 
Then it is known (e.g., Agmon [2]) that \(\sigma(A)\), the spectrum of \(A\) is composed entirely of eigenvalues: \(\sigma(A):=\{\lambda_{n}\}_{n\in\mathbb{N}}\subset\mathbb{C}\). Taking sufficiently small circle \(\gamma_{n}\) which is centered at \(\lambda_{n}\) and does not include \(\gamma_{m}\) for \(m\neq n\), we can define \[P_{n}a=\frac{1}{2\pi\sqrt{-1}}\int_{\gamma_{n}}(z-A)^{-1}adz,D_{n}a=\frac{1}{ 2\pi\sqrt{-1}}\int_{\gamma_{n}}(z-\lambda_{n})(z-A)^{-1}adz\] for \(a\in L^{2}(\Omega)\) (e.g., Kato [5]). Then by [2] and [5], we see that \(P_{n},D_{n}\) are bounded linear operators on \(P_{n}L^{2}(\Omega)\) to itself, and \(d_{n}:=\dim P_{n}L^{2}(\Omega)<\infty\), and \[P_{n}^{2}=P_{n},\quad D_{n}=(A-\lambda_{n})P_{n},\quad D_{n}P_{n}=P_{n}D_{n}, \quad D_{n}^{d_{n}}P_{n}=0. \tag{2.2}\] Then we can show **Lemma 3**.: _We assume that \(D_{n}^{k_{0}}P_{n}\varphi=0\) with some \(k_{0}\in\mathbb{N}\). Then \((A-\lambda_{n})D_{n}^{k_{0}-1}P_{n}\varphi=0\)._ **Proof.** By (2.2), we have \(P_{n}D_{n}^{k_{0}-1}P_{n}\varphi=D_{n}^{k_{0}-1}P_{n}^{2}\varphi=D_{n}^{k_{0}-1}P_ {n}\varphi\). Then, using (2.2) again, we obtain \((A-\lambda_{n})D_{n}^{k_{0}-1}P_{n}\varphi=((A-\lambda_{n})P_{n})D_{n}^{k_{0}- 1}P_{n}\varphi=D_{n}D_{n}^{k_{0}-1}P_{n}\varphi=D_{n}^{k_{0}}P_{n}\varphi=0\). \(\blacksquare\) ## 3. Proof of Theorem 1 We write \(\widehat{u}(p)=\widehat{u}(\cdot,p)\), etc. **First Step.** By (1.1), applying Lemma 2, we have \(p^{\alpha}\widehat{u}(p)-p^{\alpha-1}a-p^{\alpha-2}b=-A\widehat{u}(p)\), that is, \[\widehat{u}(p)=(p^{\alpha}+A)^{-1}(p^{\alpha-1}a+p^{\alpha-2}b)\quad\text{for }\text{Re }p>p_{0}.\] Lemma 1 (i) yields \(\widehat{u}(p)=0\quad\text{in }\omega\text{ for }\text{Re }p>p_{0}\), that is, \(p^{\alpha-1}(p^{\alpha}+A)^{-1}a=-p^{\alpha-2}(p^{\alpha}+A)^{-1}b\) in \(\omega\) for \(\text{Re }p>p_{0}\). Setting \(\eta:=p^{\alpha}\), we have \(\eta^{\frac{1}{\alpha}}(A+\eta)^{-1}a=-(A+\eta)^{-1}b\) in \(\omega\) for \(\text{Re }\eta>\eta_{0}\), where \(\eta_{0}\) is some positive constant. We choose \(\psi\in C_{0}^{\infty}(\Omega)\) arbitrarily. Then \[(-\eta)^{\frac{1}{\alpha}}((A-\eta)^{-1}a,\,\psi)_{L^{2}(\omega)}=-((A-\eta)^ {-1}b,\,\psi)_{L^{2}(\omega)} \tag{3.1}\] for \(\text{Re }\eta<\eta_{0}\). We set \(f_{\eta}(\eta):=((A-\eta)^{-1}a,\psi)_{L^{2}(\omega)}\) and \(g_{\eta}(\eta):=((A-\eta)^{-1}b,\psi)_{L^{2}(\omega)}\) for \(\eta\in\rho(A):=\mathbb{C}\setminus\sigma(A)\). We will prove \[f_{\psi}(\eta)=g_{\psi}(\eta)=0\quad\text{for all }\eta\in\rho(A). \tag{3.2}\] We note that \(f_{\psi}(\eta)\) and \(g_{\psi}(\eta)\) are holomorphic in \(\eta\in\rho(A)\). First let the zero set \(Z_{\psi}:=\{\eta\in\rho(A);\,f_{\psi}(\eta)=0\}\) has an accumulation point. Then the holomorphicity yields (3.2) in this case. Second let \(Z_{\psi}\) have no accumulation points. Then, by (3.1), noting that \(\rho(A)\subset\{\text{Re}\,z<q_{0}\}\) with some constant \(q_{0}\), we see that \((-\eta)^{\frac{1}{\alpha}}=-\frac{g_{\psi}(\eta)}{f_{\psi}(\eta)}\) if \(\text{Re }\eta<\eta_{0}\) and \(\eta\not\in Z_{\psi}\). 
Since \(\sigma(A)\subset\{\text{Re}\,z>q_{0}\}\), \(\sigma(A)\) is a countably infinite and discrete set in \(\mathbb{C}\), and \(-\frac{g_{\psi}(\eta)}{f_{\psi}(\eta)}\) is holomorphic in \(\eta\in\rho(A)\setminus Z_{\psi}\), we see that \(-\frac{g_{\psi}(\eta)}{f_{\psi}(\eta)}\) can be holomorphically continued to some open neighborhood of \(\{\eta>0;\,\eta\in\rho(A)\}\), which means that \((-\eta)^{\frac{1}{\alpha}}\) is holomorphically continued to some open neighborhood of \(\{\eta>0;\,\eta\in\rho(A)\}\), and in particular, the function \((-\eta)^{\frac{1}{\alpha}}\) has the same limit as \(\eta\) approaches a positive number from the upper and from the lower complex half-plane. Since \(0<\frac{1}{\alpha}<1\) by \(1<\alpha<2\), this is impossible. Therefore \(Z_{\psi}\) has an accumulation point, so that (3.2) is seen also in this case. Thus we complete the proof of (3.2). \(\blacksquare\) Hence \(((A-z)^{-1}a,\,\psi)_{L^{2}(\omega)}=((A-z)^{-1}b,\,\psi)_{L^{2}(\omega)}=0\) for \(z\in\rho(A)\). Since \(\psi\in C_{0}^{\infty}(\omega)\) is arbitrary, we reach \((A-z)^{-1}a=(A-z)^{-1}b=0\) in \(\omega\) for all \(z\in\rho(A)\). **Second Step.** We will prove: If \(a\in L^{2}(\Omega)\) and \((A-z)^{-1}a=0\) in \(\omega\) for all \(z\in\rho(A)\), then \(a=0\) in \(\Omega\). Indeed, we directly see \[D_{n}^{\ell}P_{n}a=\frac{1}{2\pi\sqrt{-1}}\int_{\gamma_{n}}(z-\lambda_{n})^{\ell}(z-A)^{-1}adz=0\quad\text{in $\omega$ for $\ell\in\mathbb{N}$}. \tag{3.3}\] Since \(D_{n}^{d_{n}}P_{n}a=0\) by (2.2), we have \((A-\lambda_{n})(D_{n}^{d_{n}-1}P_{n}a)=0\) in \(\Omega\) by Lemma 3. Moreover \(D_{n}^{d_{n}-1}P_{n}a=0\) in \(\omega\) by (3.3). The unique continuation for the elliptic operator \(A-\lambda_{n}\) yields \(D_{n}^{d_{n}-1}P_{n}a=0\) in \(\Omega\). Next we see that \((A-\lambda_{n})(D_{n}^{d_{n}-2}P_{n}a)=D_{n}^{d_{n}-1}P_{n}a=0\) in \(\Omega\). By (3.3) with \(\ell=d_{n}-2\), we have \(D_{n}^{d_{n}-2}P_{n}a=0\) in \(\omega\). Therefore, we apply the unique continuation to obtain \(D_{n}^{d_{n}-2}P_{n}a=0\) in \(\Omega\). Continuing this argument, we reach \(P_{n}a=0\) in \(\Omega\) for each \(n\in\mathbb{N}\). Since \(\text{Span}\Big(\bigcup_{n=1}^{\infty}P_{n}L^{2}(\Omega)\Big)\) is dense in \(L^{2}(\Omega)\) (e.g., Agmon [2]), we see that \(a=0\) in \(\Omega\). Similarly we can prove \(b=0\). Thus the proof of Theorem 1 is complete. \(\blacksquare\) **Acknowledgements.** This article was completed during the stay of the third author at Sapienza University of Rome in January-February 2023. The work was supported by Grant-in-Aid for Scientific Research (A) 20H00117 of Japan Society for the Promotion of Science.
2305.08502
MeeQA: Natural Questions in Meeting Transcripts
We present MeeQA, a dataset for natural-language question answering over meeting transcripts. It includes real questions asked during meetings by its participants. The dataset contains 48K question-answer pairs, extracted from 422 meeting transcripts, spanning multiple domains. Questions in transcripts pose a special challenge as they are not always clear, and considerable context may be required in order to provide an answer. Further, many questions asked during meetings are left unanswered. To improve baseline model performance on this type of questions, we also propose a novel loss function, \emph{Flat Hierarchical Loss}, designed to enhance performance over questions with no answer in the text. Our experiments demonstrate the advantage of using our approach over standard QA models.
Reut Apel, Tom Braude, Amir Kantor, Eyal Kolman
2023-05-15T10:02:47Z
http://arxiv.org/abs/2305.08502v1
# MeeQA: Natural Questions in Meeting Transcripts ###### Abstract We present MeeQA, a dataset for natural-language question answering over meeting transcripts. It includes real questions asked during meetings by its participants. The dataset contains 48K question-answer pairs, extracted from 422 meeting transcripts, spanning multiple domains. Questions in transcripts pose a special challenge as they are not always clear, and considerable context may be required in order to provide an answer. Further, many questions asked during meetings are left unanswered. To improve baseline model performance on this type of questions, we also propose a novel loss function, _Flat Hierarchical Loss_, designed to enhance performance over questions with no answer in the text. Our experiments demonstrate the advantage of using our approach over standard QA models.1 Footnote 1: Our code and data are available at: [https://github.com/reutapel/MeeQA](https://github.com/reutapel/MeeQA). ## 1 Introduction More than a million meetings are held every day in the USA, and employees spend six hours a week attending them on average (Mroz et al., 2018; Zhong et al., 2021). Moreover, due to COVID-19 social distancing constraints, remote work has become standard and virtual meetings are now as common as face-to-face (Spataro, 2020). Due to this situation, we are bombarded with information, making it more difficult to sort through and distill what is important. In this regard, the rapid improvement of _Automatic Speech Recognition_ (ASR) can help. Modern ASR incorporates punctuation in the text, such as comma, full stops, and importantly, question marks. The ability to automatically detect questions, coupled with a model capable of extracting the answers to them could help millions of users in alleviating the mental strain and easily distill valuable information from these transcripts. _Question-answering_ (QA) is a task where a machine reads a text passage and then answers questions regarding it. In _conversational question-answer_ (CQA), the questions appear in a dialog, and in _spoken question-answer_ (SQA), the content, question or both are in spoken form. To advance the capabilities of machines in these tasks, several public datasets have been proposed (see Sec. 2). Still, to the best of our knowledge, there are no datasets based on real and natural questions from meeting transcripts. In this paper, we present a new task, _Meeting Transcripts Question Answering_ (MTQA), where a model is required to detect answers to natural questions asked during a meeting, given the subsequent utterances of the conversation. For enabling research on the MTQA task, we present MeeQA, a large-scale dataset for Meetings Question Answering. MeeQA contains 48K question-answer pairs from 422 meeting transcripts discussing diverse Figure 1: An example of part of a meeting transcript from MeeQA. Five questions were asked, two (green text) are unanswered, the others (red text) are answered within the conversation (blue text). domains. Each transcript is a set of utterances that were said by multiple speakers. In Fig. 1 we show an example of part of a meeting transcript from our dataset with four participants and five questions. As our dataset contains text from meeting transcripts with multiple speakers, it has some distinctive characteristics that make it challenging and interesting. First, the text in MeeQA is spoken language. 
As such, it contains unique language phenomena such as word repetitions, mumblings, use of informal content, as well as informal sentence structure. For instance, in Fig. 1 the question in utterance #5 has an informal structure; utterance #6 contains the word "oh", and utterance #12 contains word repetition, all of which are typical in spoken language but rare in written text. Further analysis of these phenomena is provided in Sec. 4.1. Second, MeeQA contains meetings from various domains, including board and science groups meetings, each with multiple speakers. Consequently, MeeQA contains diverse questions expressed in a variety of styles. Finally, an essential characteristic of our dataset is that its questions are real and were asked during meetings. As a result, the questions are "natural" in the sense that they represent an actual need for information, as opposed to most relevant datasets, where the questions are artificially asked as part of the data collection process. During real meetings, questions are often left unanswered. Models proposed by previous works to solve QA tasks generally focused on answerable questions and often failed in cases where the answer was not available. Several studies suggested using verification mechanisms during the evaluation process Devlin et al. (2019); others proposed models that jointly learn the answer and question answerability in multi-task learning setup Hu et al. (2019), and ensemble models were also proposed for this purpose Zhang et al. (2021). In this paper, we propose a single model to directly handle unanswered questions, based on a pre-trained language model and a novel loss function, named _flat-hierarchical loss_ (FHL). FHL, described in Sec. 5, ties two simple losses, the first for answer span and the other for question answerability prediction. This is accomplished, by leveraging the answerability prediction and ground truth to weight the answer span loss. Based on our results (see Sec. 6), FHL yields models with improved performance on unanswerable questions and comparable performance when the answer is available, which overall provide superior results over the entire dataset. Still, human performance on the unanswerable questions is much better than our best model, highlighting the need for further improvement. ## 2 Related Work ### Question Answering Datasets Question answering is an active research area for which many kinds of tasks have been proposed. In _reading comprehension_ (RC), for example, a model is required to prove its understanding of a given document by answering one or more related questions. In contrast, in _open-domain question answering_ (QA), a model is required to provide answers to questions given one extensive collection of documents (_e.g._, Wikipedia). Many different datasets have been curated to advance our understanding of such tasks. RC datasets such as SQuAD Rajpurkar et al. (2016) and NarrativeQA Kocisky et al. (2018) consist of questions, relevant documents, and answers which were highlighted in the document. In these \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & **Meeting** & **Spoken** & **Multi** & **Multi** & **Natural** & **Answer** \\ & **Transcript** & **Language** & **Turn** & **Party** & **Questions** & **Type** \\ \hline QAST Turmo et al. (2009) & \(\vartriangle\) & \(\vartriangle\) & \(\vartriangle\) & \(\vartriangle\) & ✗ & E \\ SQuAD 2.0 Rajpurkar et al. (2018) & ✗ & ✗ & ✗ & ✗ & ✗ & E, U \\ QuAC Choi et al. 
(2018) & ✗ & ✗ & \(\vartriangle\) & ✗ & ✗ & E, U \\ Spoken SQuAD Li et al. (2018) & ✗ & \(\vartriangle\) & ✗ & ✗ & ✗ & E \\ NQ Kwiatkowski et al. (2019) & ✗ & ✗ & ✗ & ✗ & ✗ & E, U, MS \\ CoQA Reddy et al. (2019) & ✗ & ✗ & ✗ & ✗ & ✗ & A*,U \\ TopiOCQA Adlakha et al. (2021) & ✗ & ✗ & ✗ & ✗ & ✗ & A, U \\ \hline MeeQA (Ours) & ✗ & ✗ & ✗ & ✗ & ✗ & E, U, MS \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of MeeQA with other QA datasets. MeeQA contains natural questions and answers from meeting transcript with multiple speakers. Answer types are: Abstractive (A), Abstractive with extractive rationale (A*), Extractive (E), Unanswerable (U), Multi-Span (MS). \(\vartriangle\) represents that only a proportion of the dataset satisfies the property. datasets, annotators write questions after reading a short text containing the answer. For the Natural Questions Kwiatkowski et al. (2019) (NQ) and the MS Marco Nguyen et al. (2016) datasets, the authors collected real queries and responses from Google and Bing respectively with the goal of creating a dataset which is more human-like in nature. Another kind of task relates to the challenge of question answering in a conversational setup commonly named _conversational question answering (CQA)_. In CQA, a model is required to provide answers in a multi-turn QA setup given a context. Datasets for this task CoQA Reddy et al. (2019) and QuAC Choi et al. (2018)) contain conversations between a _questioner_ and an _answerer_ regarding a given external passage. The questioner asks questions about the passage, and the answerer provides the answer. Recent research focuses on another variant of the question answering family of tasks, namely, _open-domain CQA_, in which the parties are free to discuss various topics during a conversation, drawn from an extensive set of documents. QReCC Anantha et al. (2021) is a large-scale open-domain CQA and question rewriting dataset, which incorporates an information retrieval subtask. Top-iOCQA Adlakha et al. (2021) is another open-domain CQA dataset, consisting of questions and free-form text answers, with topic switches, based on the Wikipedia corpus. Previous researches also applied QA methods to speech transcripts. In _question answering on speech transcripts_ (QAST) tasks Turmo et al. (2007, 2008, 2009) passages were taken from transcripts. Then, English native speakers created factual questions about named entities and definitional questions, and the answers were extracted from the transcripts. Spoken SquaAD Li et al. (2018) is a SQA dataset, in which the documents are the spoken versions of the articles in SQuAD, the questions are original SQuAD questions, and the answer to each question is a span in the document. Mirkin et al. (2018) presented a _machine listening comprehension_ (MLC) dataset consisting of recorded spontaneous arguments and formulated questions to confirm or reject these arguments. In this work, we propose a new task, which we call _Meeting Transcripts Question Answering_ (MTQA) (see Sec. 1), and provide a dataset for it. In this task, the question, the context, and the answers are part of a meeting transcript. Similar to CQA datasets, our setup is multi-party, and similar to QA over speech transcripts, our text is from spoken transcripts. However, our questions and answers are in spoken language and were asked during real meetings by its participants. This combination makes differentiates MeQA from the aforementioned datasets. 
Furthermore, the questions are asked by different participants and are not only factual, leading to a much more diverse set of questions. To the best of our knowledge, this is the first dataset consisting of natural questions and answers from multi-speaker meetings that were asked in spoken language and converted to text (see Tab. 1). This aspect of our dataset leads to much longer questions and answers, as well as multi-span answers, which are not common in prior work. In conclusion, the special characteristics described above differentiate our dataset from other datasets, thus increasing the challenge posed by previous QA tasks. ### Question Answering Models Most datasets for RC and QA, as well as ours, are characterized by an extractive setup; _i.e._, detecting the relevant answer within the document. Previous studies introduced various models to solve these tasks, aiming at extracting single-span answers by jointly learning the start and end tokens of the answer span Seo et al. (2017); Yu et al. (2018); Devlin et al. (2019); Yang et al. (2019). The majority of QA research focuses on answering questions but neglects unanswerable ones. Devlin et al. (2019) treats questions without answers as having an answer span that starts and ends at the commonly used [CLS] token. Zhang et al. (2021) proposed a retrospective reader that consists of a sketchy reader predicting question _answerability_, followed by an intensive reader that extracts answers and combines them with the sketchy prediction to yield a final answer. In contrast, we propose a method to directly handle unanswerable questions through a unique loss function described in Sec. 5.1. ## 3 MeeQA Dataset Our dataset is focused on natural questions and answers that were extracted from professional meetings. Each instance in our dataset contains a question asked during a meeting, three utterances before the question, and 60 utterances after it. The answers are spans within the utterances that follow the question. Data collection was performed in the two stages described below: aggregating meeting transcripts from multiple sources, and question extraction and annotation. ### Data Collection MeeQA is composed of different meeting types, such as product design, financial, and board meetings. To achieve this, we aggregated meeting transcripts from multiple sources: **Augmented Multi-party Interaction (AMI) meeting corpus** Carletta et al. (2006) - a dataset of 100 hours from 159 recorded meetings about product design in an industrial setting. Some of the meetings are naturally occurring, and some are elicited scenarios in which a design team takes a design project from kick-off to completion over a day. It contains the manually annotated meeting transcripts and their meeting summaries. **ICSI meeting corpus** Janin et al. (2003) - an academic meeting dataset that consists of 75 weekly group meetings at the International Computer Science Institute (ICSI) in Berkeley. **Board Meetings** - another interesting domain of meetings, which contain formal discussions about a wide range of topics and are publicly available. 
We include board meeting transcripts of the Legal Services Corporation (LSC)2 as well as other transcribed board meetings that were extracted from available public resources.3 Footnote 2: Legal Services Corporation: America’s Partner for Equal Justice Footnote 3: We use meetings from the California Citizens Redistricting Commission (CCRC) corpus ([https://wedrawthelines.ca.gov/transcripts/](https://wedrawthelines.ca.gov/transcripts/)), Federal Open Market Committee (FOMC) corpus ([https://www.federalreserve.gov/monetarypolicy/fcom_historical_year.htm](https://www.federalreserve.gov/monetarypolicy/fcom_historical_year.htm)), and Professional Learning Community (PLC) corpus. ### Annotation Pipeline Questions are common in meeting transcripts. Specifically, about 10% of all the utterances considered for MeeQA include at least one sentence ending in a question mark, and only such sentences were considered question candidates. The collected corpus was divided into tasks, where each task includes 20 question candidates from the same meeting. Each candidate was initially presented to the judges with ten utterances before and ten utterances after it. If needed, judges were also given the option to expand the number of displayed utterances. Examples of the annotation interface are provided in the Appendix. Each task was annotated by three different judges following a three-step methodology: **Question Verification** - judges are asked if the question candidate requires an answer. If the judges respond positively, they proceed to the next step; otherwise, they proceed to the next candidate. **Question Type Labeling** - The goal here is to enrich our dataset with information about the question. In this step, the judge is asked four questions regarding the question itself. We ask the judges whether the question is comprehensible, whether it is self-contained, whether it is a yes/no question, and whether it remains relevant and informative after the meeting. **Answer Marking** - judges are asked to mark the answer in the transcript. Importantly, though we did not limit the judges to highlighting only within the utterances after the question, we did exclude answers located in the utterances before the question from our final dataset. The judges were also not limited to highlighting complete words; therefore, we completed partially highlighted words as part of our cleaning process. ## 4 Dataset Statistics and Analysis We present an analysis of MeeQA in Tab. 2; it is composed of 19,142 unique questions, labeled by 3 judges each, from 422 different meetings discussing various topics. Thus, the total number of question-answer pairs is 48,009. The dataset is partitioned randomly into a training, validation and test set, by an approximate 70/15/15 split (approximated as we constrain questions from the same meeting to belong to the same split to avoid data leakage). In this section we describe MeeQA's properties and demonstrate the unique challenge it introduces for QA tasks. Figure 2: Datasets comparison in four dimensions: the average question length (vertical axis), the percentage of non-Wh-questions (horizontal axis), the percentage of present tense questions out of all present and past tense questions (size), and the percentage of second-person questions out of all the second and third-person questions (color). ### Spoken Language As MeeQA is based on spoken language, it presents unique characteristics not found in most datasets. 
We conduct an analysis of the prevalence of different phenomena based on sampling two question-answer pairs from each meeting in the development set, followed by manual annotation. _Informal Content_ (_e.g_., use of "yeah" or "gonna") is the most frequent phenomenon, which constitutes 29.2% of the question-answer pairs. Other aspects of spoken language include _Filler Words_ (_e.g_., "hmm" or "huh"), found in 25% of the sampled sentences, and _Word Repetition_ (_e.g_., "what is what is the cost?"), present in 14.2%. Moreover, some sentences are not formal due to the lack of punctuation or incorrect sentence structure. Our analysis shows that 32.5% of the question-answer pairs contain at least one informal sentence, classified as _Informal Structure_. In total, 59.2% of the pairs include at least one of the aforementioned phenomena, where each example can be annotated with more than one phenomenon. ### Question Properties The utterances that are auto-transcribed during meetings are not always comprehensible. Specifically, 7% of the questions in MeeQA were labeled as non-comprehensible by our judges. These questions might cause difficulties for answer detection for both humans and machines. Interestingly, 37% of the questions are not self-contained, _i.e_., they do not contain all the required information to understand the question and answer it. This characteristic is also common in CQA datasets such as CoQA Reddy et al. (2019), as there are dependencies between the question and the previous utterances. Further, 56% of the questions were tagged as still relevant after the meeting, and 48% of them were tagged as yes/no questions. Apart from the differences between MeeQA and other datasets mentioned in Sec. 2, the questions in MeeQA have several unique properties. Specifically, we focus on four characteristics and compare MeeQA to four large-scale QA and RC datasets.4 Fig. 2 illustrates this comparison in four dimensions. The vertical axis represents the average question length, and the horizontal axis represents the percentage of non-_Wh_-questions. While other datasets contain more than half _Wh_-questions with an average length of at most 11 tokens, almost 70% of the questions in our dataset are not _Wh_-questions, with an average length of more than 12 tokens. The other two characteristics are represented by the size and color of each data point. Specifically, the size represents the percentage of present tense questions out of all present and past tense questions. The color represents the percentage of second-person questions out of all the second and third-person questions. To calculate these two measures, we use the Stanford NLP POS tagger Qi et al. (2018) to tag each token with its person and tense, and finally tag each question based on the majority of the person and tense over its words. Fig. 2 shows that compared to other datasets, MeeQA contains significantly more present tense and second person questions, which fits the nature of natural and non-factual questions. For example, "do you" is a common first bigram in MeeQA questions. Footnote 4: publicly available at [https://huggingface.co/datasets](https://huggingface.co/datasets) ### Answer Properties We analyze answers in the development set, which contains 6,416 questions with answers, comprising over 70% of the development set. Transcripts are partitioned into utterances; answers in MeeQA span 2.13 such utterances on average, and 60% of them are contained in a single one. 
Furthermore, 69% of the answers include, but are not limited to, the first utterance after the question, which is expected in the context of meetings and differentiates MeeQA from other QA and RC datasets. Nevertheless, the agreement between judges is partial. We calculate Krippendorff's reliability measure, \(\alpha\) Hayes and Krippendorff (2007) to formally measure judge agreement. When considering annotations at the level of individual words in each transcript, the overall score for all examples and all annotators is \(\alpha=0.555\). Low agreement score is expected as transcripts are inherently noisy thus judgement is challenging even for humans. Additionally, answers on MeeQA are significantly longer with an average of 31.1 tokens compared to 3.2 in SQuAD Rajpurkar et al. (2016), and 2.7 in CoQA Reddy et al. (2019). These \begin{table} \begin{tabular}{l l l l|l} \hline \hline & **Train** & **Dev** & **Test** & **Overall** \\ \hline \# Meetings & 293 & 61 & 68 & 422 \\ \# Unique Questions & 13,426 & 2,892 & 2,824 & 19,142 \\ \# Q/A Pairs & 33,690 & 7,188 & 7,131 & 48,009 \\ \% Unanswerable & 23.0 & 25.4 & 24.7 & 23.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Data statistics. Number of meetings, number of unique candidate questions, and number of questions that were tagged with and without answers. differences are not surprising as most of the answers in these datasets are either entities, numbers, or Yes/No answers (for CoQA). Interestingly, our average answer length is over twice that of QuAC [1] (31.1 _vs_. 15). Compared to MeeQA, which derives answers from natural speech of meeting participants, answers in QuAC are derived from specific passages, and therefore can be less detailed. ## 5 Method As discussed in Sec. 2.2, many models that solve QA tasks focus on answerable questions and might falter in their predictions for the unanswerable ones. As questions asked during meetings are many times left unanswered, a desired capability of a model in our setup is detecting when an answer is not available. To that end, we propose a novel loss function that combines two tasks, span prediction and detecting if an answer is available. In the following section we formally introduce the _MTQA_ task and define our _flat-hierarchical loss_. ### Task Definition Let \(U=\{u_{m}|1\leq m\leq M\}\) be a set of \(M\) ordered utterances \(u_{m}\) which belong to a single meeting. Note that an utterance can contain multiple sentences. Let \(Q\) be a sentence in utterance \(u_{q}\), which ends with a question mark, hereby \(q\) is the index of the utterance to which question \(Q\) belongs. Additionally, let \(u_{q_{-}}\) denote the prefix of \(u_{q}\) ending at the start of question \(Q\) and \(u_{q_{+}}\) the suffix of \(u_{q}\) after removing \(u_{q-}\) and \(Q\). Thus, \(u_{q}=u_{q_{-}}Qu_{q_{+}}\), _i.e._, the concatenation of the aforementioned strings. Given a question \(Q\), \(k\) utterances before the question \(U_{B}=(u_{q-k},\ldots,u_{q-1},u_{q-})\), and \(l\) utterances after it \(U_{A}=(u_{q_{+}},u_{q+1},\ldots,u_{q+l})\), two goals are required. First, predict whether an answer exists for \(Q\), hereby \(y_{HA}\in\{0,1\}\) denotes the label for answerability (_i.e._, Has Answer). Second, find two indices \(y_{S}\) and \(y_{E}\) corresponding to Start and End tokens of a span in the concatenation of \(U_{A}\) and \(y_{S}\leq y_{E}\). Base ModelFollowing the practice introduced by BERT [1], we concatenate \(Q\), \(U_{A}\), and \(U_{B}\) into a single input sequence \(X\). 
We first define two strings, \(S_{B}\) - text up-to and including \(Q\), _i.e._, \(u_{q-k}\ldots u_{q-1}u_{q_{-}}Q\); and \(S_{A}\) - text proceeding \(Q\), _i.e._, \(u_{q_{+}}u_{q+1}\ldots u_{q+l}\). The final input \(X\) is obtained as follows: [CLS]\(S_{B}\)[SEP]\(S_{A}\)[SEP]. In essence, we use the standard [CLS] and [SEP] tokens, and split the meeting utterances up-to and after the question \(Q\). We append a special token "&" denoting the start of the utterance, followed by a speaker token to each utterance \(u_{t}\) and then tokenize the resulting string. To normalize across the different data sources, we convert the speaker roles and names to "SPEAKER_Z," where \(Z\) is an index of the speaker, and the questioner is always SPEAKER_0. We first embed the sequence \(X\) via a pre-trained transformer encoder (_e.g._ BERT). The resulting representation is then fed through three independent linear classifiers. The first, \(f_{S}\), outputs the probability of each word in the sequence to be the start of the answer span. Similarly, \(f_{E}\) predicts the end of the answer span. Finally, \(f_{HA}\) uses the first special token, [CLS], to predict the question _answerability_, _i.e._, is the question answerable given the input \(X\). Formally, we denote the transformer encoder output for each token in the sequence by \(T_{1}\),..., \(T_{n}\in\mathbb{R}^{d}\), and for the token [CLS] by \(C\in\mathbb{R}^{d}\). Given \(T=(T_{1},...,T_{n})\) and \(C\) we get: \[\hat{y}_{S} \propto\text{SoftMax}(W_{S}\cdot T) \tag{1}\] \[\hat{y}_{E} \propto\text{SoftMax}(W_{E}\cdot T)\] (2) \[\hat{y}_{HA} \propto\text{SoftMax}(W_{HA}\cdot C) \tag{3}\] where \(W_{S}\), \(W_{E}\), \(W_{HA}\in\mathbb{R}^{d}\) are trainable weights, \(\hat{y}_{S}\) and \(\hat{y}_{E}\) are the model prediction for the start and end tokens of the answer span, and \(\hat{y}_{HA}\) is the model prediction for the question answerability. The dimension of all the \(\hat{y}\)'s is two. Flat-Hierarchical Loss FunctionWhen people are asked to highlight an answer to a question in a given passage, they first read the question and passage and decide if an answer is found in the text Figure 3: Computation of Flat-Hierarchical loss. (Guthrie and Mosenthal, 1987; Zheng et al., 2019). Only if they deem that it is, they proceed to marking the relevant span. Mimicking this hierarchical logic, we propose a novel loss function named _flat-hierarchical loss_ (FHL), illustrated in Fig. 3. To calculate FHL we first compute two standard losses. The first, \(\mathcal{L}_{SE}\), for start-end prediction computed as the average of two cross-entropy losses of the start and end span classifiers. The second, \(\mathcal{L}_{HA}\), for Has-Answer prediction, by the cross-entropy loss for the answerability classifier. Once the two terms are computed, we compute the final loss, \(\mathcal{L}\), as follows: \[\begin{split}\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\biggl{(}& \alpha\cdot\mathcal{L}_{HA}(y_{HA},\hat{y}_{HA})+\\ \beta\cdot\hat{y}_{HA}\cdot\mathcal{L}_{SE}(y_{S},\hat{y}_{S},y_ {E},\hat{y}_{E})+\\ \gamma\cdot y_{HA}\cdot\mathcal{L}_{SE}(y_{S},\hat{y}_{S},y_{E}, \hat{y}_{E})\biggr{)}\end{split} \tag{4}\] where \(\alpha\), \(\beta\), and \(\gamma\) are weights that calibrate the importance of each component in the function, and are selected to maximize F1 over the development set. 
\(N\) is the number of examples, \(\hat{y}_{HA}\) and \(y_{HA}\) are the model answerability prediction, and ground truth labels respectively, and \(\hat{y}_{S}\), \(y_{S}\), \(\hat{y}_{E}\) and \(y_{E}\) are model predictions and labels for start and end indices. Intuitively, if the model calculates a low probability of answerability and if the question is indeed unanswerable, the weights of \(\mathcal{L}_{SE}\) would be diminished. That is, if there is no answer, the prediction of the Start and End heads is less important. Using FHL, the model trains to predict the span answer conditioned on probability it assigns for the existence of an answer. Moreover, this formulation disentangles the role of the [CLS] token from span prediction. Threshold-based Answer DecisionOur model provides two independent predictions: whether the question has an answer and answer span candidates. To produce the final prediction, we propose a threshold-based answer decision method that combines these two predictions, inspired by previous studies Devlin et al. (2019); Zhang et al. (2021). In particular, we define the score of each candidate span from index \(i\) to index \(j\) as follows: \(\mathit{score}_{i,j}=W_{S}\cdot T_{i}+W_{E}\cdot T_{j}\). We then compute the probability of each candidate that is no longer than \(m\) tokens being the answer span by a softmax over all the candidates, and \(P_{best}=\max_{j\geq i}\frac{e^{score_{i,j}}}{\sum e^{score_{m,n}}}\) is the probability of the final answer span. Finally, we predict that there is no answer when \(\hat{y}_{HA}\leq\tau_{1}\wedge P_{best}\leq\tau_{2}\), where the thresholds \(\tau_{1}\) and \(\tau_{2}\), and the number of tokens \(m\) are selected to maximize F1 over the development set. ## 6 Experiments ### Implementation Details **Data Representation** For all experiments in this section we merge consecutive utterances of the same speaker to a single utterance. For each question we consider 1 utterance before and 60 utterances after the question (_i.e._, \(k=1\) and \(l=60\) in Sec. 5.1). The maximum input length in all experiments is 512 tokens. We truncate long sequences and pad shorter ones to the maximum input length. An analysis of various parameters and merging strategies is provided in the Appendix. **Training Setup** We use the HuggingFace Transformers library Wolf et al. (2020) to implement our models. 
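To make Eq. (4) concrete, the snippet below is a minimal PyTorch-style sketch of how such a flat-hierarchical objective could be implemented on top of the three heads in Eqs. (1)-(3). It is our own illustration rather than the authors' released code: tensor shapes, the use of the predicted has-answer probability in place of \(\hat{y}_{HA}\), and the convention of pointing the span targets of unanswerable questions at position 0 ([CLS]) are assumptions, and the default weights are simply illustrative values from the ranges the paper tunes over.

```python
# Sketch of a flat-hierarchical loss in the spirit of Eq. (4); not the authors' code.
import torch
import torch.nn.functional as F

def flat_hierarchical_loss(start_logits, end_logits, ha_logits,
                           y_start, y_end, y_has_answer,
                           alpha=0.8, beta=0.2, gamma=0.8):
    """start_logits, end_logits: (batch, seq_len); ha_logits: (batch, 2).
    y_start, y_end: (batch,) token indices (0, i.e. [CLS], for unanswerable questions,
    by assumption); y_has_answer: (batch,) long tensor in {0, 1}."""
    # Per-example span loss: average of the start and end cross-entropies.
    loss_start = F.cross_entropy(start_logits, y_start, reduction="none")
    loss_end = F.cross_entropy(end_logits, y_end, reduction="none")
    loss_se = 0.5 * (loss_start + loss_end)

    # Per-example answerability loss.
    loss_ha = F.cross_entropy(ha_logits, y_has_answer, reduction="none")

    # Predicted probability that an answer exists (plays the role of y_hat_HA).
    p_has_answer = F.softmax(ha_logits, dim=-1)[:, 1]

    # Eq. (4): the span loss is weighted both by the prediction and by the label,
    # so it is down-weighted when the question is (predicted to be) unanswerable.
    loss = (alpha * loss_ha
            + beta * p_has_answer * loss_se
            + gamma * y_has_answer.float() * loss_se)
    return loss.mean()
```

In a full training loop, start_logits, end_logits and ha_logits would come from the three linear classifiers stacked on the pre-trained encoder, and the threshold-based answer decision would be applied at inference time only.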
**Training Setup** We use the HuggingFace Transformers library Wolf et al. (2020) to implement our models. We consider three pre-trained models: BERT (Devlin et al., 2019), DeBERTa (He et al., 2020), and ALBERT (Lan et al., 2019), to which we stack three linear classifiers (see Eq. (1), (2) and (3)), and implement the loss function as described in Eq. (4). All models are trained for two epochs and optimized via the ADAMW optimizer (Kingma and Ba, 2015). We employ a batch size of 8 and a learning rate of \(3\cdot 10^{-5}\). After training, we tune \(\alpha\), \(\beta\) and \(\gamma\) parameters (See Sec. 5.1) considering the values \(\{0.7,0.8\}\), \(\{0.2,0.3\}\), and \(\{0.7,0.8\}\) respectively. For the answer decision step, we consider the values \(\{0.6,0.7\}\) and \(\{0.8,0.9\}\) for \(\tau_{1}\) and \(\tau_{2}\) respectively. For all models, we consider the values \(\{200,250\}\) for the maximum answer length \(m\). All tuning steps described above are performed on our development set and then applied to the test set during evaluation. \begin{table} \begin{tabular}{l|cccccc|cccccc} \hline \hline & \multicolumn{6}{c|}{**Dev**} & \multicolumn{6}{c}{**Test**} \\ \hline **Model** & \multicolumn{2}{c}{**All Data**} & \multicolumn{2}{c}{**HasAns**} & \multicolumn{2}{c|}{**NoAns**} & \multicolumn{2}{c}{**All Data**} & \multicolumn{2}{c}{**HasAns**} & \multicolumn{2}{c}{**NoAns**} \\ \cline{2-13} & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** \\ \hline **Human Performance** & 53.0 & 67.3 & 46.4 & 62.8 & 88.3 & 88.9 & 53.9 & 68.3 & 47.8 & 64.3 & 88.2 & 88.6 \\ \hline **First Utterance** & 40.2 & 54.7 & 46.2 & 60.4 & 3.1 & 19.7 & 41.1 & 55.7 & 47.3 & 61.2 & 2.4 & 21.2 \\ \hline **BERT** & 61.9 & 75.2 & 59.9 & **74.8** & 73.9 & 77.7 & 62.0 & 74.7 & 60.6 & **74.5** & 70.7 & 76.1 \\ **BERT + FHL** & **62.9** & **75.7** & **60.2** & 74.6 & **79.6** & **82.5** & **62.3** & 74.7 & **60.8** & 74.4 & **72.2** & **76.9** \\ \hline **DeBERTa** & 51.8 & 75.5 & 48.5 & **75.1** & 72.1 & 77.9 & 53.8 & **75.9** & 51.2 & **75.9** & 69.8 & 76.0 \\ **DeBERTa + FHL** & **53.9** & **76.0** & **50.0** & 74.8 & **77.9** & **83.2** & **55.2** & 75.2 & **52.0** & 74.3 & **75.5** & **80.9** \\ \hline **ALBERT** & 63.1 & 75.9 & **60.5** & **75.0** & 79.2 & 82.0 & **63.6** & **75.9** & **61.4** & **75.0** & 77.5 & 81.5 \\ **ALBERT + FHL** & **64.0** & **76.4** & 60.3 & 74.5 & **86.5** & **88.0** & 63.2 & 75.5 & 60.4 & 74.1 & **80.7** & **84.4** \\ \hline \hline \end{tabular} \end{table} Table 3: Baseline comparison results. Models (standard span prediction objective _vs_. FHL), baselines and human performance on the development and the test data. We consider a question to be unanswerable if at least half of the judges tagged it as such. ### Evaluation Metrics Following SQuAD (Rajpurkar et al., 2016), we evaluate our performance by two metrics: Exact Match (EM) and macro-average F1 score. The answers in our dataset are characterized by long sentences (as opposed to short evidence); thus, we opt for calculating F1-score at the level of individual (ordered) words, instead of considering a bag-of-words. Recall and precision are computed by considering the overlap between predicted word indices and ground truth answers. Questions without an answer are assigned an F1 score of 1 if the model predicted "no answer" and 0 otherwise. In SQuAD, each model prediction is compared to \(n\) annotated answers for computing these metrics, and the maximum score is chosen as the question score. 
However, for computing human performance, each human annotation is compared to the other \(n-1\) annotations. Thus, the human performance and the model results are not comparable. To moderate this bias, we compare the model results to \(n\) subsets of \(n-1\) annotated answers, and the final scores are calculated as the average of the maximum scores of all subsets, similar to Reddy et al. (2019) and Choi et al. (2018). ### FHL Evaluation **Baseline Comparison Analysis** We conduct a comparative evaluation to assess the significance of our novel FHL loss (see Sec. 5.1). For each of the three baseline models (see Sec. 6.1) we compare the performance when optimizing for the standard span prediction objective against the performance when optimizing using FHL. Results of these experiments are reported in Tab. 3. _First Utterance_ refers to a naive model where the first utterance after the question is considered as the answer. As can be seen in Tab. 3, using the FHL objective significantly improves performance when no answer is available (_e.g._ 75.5 _vs._ 69.8 in EM and 80.9 _vs._ 76.0 in F1 with DeBERTa). When the answer is available, results remain mostly unchanged. The results observed demonstrate the effectiveness of FHL in improving the performance of any model when the data contains a significant proportion of unanswerable questions, as is the case with MeeQA. Note that overall results are much worse compared to other standard QA datasets like SQuAD, which leaves opportunities for improvement. Importantly, humans are still much better at detecting unanswerable questions, as our results suggest. **Ablation Analysis** For this experiment we consider three variants of FHL, obtained by removing one of the terms defined in (4) (each term is preceded by a scalar \(\alpha\), \(\beta\) or \(\gamma\) in (4)). Results of the ablation analysis are reported in Tab. 4. Importantly, removal of the loss term responsible for _answerability_ prediction, _i.e._, "No-HA", drops performance by over 25% in F1 and EM. Hence, incorporating it into FHL is crucial. ## 7 Conclusions In this paper, we introduce MeeQA, a large-scale dataset for extracting answers to natural questions in meeting transcripts. Unlike existing RC and QA datasets, MeeQA contains natural questions from real meetings, along with their answers, if they exist. Furthermore, to improve existing model performance on unanswerable questions, we propose a novel loss function designed to directly handle this type of question. Our experiments show that using FHL yields models that outperform others, especially on unanswerable questions. We hope to explore other use cases for this type of loss in the future. However, human performance on these questions is still better than our model's. We hope this work will encourage other researchers to explore this task, which is critical for extracting meaningful information from meetings. \begin{table} \begin{tabular}{c|c c} \hline \hline **Model** & **EM** & **F1** \\ \hline **No-HA** & 47.0 & 54.1 \\ **No-PSE** & 62.7 & 74.9 \\ **No-LSE** & 63.4 & 75.2 \\ \hline **Full model** & 63.2 & 75.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation model results for the test data with ALBERT. ## Ethics Consideration We propose a novel meeting transcripts question answering task, which is accompanied by a dataset, MeeQA. Both the transcripts of the meetings and the annotators recruited touch on the intellectual property rights and privacy rights of the original authors. 
As part of the dataset construction process, we ensure that the intellectual property and privacy rights of the original authors of the meetings are protected. All the meeting transcriptions we collected are publicly available. Additionally, the annotation process complies with the intellectual property and privacy rights of the recruited annotators.
2309.01195
A dispersive estimate of the $a_0(980)$ contribution to hadronic light-by-light scattering in $(g-2)_μ$
A dispersive implementation of the $a_0(980)$ resonance to $(g-2)_\mu$ requires the knowledge of the double-virtual $S$-wave $\gamma^*\gamma^*\to\pi\eta / K\bar{K}_{I=1}$ amplitudes. To obtain these amplitudes we used a modified coupled-channel Muskhelischvili-Omn\`es formalism, with the input from the left-hand cuts and the hadronic Omn\`es function. The latter were obtained using a data-driven $N/D$ method in which the fits were performed to the different sets of experimental data on two-photon fusion processes with $\pi\eta$ and $K\bar{K}$ final states. This yields the preliminary dispersive estimate $a_\mu^{HLbL}[a_0(980)]_{resc.}=-0.46(2)\times 10^{-11}$.
Oleksandra Deineka, Igor Danilkin, Marc Vanderhaeghen
2023-09-03T14:59:01Z
http://arxiv.org/abs/2309.01195v1
A dispersive estimate of the \(a_{0}(980)\) contribution to hadronic light-by-light scattering in \((g-2)_{\mu}\) ###### Abstract A dispersive implementation of the \(a_{0}(980)\) resonance to \((g-2)_{\mu}\) requires the knowledge of the double-virtual \(S\)-wave \(\gamma^{*}\gamma^{*}\to\pi\eta/K\bar{K}_{I=1}\) amplitudes. To obtain these amplitudes we used a modified coupled-channel Muskhelishvili-Omnes formalism, with the input from the left-hand cuts and the hadronic Omnes function. The latter were obtained using a data-driven \(N/D\) method in which the fits were performed to the different sets of experimental data on two-photon fusion processes with \(\pi\eta\) and \(K\bar{K}\) final states. This yields the preliminary dispersive estimate \(a_{\mu}^{\rm HLbL}[a_{0}(980)]_{\rm resc.}=-0.46(2)\times 10^{-11}\). ## 1 Introduction The tension between the presently ultra-precise measurements of the anomalous magnetic moment of the muon \((g-2)_{\mu}\) and the theoretical calculations amounts to around \(5.0\,\sigma\) difference [1] when compared to the theoretical value from the 2020 White Paper [2]. The source of the current theoretical error solely arises from contributions from hadronic vacuum polarization (HVP) and hadronic light-by-light scattering (HLbL). Apart from the pseudo-scalar pole contributions, further nontrivial contributions to HLbL arise from the two-particle intermediate states such as \(\pi\pi\), \(\pi\eta\), and \(K\bar{K}\). Currently, only the contributions from the \(\pi\pi_{I=0,2}\) and \(K\bar{K}_{I=0}\) channels have been considered in a dispersive manner [3; 4]. The isospin-0 part of this result can be understood as a model-independent implementation of the contribution from the \(f_{0}(500)\) and \(f_{0}(980)\) resonances. The contribution from the \(a_{0}(980)\) resonance arises from the rescattering of the \(\pi\eta/K\bar{K}_{I=1}\) states and necessitates knowledge of the double-virtual processes \(\gamma^{*}\gamma^{*}\to\pi\eta/K\bar{K}_{I=1}\). On the experimental side, currently, data is only available for the real photon case from the Belle Collaboration [5; 6]. The measurement of the photon-fusion processes with a single tagged photon is a part of the two-photon physics program of the BESIII Collaboration [7]. To describe the currently available data and provide theoretical predictions for the single- and double-virtual processes, we opt for the dispersive approach, which adheres to the fundamental properties of the \(S\)-matrix, namely, analyticity and coupled-channel unitarity. ## 2 Formalism To compute the HLbL contribution of \(a_{0}(980)\) to \((g-2)_{\mu}\), we adopt the formalism outlined in [3]. This approach yields the following master formula: \[a_{\mu}^{HLbL}=\frac{2\alpha^{3}}{3\pi^{2}}\int\limits_{0}^{\infty}dQ_{1}\int \limits_{0}^{\infty}dQ_{2}\int\limits_{-1}^{1}d\tau\sqrt{1-\tau^{2}}\,Q_{1}^{3 }\,Q_{2}^{3}\sum\limits_{i=1}^{12}T_{i}(Q_{1},Q_{2},Q_{3})\,\bar{\Pi}_{i}(Q_{1},Q_{2},Q_{3})\,, \tag{1}\] where \(\bar{\Pi}_{i}\) are scalar functions containing the dynamics of the HLbL amplitude, \(T_{i}\) denote known kernel functions, and \(\tau\) is defined as \(Q_{3}^{2}=Q_{1}^{2}+2Q_{1}Q_{2}\tau+Q_{2}^{2}\). 
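Schematically, the three-fold integral in Eq. (1) can be evaluated with standard quadrature once the kernels \(T_i\) and the scalar functions \(\bar{\Pi}_i\) are supplied; the Python sketch below only illustrates the bookkeeping, with the physics input left as placeholder callables and a finite radial cutoff assumed for the numerical integration.

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant (assumed value)

def a_mu_hlbl(Pi_bar, T, n_rad=48, n_ang=24, q_max=10.0):
    """Sketch of the master formula Eq. (1):
    a_mu = (2 alpha^3 / 3 pi^2) int dQ1 dQ2 dtau sqrt(1-tau^2) Q1^3 Q2^3
           sum_i T_i(Q1,Q2,Q3) Pi_bar_i(Q1,Q2,Q3), with Q3^2 = Q1^2 + 2 Q1 Q2 tau + Q2^2.
    `T` and `Pi_bar` are placeholders returning length-12 arrays; q_max is an
    assumed cutoff replacing the semi-infinite radial integrations."""
    xr, wr = np.polynomial.legendre.leggauss(n_rad)   # nodes/weights on [-1, 1]
    q = 0.5 * q_max * (xr + 1.0)                      # map radial nodes to [0, q_max]
    wq = 0.5 * q_max * wr
    xt, wt = np.polynomial.legendre.leggauss(n_ang)   # tau nodes on [-1, 1]

    total = 0.0
    for q1, w1 in zip(q, wq):
        for q2, w2 in zip(q, wq):
            for tau, w3 in zip(xt, wt):
                q3 = np.sqrt(q1**2 + 2.0 * q1 * q2 * tau + q2**2)
                integrand = np.sqrt(1.0 - tau**2) * q1**3 * q2**3 * np.dot(
                    T(q1, q2, q3), Pi_bar(q1, q2, q3))
                total += w1 * w2 * w3 * integrand
    return 2.0 * ALPHA**3 / (3.0 * np.pi**2) * total
```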
For the \(S\)-wave, the only contributing scalar functions can be written as \[\bar{\Pi}_{3}^{J=0} =\frac{1}{\pi}\int\limits_{s_{\rm in}}^{\infty}ds^{\prime}\frac{ -2}{\lambda_{12}(s^{\prime})(s^{\prime}+Q_{3}^{2})^{2}}\left(4s^{\prime}{\rm Im }\bar{h}^{(0)}_{++,++}(s^{\prime})-(s^{\prime}-Q_{1}^{2}+Q_{2}^{2})(s^{\prime} +Q_{1}^{2}-Q_{2}^{2}){\rm Im}\bar{h}^{(0)}_{00,++}(s^{\prime})\right)\,,\] \[\bar{\Pi}_{9}^{J=0} =\frac{1}{\pi}\int\limits_{s_{\rm in}}^{\infty}ds^{\prime}\frac{ 4}{\lambda_{12}(s^{\prime})(s^{\prime}+Q_{3}^{2})^{2}}\left(2\,{\rm Im}\bar{h} ^{(0)}_{++,++}(s^{\prime})-(s^{\prime}+Q_{1}^{2}+Q_{2}^{2})\,{\rm Im}\bar{h}^{ (0)}_{00,++}(s^{\prime})\right)\,, \tag{2}\] plus crossed versions. Here \(\lambda_{12}(s)\equiv\lambda(s,Q_{1}^{2},Q_{2}^{2})\) is a Kallen triangle function. Since \(a_{0}(980)\) is known to have a dynamical coupled-channel \(\pi\eta/K\bar{K}\) origin, the inclusion of \(K\bar{K}\) intermediate states is necessary. In this case, the unitarity relation implies \[{\rm Im}\bar{h}^{(0)}_{1,\lambda_{1}\lambda_{2},\lambda_{3}\lambda_{4}}(s)= \bar{h}^{(0)}_{1,\lambda_{1}\lambda_{2}}(s)\rho_{\pi\eta}(s)\,\bar{h}^{(0)*}_{ 1,\lambda_{3}\lambda_{4}}(s)+\bar{k}^{(0)}_{1,\lambda_{1}\lambda_{2}}(s)\rho_{ KK}(s)\,\bar{k}^{(0)*}_{1,\lambda_{3}\lambda_{4}}(s)\,, \tag{3}\] where \(\rho_{\pi\eta}(\rho_{KK})\) is the phase space factor of \(\pi\eta\,(K\bar{K})\) system, and \(\bar{h}^{(0)}_{1,\lambda\lambda^{\prime}}\) (\(\bar{k}^{(0)}_{1,\lambda\lambda^{\prime}}\)) denotes the \(I=1\), \(J=0\) Born subtracted (e.g. \(\bar{k}\equiv k-k^{\rm\ Born}\)) partial-wave (p.w.) amplitude of the \(\gamma^{*}(Q_{1}^{2})\gamma^{*}(Q_{2}^{2})\to\pi\eta\,(K\bar{K})\) process. These p.w. amplitudes contain kinematic constraints and therefore it is important to find a transformation to a new basis of amplitudes which can be used in a modified Muskhelishvili-Omnes (MO) method [8]. For the \(S\)-wave, the amplitudes which are free from kinematic constraints can be written as [3]1 Footnote 1: To maintain consistency with Eq.(2) we follow the conventions from [3], which slightly differ from those in [9]. \[\bar{h}^{(0)}_{i=1,2}=\frac{\bar{h}^{(0)}_{++}\mp Q_{1}Q_{2}\bar{h}^{(0)}_{00} }{s-s^{(\pm)}_{\rm kin}}\,,\quad s^{(\pm)}_{\rm kin}\equiv-(Q_{1}\pm Q_{2})^{ 2}\,, \tag{4}\] with \(Q_{i}\equiv\sqrt{Q_{i}^{2}}\). In Eq.(4) we omitted the isospin index for simplicity. In the case of a single virtual or real photons, this constraint arises from the requirement of the soft-photon theorem. Similarly to \(\gamma^{*}\gamma^{*}\to\pi\pi/K\bar{K}\) process [9; 10], the coupled-channel dispersion relation for the \(\gamma^{*}\gamma^{*}\to\pi\eta/K\bar{K}\) process with \(J=0\), \(I=1\) can be written as follows \[\left(\begin{array}{c}h^{(0)}_{i}(s)\\ k^{(0)}_{i}(s)\end{array}\right)=\left(\begin{array}{c}0\\ k^{(0),\,{\rm Born}}_{i}(s)\end{array}\right)+\Omega^{(0)}(s)\Bigg{[}-\int \limits_{s_{\rm kin}}^{\infty}\frac{ds^{\prime}}{\pi}\,\,\frac{{\rm Disc}( \Omega^{(0)}(s^{\prime}))^{-1}}{s^{\prime}-s}\left(\begin{array}{c}0\\ k^{(0),\,{\rm Born}}_{i}(s^{\prime})\end{array}\right)\Bigg{]}\,, \tag{5}\] where only kaon-pole left-hand cut is currently taken into account. The generalization of the kaon-pole left-hand contribution \(k^{(0),{\rm Born}}_{i}\) to the case involving off-shell photons is achieved by the product of the scalar QED result with the electromagnetic kaon form factors [11]. The latter is parameterized using the VMD model. 
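As an illustration, the form-factor dressing of the Born term can be as simple as multiplying the scalar-QED result by one monopole per photon virtuality; the single vector-meson mass below is a placeholder, and the actual kaon form-factor parameterization of [11] may differ.

```python
def vmd_form_factor(Q2, m_V=0.77):
    """Single-monopole VMD-type form factor F(Q^2) = m_V^2 / (m_V^2 + Q^2);
    the vector-meson mass (GeV) is an assumed illustrative value."""
    return m_V**2 / (m_V**2 + Q2)

def dressed_born(k_born_sqed, Q1_sq, Q2_sq):
    """Kaon-pole left-hand cut for off-shell photons: scalar-QED Born amplitude
    multiplied by one form factor per photon (schematic)."""
    return k_born_sqed * vmd_form_factor(Q1_sq) * vmd_form_factor(Q2_sq)
```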
We have verified that within the range, which is crucial for the \(a_{\mu}\) calculation, VMD is consistent with a simple monopole fit to the existing data and the dispersive estimation from [12]. To obtain the Omnes matrix \(\Omega^{(0)}(s)\), which encodes the hadronic \(\pi\eta/K\bar{K}\) rescattering effects, we utilize the coupled-channel dispersion relation for the partial wave amplitude. The latter is numerically solved using the \(N/D\) ansatz [13], with input from the left-hand cuts. When bound states or Castillejo-Dalitz-Dyson (CDD) poles are absent, the Omnes matrix is the inverse of the \(D\)-matrix. We parameterize the left-hand cuts in a model-independent manner, expressing them as an expansion in a suitably constructed conformal mapping variable [14; 15], which is chosen to map the left-hand cut plane onto the unit circle. In the absence of experimental \(\pi\eta/K\bar{K}\) data, the coefficients of this conformal expansion can be estimated theoretically from \(\chi\)PT, as demonstrated in [16; 17; 18]. However, for the \(\pi\eta/K\bar{K}\) system, it is necessary to rely on the slowly convergent \(SU(3)\,\chi\)PT. Instead, we directly determine the unknown coefficients by fitting to \(\gamma\gamma\to\pi\eta/K_{S}\,K_{S}\) data [5; 6] and use \(\chi\)PT predictions only as additional constraints. Particularly, for the \(\pi\eta\to K\bar{K}\) channel, we impose an Adler zero and ensure that the \(\pi\eta\to K\bar{K}\) amplitude remains consistent with \(\chi\)PT at \(s_{th}=(m_{\pi}+m_{\eta})^{2}\). Furthermore, for the \(\pi\eta\to\pi\eta\) channel, we employ the \(\chi\)PT scattering length as a constraint. In all cases, the NLO result with low-energy coefficients from [19] is considered as the central value, with an error range defined by the spread between LO and NLO results. ## 3 Results and Outlook To reconstruct the physical \(\gamma\gamma\to K_{S}\,K_{S}\) cross section, the input for \(I=0\), \(S\)-wave amplitude \(k_{0,++}^{(0)}(s)\) is taken from the coupled-channel \(\pi\pi/K\bar{K}_{I=0}\) analysis [20]. Since we are aiming to describe \(\gamma\gamma\to\pi\eta/K_{S}\,K_{S}\) data in the region from threshold up to 1.4 GeV, we also incorporate the \(D\)-wave resonances \(f_{2}(1270)\) and \(a_{2}(1320)\) using the Breit-Wigner parametrization, similar to the approach in [21]. We find that with as few as \((2,2,2)\)\(S\)-wave parameters in \((11,12,22)\) channels (\(1=\pi\eta,2=K\bar{K}\)) we obtain the fit with \(\chi^{2}/\mathrm{d.o.f.}=0.83\). The resulting total cross sections for \(\gamma\gamma\to\pi\eta/K_{S}\,K_{S}\) processes are illustrated in Fig. 1. Through analytical continuation into the complex plane we find the pole on the Riemann sheet II, corresponding to the \(a_{0}(980)\) resonance with \(\sqrt{s_{a_{0}(980)}}=1.06-i0.058\) GeV. With the obtained \(\gamma^{*}\gamma^{*}\to\pi\eta/K\bar{K}\) amplitudes in hand, we can now proceed to calculate the \(a_{0}(980)\) contribution to the HLbL in \((g-2)\). The preliminary result is \[a_{\mu}^{\rm HLbL}[a_{0}(980)]_{\rm rescatering}=-0.46(2)\times 10^{-11}\,, \tag{6}\] where the uncertainty currently covers only the sum-rule violation (reflecting the choice of the HLbL basis [3]). 
It is useful to compare the obtained dispersive result with the outcome from the narrow width approximation \(a_{\mu}^{\rm HLbL}[a_{0}(980)]_{\rm NWA}=-\left([0.3,0.6]^{+0.2}_{-0.1} \right)\times 10^{-11}\)[4], where the range reflects the variation in the scale of transition form factor parametrisation taken from the quark model [22]. It is planned to further add new experimental data into the current analysis, in particular, \(\gamma\gamma\to K^{+}K^{-}\) data from BESIII [23]. In addition, the hadronic \(\pi\eta/K\bar{K}\) rescattering will be further constrained by including the existing data for the \(\phi\to\gamma\pi\eta\)[24] and \(\eta^{\prime}\to\pi\pi\eta\)[25] decays.
2307.09065
Learning Adaptive Neighborhoods for Graph Neural Networks
Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data. However, many works assume a given graph structure. When the input graph is noisy or unavailable, one approach is to construct or learn a latent graph structure. These methods typically fix the choice of node degree for the entire graph, which is suboptimal. Instead, we propose a novel end-to-end differentiable graph generator which builds graph topologies where each node selects both its neighborhood and its size. Our module can be readily integrated into existing pipelines involving graph convolution operations, replacing the predetermined or existing adjacency matrix with one that is learned, and optimized, as part of the general objective. As such it is applicable to any GCN. We integrate our module into trajectory prediction, point cloud classification and node classification pipelines resulting in improved accuracy over other structure-learning methods across a wide range of datasets and GCN backbones.
Avishkar Saha, Oscar Mendez, Chris Russell, Richard Bowden
2023-07-18T08:37:25Z
http://arxiv.org/abs/2307.09065v1
# Learning Adaptive Neighborhoods for Graph Neural Networks ###### Abstract Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data. However, many works assume a given graph structure. When the input graph is noisy or unavailable, one approach is to construct or learn a latent graph structure. These methods typically fix the choice of node degree for the entire graph, which is suboptimal. Instead, we propose a novel end-to-end differentiable graph generator which builds graph topologies where each node selects both its neighborhood and its size. Our module can be readily integrated into existing pipelines involving graph convolution operations, replacing the predetermined or existing adjacency matrix with one that is learned, and optimized, as part of the general objective. As such it is applicable to any GCN. We integrate our module into trajectory prediction, point cloud classification and node classification pipelines resulting in improved accuracy over other structure-learning methods across a wide range of datasets and GCN backbones. We will release the code. ## 1 Introduction The success of Graph Neural Networks (GNNs) [6, 1, 24], has led to a surge in graph-based representation learning. GNNs provide an efficient framework to learn from graph-structured data, making them widely applicable where data can be represented as a relation or interaction system. They have been effectively applied in a wide range of tasks [25], [33] including particle physics [4] and protein science [10]. In a GNN, each node iteratively updates its state by interacting with its neighbors, typically through message passing. However, a fundamental limitation of such architectures is the assumption that the underlying graph is provided. While node or edge features may be updated during message passing, the graph topology remains fixed, and its choice may be suboptimal for various reasons. For instance, when classifying nodes on a citation network, an edge connecting nodes of different classes can diminish classification accuracy. These edges can degrade performance by propagating irrelevant information across the graph. When no graph is explicitly provided, such domain knowledge can be exploited to learn structures optimized for the task at hand [8, 3, 9, 7]. However, in tasks where knowledge of the optimal graph structure is unknown, one common practice is to generate a \(k\)-nearest neighbor (\(k\)-NN) graph. In such cases, \(k\) is a hyperparameter and tuned to find the model with the best performance. For many applications, fixing \(k\) is overly restrictive as the optimal choice of \(k\) may vary for each node in the graph. While there has been an emergence of approaches which learn the graph structure for use in downstream GNNs [43, 13, 15], all of them treat the node degree \(k\) as a fixed hyperparameter. We propose a general differentiable graph-generator (DGG) module for learning graph topology with or without an initial edge structure. Rather than learning graphs with fixed node degrees \(k\), our module generates local topologies with an adaptive neighborhood size. This module can be placed within any graph convolutional network, and jointly optimized with the rest of the network's parameters, learning topologies which favor the downstream task without hyper-parameter selection or indeed any additional training signal. The primary contributions of this paper are as follows: 1. 
We propose a novel, differentiable graph-generator (DGG) module which jointly optimizes both the neighborhood size, and the edges that should belong to each neighborhood. Note a key limitation of existing approaches [43, 15, 13, 8, 3, 7, 37] is their inability to learn neighborhood sizes. 2. Our DGG module is directly integrable into any pipeline involving graph convolutions, where either the given adjacency matrix is noisy, or unavailable and must be determined heuristically. In both cases, our DGG generates the adjacency matrix as part of the GNN training and can be trained end-to-end to optimize performance on the downstream task. Should a good graph structure be known, the generated adjacency matrix can be learned to remain close to it while optimizing performance. 3. To demonstrate the power of the approach, we integrate our DGG into a range of SOTA pipelines -- without modification -- across different datasets in trajectory prediction, point cloud classification and node classification and show improvements in model accuracy. ## 2 Related work **Graph Representation Learning:** GNNs [1] are a broad class of neural architectures for modelling data which can be represented as a set of nodes and relations (edges). Most use message-passing to build node representations by aggregating neighborhood information. A common formulation is the Graph Convolution Network (GCNs) which generalizes the convolution operation to graphs [16, 5, 38, 11]. More recently, the Graph Attention Network (GAT) [35] utilizes a self-attention mechanism to aggregate neighborhood information. However, these works assumed that the underlying graph structure is fixed in advance, with the graph convolutions learning features that describe pre-existing nodes and edges. In contrast, we simultaneously learn the graph structure while using our generated adjacency matrix in downstream graph convolutions. The generated graph topology of our module is jointly optimized alongside other network parameters with feedback signals from the downstream task. **Graph Structure Learning:** In many applications, the optimal graph is unknown, and a graph is constructed before training a GNN. One question to ask is: "Why isn't a fully-connected graph suitable?" Constructing adjacency matrices weighted by distance or even an attention mechanism [35] over a fully-connected graph incorporates many task-irrelevant edges, even if their weights are small. While an attention mechanism can zero these out -- i.e., discover a subgraph within the complete graph -- discovering this subgraph is challenging given the combinatorial complexity of graphs. A common remedy is to sparsify a complete graph by selecting the \(k\)-nearest neighbors (\(k\)-NN). Although this can prevent the propagation of irrelevant information between nodes, the topology of the constructed graph may have no relation to the downstream task. Not only can irrelevant edges still exist, but pairs of relevant nodes may remain unconnected and can lead GCNs to learn representations with poor generalization [43]. To overcome this, recent works constructed bespoke frameworks which learn the graph's adjacency matrix for specific tasks. For instance, in human pose estimation, some methods [31, 20] treat the elements of the adjacency matrix as a set of learnable weights. 
However, as each element is treated as a learnable parameter, the learned adjacency matrix is not tied to the representation space and can only be used in tasks where there is a known correspondence between training and test nodes. This is not the case for many vision and graph tasks. Others [15, 7, 17] have employed variational inference frameworks to sample the entire adjacency matrix. Franceschi _et al._[9] jointly learned the graph structure and the parameters of a GCN by approximately solving a bilevel program. NodeFormer [37] and IDGL [3] instead learned latent topologies using multi-head attention [34]. There are two key differences between these methods and ours. First, we simplify optimization by factorizing the adjacency matrix distribution from which we sample the neighborhood for each node, as opposed to sampling the entire matrix. Second, these methods are bespoke frameworks specifically designed for node and graph classification. They leverage knowledge of the task in their loss functions, such as graph smoothness and sparsity [3]. As these methods are tailored to graph-based tasks only, they cannot be dropped into any GCN without modification, limiting their applicability to non-graph tasks like vision. In contrast, our module is both GCN- and task-agnostic, and can be integrated into any GCN pipeline and trained using the downstream task loss. In contrast to the bespoke frameworks above, recent methods [43, 21, 13] took a more module-based approach similar to ours. As these approaches learned the graph structure entirely from the downstream task loss, there is less domain knowledge to leverage compared to methods constructed for specific tasks. Consequently, sparsity is often induced through a \(k\)-NN graph. Here, \(k\) is a scalar hyperparameter selected to control the learned graph's node degree. Unlike these works, we generate neighborhoods of varying size by learning a distribution over the edges _and_ over the node degree \(k\). Each node samples its top-\(k\) neighbors (where \(k\) is now a continuous variable), allowing it to individually select its neighborhood and the edges that should belong to it, in a differentiable manner. Additionally, a known 'ideal' graph structure can be used as intermediate supervision to further constrain the latent space. ## 3 Method Here, we provide details of our differentiable graph generation (DGG) module. We begin with notation and the statistical learning framework guiding its design, before describing the module, and how it is combined with graph convolutional backbone architectures. **Notation** We represent a graph of \(N\) nodes as \(G=(V,E)\), where \(V\) is the set of nodes or vertices, and \(E\) the edge set. A graph's structure can be described by its adjacency matrix \(A\), with \(a_{ij}=1\) if an edge connects nodes \(i\) and \(j\) and \(a_{ij}=0\) otherwise. This binary adjacency matrix \(A\) is directed, and potentially asymmetrical. **Problem definition** We reformulate the baseline prediction task based on a fixed graph with an adaptive variant where the graph is learned. Typically, such baseline tasks make learned predictions \(Y\) given a set of input features \(X\) and a graph structure \(A\) of node degree \(k\): \[Y=Q_{\phi}(X,A(k)), \tag{1}\] where \(Q_{\phi}\) is an end-to-end neural network parameterized by learnable weights \(\phi\). These formulations require a predetermined graph structure \(A(k)\), typically based on a choice of node degree \(k\), and take \(A(k)\) as additional input to the model.
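For reference, the fixed \(k\)-NN construction that such baselines rely on can be sketched as follows (PyTorch; the Euclidean metric and the value of \(k\) are the usual hand-tuned choices):

```python
import torch

def knn_adjacency(x, k=5):
    """Build a fixed k-NN adjacency A(k) from node features x of shape (N, d):
    a_ij = 1 if j is among the k nearest neighbours of i (excluding i itself).
    This is the predetermined, non-learned structure that a learned generator
    would replace."""
    dist = torch.cdist(x, x)                      # pairwise Euclidean distances
    dist.fill_diagonal_(float("inf"))             # exclude self-loops
    idx = dist.topk(k, largest=False).indices     # k closest nodes per row
    a = torch.zeros(x.size(0), x.size(0), device=x.device)
    a.scatter_(1, idx, 1.0)
    return a
```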
In contrast, we _learn_ both \(A\) and \(k\) in an end-to-end manner, and use them to make predictions \(Y\). As graphs are inherently binary, with edges either present or absent, they are not directly optimizable using gradient descent. Instead, we consider a distribution of graphs, \(\mathcal{G}\), which then induce a distribution of labels, \(\mathcal{Y}\), in the downstream task. This distribution takes the factorized form: \[P(Y|X)=\sum_{A\in\mathcal{G}}\sum_{k\in\mathbb{N}^{|V|}}Q_{\phi}(X,A)P(A|X,k)P( k|X), \tag{2}\] where \(P(k|X)\) is the distribution of node degree \(k\) given \(X\) (i.e., the choice of \(k\) in \(k-\)NN), \(P(A|X,k)\) the distribution of graph structures \(A\) conditioned on the learned \(k\) and input \(X\), and \(P(Y|X)\) is the downstream distribution of labels conditioned on data \(X\). For clarity, the adjacency \(A\) represents a subgraph of a complete graph over \(X\), and \(k\) is a multidimensional variable controlling the number of top-\(k\) neighbors for each node individually. To avoid learning individual probabilities for each possible graph \(A\) in an exponential state space, we further assume that \(P(A|X,k)\) has a factorized distribution where each neighborhood is sampled independently, i.e. \(P(A|X,k)=\prod_{i\in V}P(a_{i}|X,k)\). We model the distributions over adjacencies \(A\) and \(k\) with tractable functions: \[P(Y|X)\approx\sum_{A}\sum_{k}Q_{\phi}(X,A)Q_{\theta}(A|X,k)Q_{\rho}(k|X), \tag{3}\] where \(Q_{\theta}\) and \(Q_{\rho}\) are functions parameterized by \(\theta\) and \(\rho\) to approximate \(P(A|X,k)\) and \(P(k|X)\), respectively. In Fig. 1, we illustrate the functions of our method compared to the typical prediction task in Eq. 1. Using this formulation, we train the entire system end-to-end to minimize the expected loss when sampling \(Y\). This can be efficiently performed using stochastic gradient descent. In the forward pass, we first sample a subgraph/set of nodes \(X\) from the space of datapoints, and conditioning on \(X\) we sample \(A\) and compute the associated label \(Y\). When computing the gradient step, we update \(Q_{\phi}(X,A)\) as normal and update the distributions using two standard reparametrization tricks: one for discrete variables [12] such that \(Q_{\theta}(A|X,k)\) can generate differentiable graph samples \(A^{\prime}\), and another for continuous variables [14] of \(k^{\prime}\) drawn from \(Q_{\rho}(k|X)\): \[\begin{split} P(Y|X)\approx\sum_{A^{\prime}}\sum_{k^{\prime}}Q_{ \phi}(X,A^{\prime}),\\ \text{where }A^{\prime}\sim Q_{\theta}(A|X,k^{\prime})\text{ and }k^{ \prime}\sim Q_{\rho}(k|X).\end{split} \tag{4}\] As both the graph structure \(A^{\prime}\) and variable \(k^{\prime}\) samplers are differentiable, our DGG module can be readily integrated into pipelines involving graph convolutions and jointly trained end-to-end. ### Differentiable Graph Generation Our differentiable graph-generator (DGG) takes a set of nodes \(V=\{v_{1},...,v_{N}\}\) with \(d\)-dimensional features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) and generates a (potentially) asymmetric adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). This adjacency matrix can be used directly in any downstream graph convolution operation (see Module Instantiation below). As illustrated by Fig. 2, the DGG module consists of four components: 1. 
**Node encoding:** this component projects the input node features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) to a latent representation \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\), which forms the primary representation space of the model. 2. **Edge ranking**: this takes the latent node features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) and generates a matrix representing a stochastic ordering of edges \(\mathbf{E}\in\mathbb{R}^{N\times N}\) drawn from a learned distribution over the edge-probabilities (\(A^{\prime}\sim Q_{\theta}(A|X,k^{\prime})\) from Eq. 4). 3. **Degree estimation**: this component estimates the number of neighbors each individual node is connected to. It takes as input the latent node features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) and generates random samples \(k\in\mathbb{R}^{N}\) drawn from a learned distribution over the node degree (\(k^{\prime}\sim Q_{\rho}(k|X)\) from Eq. 4). 4. **Differentiable top-\(k\) edge selector**: takes \(k\) and the edge-samples \(e\) and performs a soft thresholding that probabilistically selects the most important elements, based on the output of the Edge-ranking, to output an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). Figure 1: (Left) A typical prediction task using graphs \(Y=Q_{\phi}(X,A)\) where \(A\) and \(k\) are predetermined. (Right) Our reformulation \(P(Y|X)\approx\sum_{A}\sum_{k}Q_{\phi}(X,A)Q_{\theta}(A|X,k)Q_{\rho}(k|X)\) which learns a distribution over \(A\) and \(k\) alongside the downstream task. Figure 2: Our differentiable graph-generator (DGG) takes input nodes \(\mathbf{X}\) and generates an adjacency matrix \(\mathbf{A}\). It consists of: (1) **Degree-estimator**: generates samples of \(k_{i}\) for each node, (2) **Edge-ranker**: generates edge samples \(\mathbf{e}_{i}\) for each node and (3) **Top-k selector**: takes \(k_{i}\) and edge samples \(\mathbf{e}_{i}\) and selects top-k elements in a differentiable manner to output a final adjacency \(\mathbf{A}\). We now explain these steps in more detail: **Node encoding** We construct a single latent space from the input node features, and use it for edge ranking and degree estimation. We first map input node features \(\mathbf{X}\in\mathbb{R}^{N\times d}\) into latent features \(\mathbf{\hat{X}}\in\mathbb{R}^{N\times d^{\prime}}\) using a multi-layer perceptron (MLP) \(N_{\phi}\) with weights \(\phi\): \(\mathbf{\hat{X}}=N_{\phi}(\mathbf{X})\). These latent features form the input for the rest of the DGG. Furthermore, they are output by the DGG and passed to the GCN downstream to prevent vanishing gradients. **Edge ranking** The edge ranking returns an implicit distribution of edge orderings, from which we sample the neighborhood for each node. For each node \(v_{i}\), it draws a set of scores \(\mathbf{e}_{i}=\{e_{ij}\}_{j}^{N}\) quantifying its relevance to all nodes \(v_{j}\in V\), including itself. To generate differentiable edge samples \(\mathbf{e}_{i}\), we use the Gumbel-Softmax [12]. Before locally scoring each edge embedding \(e_{ij}\in\mathbf{e}_{i}\) for node \(v_{i}\), we implement a global stage which constructs edge embeddings with both local and global dependencies: 1. Using latent node features \(\hat{\mathbf{x}}_{i}\in\hat{\mathbf{X}}\), determine local edge embeddings \(\hat{\mathbf{c}}_{ij}\in\mathbb{R}^{d^{\prime}}\) by passing each pair of node features through an MLP \(l_{\phi}\): \(\hat{\mathbf{c}}_{ij}=l_{\phi}(\hat{\mathbf{x}}_{i},\hat{\mathbf{x}}_{j})\). 
These embeddings now form a complete graph \(\mathcal{G}\) over the nodes, with each edge attributed \(\hat{\mathbf{c}}_{ij}\). 2. As each edge embedding \(\hat{\mathbf{c}}_{ij}\in\mathbf{C}\) is calculated independently of the others, we refine it to account for its dependencies on adjacent edges. We do this through edge-to-edge message passing. However, we avoid computing dependencies between all edges of the complete graph for two reasons: first, some edges may not have any common nodes, so passing messages between them could propagate irrelevant information, and secondly, it could be prohibitively expensive. To restrict message-passing between adjacent edges only, we first compute the adjoint graph \(\mathcal{H}\) of the complete graph \(\mathcal{G}\). In the adjoint \(\mathcal{H}\), each edge is associated with a node, and two nodes are connected if and only if their corresponding edges in \(\mathcal{G}\) have a node in common. The adjoint's adjacency \(A^{\mathcal{H}}\) can be calculated using its incidence matrix \(L\), \(A^{\mathcal{H}}=L^{T}L-2I\). In the adjoint, each node embedding \(\hat{\mathbf{c}}_{i}\) is then updated using an average of its neighboring nodes \(\hat{\mathbf{c}}_{j}\) and passed through an MLP \(h_{\phi}\): \[\hat{\mathbf{c}}_{i}^{\prime}=\sum_{j\in\mathcal{N}(i)}h_{\phi}(\hat{\mathbf{c}}_{i}\parallel\hat{\mathbf{c}}_{i}-\hat{\mathbf{c}}_{j})\] (5) Having computed edge embeddings \(\mathbf{C}\in\mathbb{R}^{N\times N\times d^{\prime}}\) with global dependencies, we rank these edges for each node. Without loss of generality, we focus on a single node \(v_{i}\in V\), with latent features \(\hat{\mathbf{x}}_{i}\in\mathbb{R}^{d}\). We implement the approximation function \(Q_{\theta}(A|X,k)\) of the Edge-ranker as follows: 1. Using edge embeddings \(\hat{\mathbf{c}}_{ij}\in\mathbb{R}^{d^{\prime}}\), calculate edge probabilities \(\mathbf{p}_{i}\in\mathbb{R}^{N}\) for node \(v_{i}\) using an MLP \(m_{\theta}\): \[\mathbf{p}_{i}=\{m_{\theta}(\hat{\mathbf{c}}_{ij})|\forall j\in N\}.\] (6) Each element \(p_{ij}\in\mathbf{p}_{i}\) represents a similarity measure between the latent features of node \(v_{i}\) and \(v_{j}\). In practice, any distance measure can be used here. 2. Using Gumbel-Softmax over the edge probabilities \(\mathbf{p}_{i}\in\mathbb{R}^{N}\), we generate differentiable samples \(\mathbf{e}_{i}\in\mathbb{R}^{N}\) with Gumbel noise \(g\): \[\mathbf{e}_{i}=\left\{\frac{\exp((\log(p_{ij})+g_{i})/\tau)}{\sum_{j}\exp((\log(p_{ij})+g_{i})/\tau)}\Big{|}\forall j\in N\right\},\] (7) \[g_{i}\sim\mathrm{Gumbel}(0,1)\] where \(\tau\) is a temperature hyperparameter controlling the interpolation between a discrete one-hot categorical distribution and a continuous categorical density. When \(\tau\to 0\), the edge energies \(e_{ij}\in\mathbf{e}_{i}\) approach a degenerate distribution. The temperature \(\tau\) is important for inducing sparsity, but given the exponential function, this results in a single element in \(\mathbf{e}_{i}\) being given much more weighting than the rest, i.e., it approaches a one-hot argmax over \(\mathbf{e}_{i}\). As we want a variable number of edges to be given higher importance and others to be close to zero, we select a higher temperature and use the top-\(k\) selection procedure (detailed below) to induce sparsity. This additionally avoids the high-variance gradients induced by lower temperatures. **Degree estimation** A key limitation of existing graph generation methods [13, 15, 43] is their use of a fixed node degree \(k\) across the entire graph. 
This can be suboptimal as mentioned previously. In our approach, rather than fixing \(k\) for the entire graph, we sample it per node from a learned distribution. Focusing on a single node as before, the approximation function \(Q_{\rho}(k|X)\) of the Degree-estimator works as follows: 1. We approximate the distribution of latent node features \(\hat{\mathbf{x}}_{i}\in\mathbb{R}^{d}\) following a VAE-like formulation [14]. We encode its mean \(\mathbf{\mu}_{i}\in\mathbb{R}^{d}\) and variance \(\mathbf{\sigma}_{i}\in\mathbb{R}^{d}\) using two MLPs \(M_{\rho}\) and \(S_{\rho}\), and then reparametrize with noise \(\epsilon\) to obtain latent variable \(\mathbf{z}_{i}\in\mathbb{R}^{d}\): \[\begin{split}\mathbf{\mu}_{i},\mathbf{\sigma}_{i}&=M_{\rho} (\hat{\mathbf{x}}_{i}),S_{\rho}(\hat{\mathbf{x}}_{i}),\\ \mathbf{z}_{i}&=\mathbf{\mu}_{i}+\mathbf{\epsilon}_{i}\mathbf{ \sigma}_{i},\epsilon_{i}\sim\mathcal{N}(0,1).\end{split}\] (8) 2. Finally, we concatenate each latent variable \(\mathbf{z}_{i}\in\mathbb{R}^{d}\) with the L1-norm of the edge samples \(\mathbf{h}_{i}=||\mathbf{e}_{i}||_{1}\) and decode it into a scalar \(k_{i}\in\mathbb{R}\) with another MLP \(D_{\rho}\), representing a continuous relaxation of the neighborhood size for node \(v_{i}\): \[k_{i}=D_{\rho}(\mathbf{z}_{i})+\mathbf{h}_{i}.\] (9) Since \(\mathbf{h}_{i}\) is a summation of a node's edge probabilities, it can be understood as representing an initial estimate of the node degree which is then improved by combining with a second node representation \(\mathbf{z}_{i}\) based entirely on the node's features. Using the edge samples to estimate the node degree links these representation spaces back to the primary latent space of node features \(\hat{\mathbf{X}}\). **Top-\(k\) Edge-Selector** Having sampled edge weights, and node degrees \(k\), this function selects the top-\(k\) edges for each node. The top-\(k\) operation, i.e. finding the indices corresponding to the \(k\) largest elements in a set of values, is a piecewise constant function and cannot be directly used in gradient-based optimization. Previous work [40] framed the top-\(k\) operation as an optimal transport problem, providing a smoothed top-\(k\) approximator. However, as their function is only defined for discrete values of \(k\) it cannot be optimized with gradient descent. As an alternative that is differentiable with respect to \(k\), we relax the discrete constraint on \(k\), and instead use it to control the \(x\)-axis value of the inflection point on a smoothed-Heaviside function (Fig. 3). For a node \(v_{i}\in V\), of smoothed degree \(k_{i}\in\mathbb{R}\) and edges \(\mathbf{e}_{i}\in\mathbb{R}^{N}\), our Top-\(k\) Edge Selector outputs an adjacency vector \(\mathbf{a}_{i}\in\mathbb{R}^{N}\) where the \(k\) largest elements from \(\mathbf{e}_{i}\) are close to \(1\), and the rest close to \(0\). Focusing on a single node \(v_{i}\) as before, the implementation is as follows: 1. Draw 1D input points \(\mathbf{d}_{i}=\{1,...,N\}\) where \(N\) is the number of nodes in \(V\). 2. Pass \(\mathbf{d}_{i}\) through a hyperbolic tangent (tanh) which serves as a smooth approximation of the Heaviside function: \[\mathbf{h}_{i}=1-0.5*\left\{1+\tanh(\lambda^{-1}d_{i}-\lambda^{-1}k_{i}) \right\},\] (10) here \(\lambda>0\) is a temperature parameter controlling the gradient of the function's inflection point. As \(\lambda\to 0\), the smooth function approaches the Heaviside step function. 
The first-\(k\) values in \(\mathbf{h}_{i}=\{h_{ij}\}_{j}^{N}\) will now be closer to 1, while the rest closer to 0. 3. Finally, for each node \(i\) we sort its edge-energies \(\mathbf{e}_{i}=\{e_{ij}\}_{j}^{N}\) in descending order, multiply by \(\mathbf{h}_{i}=\{h_{ij}\}_{j}^{N}\) and then restore the original order to obtain the final adjacency vector \(\mathbf{a}_{i}=\{a_{ij}\}_{j}^{N}\). Stacking \(\mathbf{a}_{i}\) over all nodes \(v_{i}\in V\) creates the final adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\). **Symmetric adjacency matrix** If the adjacency matrix \(A\) must be symmetric, this can be enforced by replacing it with \(A_{sym}\) where: \(\mathbf{A}_{sym}=(\mathbf{A}+\mathbf{A}^{T})/2\). **Straight through Top-\(k\) Edge Selector** To make our final adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) discrete, we follow the trick used in the Straight-Through Gumbel Softmax [12]: we output the discretized version of \(\mathbf{A}\) in the forward pass and the continuous version in the backwards pass. For the discretized version in the forward pass, we replace the smooth-Heaviside function in Eq. 10 with a step function. **Module Instantiation:** The DGG module can be easily combined with any graph convolution operation. A typical graph convolution [16] is defined as follows: \(\mathbf{X}^{\prime}=\hat{\mathbf{D}}^{-1/2}\hat{\mathbf{A}}\hat{\mathbf{D}}^{ -1/2}X\mathbf{\Theta}\). Here, \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) denotes the adjacency matrix with inserted self-loops, \(\hat{\mathbf{D}}\) its diagonal degree matrix and \(\mathbf{\Theta}\) its weights. To use this graph convolution with the DGG, we simply use our module to generate the adjacency matrix \(\hat{\mathbf{A}}\). ## 4 Experiments We evaluate our DGG on node classification, point cloud classification and trajectory prediction. We chose these tasks as they demonstrate the wide applicability of our module: (1) graphs for node classification require models that can generate edge structures from noisy input graphs, (2) point cloud classification tasks have no input graph structures and (3) trajectory prediction additionally requires models which can handle a variable number of nodes per batch. We compare against state-of-the-art structure learning methods in each domain. As far as we know, our structure-learning approach is the only one that can be easily applied without modification to any GCN pipeline in such a range of tasks. ### Node classification Beginning with node classification, we conduct ablations examining the behavior of different parts of the DGG, followed by comparisons to other state-of-the-art structure learning approaches. In the supplementary we include experiments investigating the effect of the DGG on downstream models under the addition of noisy edges to input graphs. We perform these experiments under both transductive and inductive scenarios, as well as semi-supervised and fully-supervised settings. **Datasets** In the transductive setting, we evaluate on three citation benchmark datasets Cora, Citeseer and Pubmed [26] introduced by [41]. In an inductive setting, we evaluate on Reddit [42] and PPI [11]. Further dataset details can be found in the supplementary. **Baselines and Implementation** As our DGG is a GCN-agnostic module that can be integrated alongside any graph convolution operation, we compare its performance to both other GCN-agnostic approaches and bespoke structure-learning architectures. 
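Because the module only has to hand a dense adjacency matrix to whatever convolution follows, integration amounts to swapping the graph-construction step. The following compact sketch shows how a DGG-style generator (edge scores perturbed with Gumbel noise, a per-node degree estimate, and the smooth top-\(k\) gate of Eq. 10) can feed a standard GCN propagation; the layer sizes, the simplified MLPs, and the softplus degree decoder are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDGG(nn.Module):
    """Compact sketch of a DGG-style generator: pairwise edge scores with
    Gumbel noise (cf. Eq. 7), a per-node degree estimate (cf. Eq. 9) and a
    smooth top-k gate (cf. Eq. 10)."""
    def __init__(self, in_dim, hid_dim=64, tau=1.0, lam=1.0):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                      nn.Linear(hid_dim, 1))
        self.degree_mlp = nn.Linear(hid_dim, 1)
        self.tau, self.lam = tau, lam

    def forward(self, x):
        n = x.size(0)
        h = self.encode(x)                                         # latent node features
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.edge_mlp(pair).squeeze(-1)                   # (N, N) edge scores
        u = torch.rand_like(logits).clamp(1e-9, 1 - 1e-9)
        gumbel = -torch.log(-torch.log(u))                         # Gumbel(0, 1) noise
        e = F.softmax((logits + gumbel) / self.tau, dim=-1)        # per-node edge samples
        k = F.softplus(self.degree_mlp(h)).squeeze(-1) + e.sum(-1) # soft node degree
        # smooth top-k: sort edges, gate roughly the first k with a soft Heaviside
        e_sorted, order = e.sort(dim=-1, descending=True)
        ranks = torch.arange(1, n + 1, device=x.device).float().expand(n, n)
        gate = 1.0 - 0.5 * (1.0 + torch.tanh((ranks - k.unsqueeze(-1)) / self.lam))
        a = torch.zeros_like(e).scatter_(-1, order, e_sorted * gate)
        return 0.5 * (a + a.t()), h                                # symmetrised adjacency

def gcn_propagate(a, x, weight):
    """Standard GCN propagation D^{-1/2} (A + I) D^{-1/2} X W, with A supplied
    by the generator instead of a fixed k-NN graph."""
    a_hat = a + torch.eye(a.size(0), device=a.device)
    d_inv_sqrt = a_hat.sum(-1).clamp(min=1e-9).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(-1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return a_norm @ x @ weight
```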
To compare against other GCN-agnostic methods, we integrate our DGG into four representative GCN backbones: GCN [16], GraphSage [11], GAT [35] and GCNII [2]. On these backbones, we compare against other GCN-agnostic structure learning methods: DropEdge [29], NeuralSparse [43], PTDNet [21]. Then we compare against bespoke architectures IDGL [3], LDS [9], SLAPS [8], NodeFormer [37] and VGCN [7]. To make our comparison fair against these bespoke architectures which learn the structure specifically for node classification, we integrate our DGG into a GCN backbone that is comparable to the bespoke architecture in design. Please see the supplementary for implementation details. **Training details** A node classification model partitions the latent space of node embeddings into separate classes. However, when message-passing, there is one phenomenon of the input graph that can limit classification accuracy: two nodes with different classes but similar features and an edge connecting them. Classifying these nodes is challenging as their feature similarity can be compounded by passing messages between them. The goal of the DGG is to move such nodes apart in the latent space such that there is no edge and communication between them. However, traversing the loss landscape from the initial random initialization of the network to one where the model is able to discriminate between these nodes can take several iterations using only the downstream classification loss. To speed up training, we add an intermediate loss to further partition the latent space. We do this by supervising the adjacency matrix generated by the DGG to remove all edges between classes and only maintain those within a class. We then anneal this loss over the training cycle, eventually leaving only the downstream classification loss. We provide more details in the supplementary. #### 4.1.1 Ablations In Table 1, we explore the effect of disabling different components of our DGG module when integrated into a GCN [16] for node classification: 1. _DGG without Degree Estimation and Differentiable Top-\(k\) Edge Selection_ -- we remove the Degree Estimator and instead fix \(k\) to select the top-\(k\) stochastically ordered edges. 2. _DGG with deterministic Edge Ranking_ -- we remove the noise in Eq. 7 of the Edge Ranker. 3. _DGG with deterministic Degree Estimation_ -- we remove the noise in Eq. 8 of the Degree Estimator. We perform these on Cora [41] and omit the annealed intermediate loss during training. Table 1 shows the benefit of learning a distribution over the node degree. When learning it deterministically, the accuracy decreases by 0.5%. This becomes significantly worse when the node degree is fixed for the entire graph rather than learned per node. Note also the sensitivity with respect to the choice of \(k\): a fixed node degree of \(k=10\) or \(k=1\) reduces accuracy by almost 30% compared with a fixed degree of 5. This is due to the graph convolution operation: as it has no adaptive weighting mechanism for a node's neighborhood, each of the neighbors is given the same weight. Naturally, this leads to information sharing between unrelated nodes, reducing the quality of node representation after message-passing. In contrast, by learning a distribution over the node degree we are able to select only the most relevant neighbors, even though these are then weighted equally in the graph convolution. 
Finally, the inclusion of noise in any of the DGG components does increase accuracy, but only by approximately 0.5% -- demonstrating both its benefit and the robustness of the DGG without it. #### 4.1.2 Results **Comparison to GCN-agnostic modules** In Table 2 we compare against GCN-agnostic structure learning methods. For fair comparison, we present two versions of our method: DGG-wl trained with the downstream loss only and DGG* trained with both the downstream and intermediate loss. DGG improves performance across all baselines and datasets. Against other approaches, DGG-wl generally outperforms the state-of-the-art NeuralSparse and PTDNet-wl (both trained with only the downstream loss). This can be attributed to our method for modelling sparsity, which explicitly lets each node select the size of its neighborhood based on the downstream training signal. This training signal helps partition the node representation space, while the estimated node-degree additionally prevents communication between distant nodes. Although PTDNet-wl does this implicitly through its attention mechanism, discovering this sparse subgraph of the input graph is challenging given its complexity. NeuralSparse on the other hand selects \(k\) for its entire generated subgraph, which is both suboptimal and requires additional hyperparameter tuning. \begin{table} \begin{tabular}{c c} \hline Model & Accuracy \\ \hline Fixed node degree, k = \(\{1,5,10,100\}\) & \(\{49.7,78.9,55.0,37.0\}\) \\ With deterministic Edge Ranking and Degree Estimation & 82.4 \\ With deterministic Edge Ranking & 82.7 \\ With deterministic Degree Estimation & 82.8 \\ DGG & **83.2** \\ \hline \end{tabular} \end{table} Table 1: Ablation study. DGG integrated into a GCN for node classification on Cora [41]. Figure 3: The differentiable Top-\(k\) Edge Selector. This component uses the node degree \(k_{i}\) output by the Degree Estimator to control the inflection point on a smooth-Heaviside function and uses it to select the top edges from \(\mathbf{e}_{i}\) output by the Edge Ranker. This produces an adjacency vector \(\mathbf{a}_{i}\) for each node, and stacking \(\mathbf{a}_{i}\) across all nodes produces the final adjacency matrix \(\mathbf{A}\). Comparing methods which enforce additional constraints on the adjacency matrix, DGG* demonstrates larger accuracy gains than PTDNet*. PTDNet* regularizes its adjacency matrix to be of low rank, as previous work [30] has shown that the rank of an adjacency matrix can reflect the number of clusters. This regularizer reasons about the graph's topology globally. While this may aid generalization, the accuracy difference may then be attributed to our intermediate loss providing stronger signals to discriminate between nodes with similar features but different classes (and therefore remove the edges between them). Furthermore, their regularizer uses the sum of the top-\(k\) singular values during training, where \(k\) again is a hyperparameter tuned to each dataset individually. Our method requires no additional parameters to be chosen. Finally, in Table 3 we compare the low-rank constraint of PTDNet with our intermediate annealed loss. Our intermediate loss ('DGG-wl + int. loss') outperforms the low-rank constraint ('DGG-wl + low rank'). However, using both constraints ('DGG-wl + int. loss + low rank') increases classification accuracy further, suggesting the edges removed by both methods are complementary. 
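One plausible form of the intermediate annealed supervision discussed above is sketched below; the paper describes it only at a high level (with details in the supplementary), so the target construction and the linear annealing schedule here are assumptions.

```python
import torch
import torch.nn.functional as F

def intermediate_adjacency_loss(a_pred, labels, epoch, total_epochs):
    """Supervise the generated adjacency to keep intra-class edges and drop
    inter-class ones, with a weight annealed to zero over training. This is a
    sketch of the annealed intermediate loss described in the training details;
    the exact formulation may differ."""
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()  # (N, N) target mask
    weight = max(0.0, 1.0 - epoch / float(total_epochs))               # linear annealing
    return weight * F.binary_cross_entropy(a_pred.clamp(1e-6, 1 - 1e-6), same_class)
```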
**Comparison with bespoke architectures** In Table 4 we compare against bespoke architectures specifically designed for node classification. As each of these methods uses different experiment settings, we train our DGG-integrated architecture separately for each. See the supplementary for details on each setting and reasons for our choice of backbone. Our performance gains here can generally be attributed to two factors: (1) our intermediate loss on the adjacency matrix and (2) our adjacency matrix factorization, where we learn the neighborhood for each node. Our intermediate loss particularly benefits from the experimental settings adopted by the other methods as they use larger training splits involving half the validation graph. Additionally, constructing the adjacency matrix by learning nodewise neighborhoods restricts the graph search space, making optimization easier. However, we note that some of these other methods are designed for node classification on graphs which are orders of magnitude larger than Cora and Citeseer. In such cases, factorizing the adjacency per node, as we do, may be unfeasible. ### Trajectory Prediction We consider four datasets covering a range of scenarios from basketball to crowded urban environments. On each, we integrate our DGG into a SOTA GCN trajectory prediction pipeline and compare results to another task-agnostic structure learning approach, DGM [13]. **Datasets** We evaluate on four trajectory prediction benchmarks. 1. ETH [27] and UCY [18] -- 5 subsets of widely used real-world pedestrian trajectories. 2. STATS SportVU [32] -- multiple NBA seasons tracking trajectories of basketball players over a game. 3. Stanford Drone Dataset (SDD) [28] -- top-down scenes across multiple areas at Stanford University. Further details on these datasets can be found in the supplementary. **Baselines and Implementation** We integrate our DGG module into two state-of-the-art trajectory prediction pipelines: Social-STGCNN [22] and DAGNet [23]. Our DGG is placed within both networks to generate the adjacency matrix on the fly and forms part of their forward and backward passes. Please see the supplementary for implementation details. **Evaluation metrics.** Model performance is measured with Average Displacement Error (ADE) and Final Displacement Error (FDE). ADE measures the average Euclidean distance between the predicted and ground-truth positions over the entire trajectory, while the FDE is that of the last timestep only. **Results** In Table 5, the integration of our DGG into Social-STGCNN reduces ADE/FDE compared to both the baseline and the integration of DGM. In Tables 5 and 6 we demonstrate similar gains over DGM when integrated into DAGNet. First, this shows the benefit of inducing sparsity when message-passing over a distance-weighted adjacency matrix like Social-STGCNN or even an attention mechanism like DAGNet. The larger error reduction of our DGG compared to DGM may be attributed to DGM's use of a fixed node-degree \(k\) across its learned graph. While this can prevent the propagation of irrelevant information across the graph in some cases, in others it might limit the context available to certain nodes. We provide qualitative analysis in the supplementary. ### Point Cloud Classification We evaluate on another vision task of point cloud classification for models which use GCNs. This task differs from the previous two as predictions are made for the entire graph as opposed to node-wise. 
As with our trajectory prediction experiments, we integrate our DGG into SOTA classification architectures and compare against the other task-agnostic graph-learning module, DGM [13].

**Datasets** We evaluate on ModelNet40 [39], consisting of CAD models for a variety of object categories.

**Baselines and Implementation** We integrate our DGG into a SOTA ResGCN [19] and DGCNN [36]. Both models use a \(k\)-NN sampling scheme to construct their graphs. We simply replace this sampler with our DGG and keep the rest of the network and training protocol the same.

**Results** Our results in Table 7 demonstrate the benefits of learning an adaptive neighborhood size across the latent graph. DGM [13] learns a fully-connected latent graph and then imposes a fixed node degree of \(k=20\) across it (i.e. selecting the top 20 neighbors for each node). This marginally improves upon the baselines ResGCN [19] and DGCNN [36], which both also use fixed node degrees \(k\). In contrast, we learn a distribution over the node degree from which we sample each node's neighborhood size. As shown in Table 7, the node degree varies in our models, with a standard deviation of around 5-7 across both baselines. Our accuracy gains over the baselines and DGM can be attributed to this variance in neighborhood sizes across the graph. These gains can be understood by viewing an input point cloud as a composition of object parts: building semantic representations for different parts may naturally require varying amounts of contextual points. For instance, the wheels of a car might be identifiable with a smaller neighborhood than the car's body. This may explain why an adaptive neighborhood size is helpful in this case.

## 5 Conclusion

We have presented a novel approach for learning graph topologies, and shown how it obtains state-of-the-art performance across multiple baselines and datasets for trajectory prediction, point cloud classification and node classification. The principal advantage of our approach is that it can be combined with any existing graph convolution layer, even in the presence of noisy, incomplete or unavailable edge structures.

## Acknowledgements

This project was supported by the EPSRC project ROSSINI (EP/S016317/1) and studentship 2327211 (EP/T517616/1).
\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline & & \multicolumn{2}{c}{**Original**} & \multicolumn{2}{c}{**DGM [13] Gain (\%)**} & \multicolumn{2}{c}{**DGG Gain (\%)**} \\ **Split** & **Team** & ADE & FDE & ADE & FDE & ADE & FDE \\ \hline — & ATK & — & 4.29 & -0.4 & -0.2 & **6.7** & **5.1** \\ — & DEF & 2.09 & 2.97 & -0.5 & -0.1 & **9.7** & **6.4** \\ — & ATK & 2.03 & 3.98 & 0.1 & 0.1 & **7.2** & **8.2** \\ — & DEF & 1.53 & 3.07 & 0.2 & 0.3 & **21.4** & **19.1** \\ 40-10 & ATK & 0.81 & 1.71 & 1.3 & 0.9 & **15.5** & **17.0** \\ 40-10 & DEF & 0.72 & 1.49 & 0.8 & 0.8 & **10.9** & **16.2** \\ \hline Mean & — & 1.65 & 2.92 & 0.3 & 0.3 & **11.9** & **12.0** \\ \hline \hline \end{tabular} \end{table} Table 6: ADE/FDE metrics on the SportVU Basketball dataset using DAGNet. For DGM [13], \(k=3\).

\begin{table} \begin{tabular}{l|l|c c|c} \hline \hline **Baseline** & **Method** & **Mean degree** & **S.D. degree** & **Accuracy** \\ \hline ResGCN [19] & Original & 9 & 0 & 93.3 \\ & DGM [13] & 20 & 0 & 93.5 \\ & DGG & 14.8 & 7.4 & **94.4** \\ \hline DGCNN [36] & Original & 40 & 0 & 92.9 \\ & DGM [13] & 20 & 0 & 93.3 \\ & DGG & 19.3 & 5.2 & **93.8** \\ \hline \hline \end{tabular} \end{table} Table 7: Point Cloud classification on ModelNet40 with our module and DGM [13] integrated into two different point cloud labelling architectures.
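Complementing Table 7, the following is a minimal sketch of the difference between a fixed \(k\)-NN sampler, as used by the baselines, and a per-node, sampled neighborhood size. The clamped-Gaussian degree distribution here is only a stand-in for illustration, not the learned distribution in the DGG.

```python
import numpy as np

def knn_graph(points: np.ndarray, k: int) -> list[np.ndarray]:
    """Fixed-k neighbourhoods, as in the ResGCN / DGCNN baselines."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return [np.argsort(row)[1:k + 1] for row in d]          # index 0 is the point itself

def adaptive_graph(points: np.ndarray, mean_k: float = 15.0, std_k: float = 6.0,
                   seed: int = 0) -> list[np.ndarray]:
    """Per-node neighbourhood sizes drawn from a (stand-in) degree distribution."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    ks = np.clip(rng.normal(mean_k, std_k, size=len(points)).round().astype(int),
                 1, len(points) - 1)
    return [np.argsort(row)[1:k + 1] for row, k in zip(d, ks)]
```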
2304.10064
$\mathcal{PT}$-Symmetry breaking in quantum spin chains with exceptional non-Hermiticities
Since the realization of quantum systems described by non-Hermitian Hamiltonians with parity-time (PT) symmetry, interest in non-Hermitian, quantum many-body models has steadily grown. Most studies to-date map to traditional quantum spin models with a non-Hermiticity that arises from making the model parameters complex or purely imaginary. Here, we present a new set of models with non-Hermiticity generated by splitting a Hermitian term into two Jordan-normal form parts, and the perturbations are confined to one or two sites. We present exact diagonalization results for the PT-threshold in such models, and provide an analytical approach for understanding the numerical results. Surprisingly, with non-Hermitian potentials confined to two or even a single site, we find a robust PT threshold that seems insensitive to the size of the quantum spin chain. Our results provide a pathway to experimentally feasible non-Hermitian quantum spin chains where the confluence of many-body effects and non-Hermiticity effects can be observed.
Jacob Muldoon, Yogesh N. Joglekar
2023-04-20T03:03:58Z
http://arxiv.org/abs/2304.10064v2
# \(\mathcal{PT}\)-Symmetry breaking in quantum spin chains with exceptional non-Hermiticities ###### Abstract Since the realization of quantum systems described by non-Hermitian Hamiltonians with parity-time (\(\mathcal{PT}\)) symmetry, interest in non-Hermitian, quantum many-body models has steadily grown. Most studies to-date map to traditional quantum spin models with a non-Hermiticity that arises from making the model parameters complex or purely imaginary. Here, we present a new class of models with non-Hermiticity generated by splitting a Hermitian term into two Jordan-normal form parts. We present exact diagonalization results for the \(\mathcal{PT}\)-threshold in such models, and provide an analytical approach for understanding the numerical results. Surprisingly, with non-Hermitian potentials confined to two or even a single site, we find a robust \(\mathcal{PT}\) threshold that is insensitive to the size of the quantum spin chain. Our results provide a pathway to experimentally feasible non-Hermitian quantum spin chains where the confluence of many-body effects and non-Hermiticity effects can be observed. ## I Introduction Since the seminal discovery of Bender and co-workers 25 years ago [1], the field of non-Hermitian systems has dramatically flourished. Research initially focused on continuum, non-relativistic Schrodinger equations with complex (often, purely imaginary) potentials that were invariant under combined operations of parity and time-reversal, i.e. \(V(x)=V^{*}(-x)\)[2; 3; 4]. Such \(\mathcal{PT}\)-symmetric Hamiltonians showed purely real spectra at small non-Hermiticity, going over to complex-conjugate spectra at large non-Hermiticity [5; 6]. Experiments in wave systems (optics [7; 8; 9], acoustics [10], and the like [11; 12] with balanced, spatially separated gain and loss, provided a simple physical interpretation for \(\mathcal{PT}\)-symmetric Hamiltonians as effective models for open systems [13; 14]. From this vantage point, the \(\mathcal{PT}\)-symmetry breaking transition marks the concomitant emergence of amplifying and decaying modes in an open system. Thus, in the classical domain, \(\mathcal{PT}\)-symmetric Hamiltonians are often modeled with purely anti-Hermitian potentials that signify local amplification or absorption. Over the years, these ideas have been generalized to time-periodic models [15; 16; 17], non-Markovian models [18; 19], and synthetic degrees of freedom [20; 21], all in the classical domain. In the quantum domain, creation of balanced gain and loss potentials is precluded by thermal fluctuations associated with the dissipation [22], and even at zero temperature, the quantum noise associated with linear amplifiers [23; 24]. Instead, the coherent, non-unitary dynamics generated by \(\mathcal{PT}\)-symmetric Hamiltonians is simulated by mode-selective losses [25; 26], Hamiltonian dilation [27], or unitary dilation [28] methods. Most recently, it was realized that a Lindbladian, minimal quantum system [29; 30; 31], when post-selected on trajectories that do not undergo quantum jumps [32; 33], is described by a non-Hermitian, \(\mathcal{PT}\)-symmetric Hamiltonian with state-dependent, trace-preserving non-linearity [34]. This technique has enabled the exploration of non-Hermitian Hamiltonians in quantum, two-level systems [35; 36; 37]. With this approach, an exponentially decaying no-jump probability caps the duration of the coherent, non-unitary dynamics, and post-selection gives rise to anti-Hermitian potentials. 
Therefore, theoretical studies of non-Hermitian, quantum many-body models have typically commenced by changing parameter(s) in their Hermitian counterparts from real to complex [38; 39; 40; 41; 42; 43; 44; 45; 46]. Here we present a new set of models with non-Hermiticity created by splitting a Hermitian potential into two, Jordan-form terms and then spatially separating them. For example, in a transverse field quantum Ising model, this means \(\gamma\sigma_{m}^{\pi}\rightarrow\gamma(\sigma_{m-n}^{+}+\sigma_{m+n}^{-})\) where \(\sigma_{m}^{\pm}\) represents the relevant Pauli operator on site \(m\). Note that \(\sigma^{\pm}\equiv(\sigma^{x}\pm i\sigma^{y})/2\) are rank-1, Jordan normal form matrices; they represent single-qubit Hamiltonian at an exceptional point. In quantum spin systems with finite number of levels, the mapping between raising/lowering operators \(\sigma^{\pm}\) and gain/loss is ambiguous due to the presence of a ceiling in the spectrum. On the contrary, in bosonic models such as two coupled oscillators, this splitting procedure will generate non-Hermitian gain/loss potentials such as \(\gamma(a_{1}^{\dagger}+a_{2})\). We emphasize that the operators \(a_{1}^{\dagger},a_{2}\) are terms in the Hamiltonian, not dissipators routinely used in Lindblad dynamics to model spontaneous emission and absorption; the latter give rise to anti-Hermitian potentials [47]. In this paper, we investigate the \(\mathcal{PT}\)-symmetry breaking threshold in transverse field Ising models with finite number of spins \(N\) and its dependence on parameters using exact diagonalization method. Other traditional techniques such as perturbation theory or tensor networks are ideal for probing a small, ground-state-proximate subspace of the exponentially-large Hilbert space. Determining \(\mathcal{PT}\)-breaking threshold--where the Hamiltonian first develops complex-conjugate eigenvalues -- requires knowledge of the entire spectrum, since the states that develop complex eigenvalues are typically not at the bottom (or the top) of the band [48]. The plan of the paper is as follows. In Sec. II we introduce the canonical quantum Ising chain and its non-Hermitian variations. The non-Hermitian variations on it consist of perturbations on one or two sites. In addition to the \(\mathcal{PT}\)-threshold, we also present the flow of eigen values across the \(\mathcal{PT}\)-symmetry breaking transition. In Sec. III we present a simple analytical approach that explains the surprisingly robust \(\mathcal{PT}\) threshold results from Sec. II. We conclude the paper in Sec. IV with higher-spin generalizations, brief feasibility analysis, and summary. The \(\mathcal{PT}\)-threshold results are valid for chains with \(N>2\) where the bulk-vs-edge sites and periodic-vs-open boundary conditions are unambiguously defined, but do not depend on \(N\). ## II Non-Hermitian quantum Ising models The canonical quantum Ising model with \(N\) sites is described by the Hamiltonian \[H_{0}(J,h_{z})=-\frac{J}{4}\sum_{i=1}\sigma_{i}^{x}\sigma_{i+1}^{x}-\frac{h_{z} }{2}\sum_{i=1}\sigma_{i}^{z} \tag{1}\] where \(J>0\) is the ferromagnetic coupling between adjacent spins, the uniform transverse field is along the \(z\)-axis, and the boundary term \(\sigma_{N}^{x}\sigma_{1}^{x}\) is included when periodic-boundary conditions are required [49; 50]. 
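As the thresholds reported below are obtained by exact diagonalization of small chains, a minimal NumPy sketch of how Eq. (1) can be assembled from Kronecker products may be helpful; this is our own illustration for open boundary conditions, not the code used for the results.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op: np.ndarray, site: int, n: int) -> np.ndarray:
    """Embed a single-site operator at `site` (0-indexed) in an n-spin chain."""
    mats = [op if i == site else id2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_h0(n: int, J: float, hz: float) -> np.ndarray:
    """Eq. (1) with open boundary conditions."""
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        h -= (J / 4) * site_op(sx, i, n) @ site_op(sx, i + 1, n)
    for i in range(n):
        h -= (hz / 2) * site_op(sz, i, n)
    return h
```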
This exactly solvable model undergoes a quantum phase transition from a spontaneously-broken \(\mathbb{Z}_{2}\)-symmetry phase to a paramagnetic phase with short-range correlations as the transverse field strength crosses \(h_{z}=J/2\)[51; 52]. In this section, we investigate its varied non-Hermitian extensions. ### Two-site Perturbations with Hermitian or anti-Hermitian limit (\(h_{z}=0\)) Consider the non-Hermitian extension \[H_{\text{eff}}(J,h_{z}|\gamma) =H_{0}(J,h_{z})+\Gamma_{pq}^{+}(\gamma), \tag{2}\] \[\Gamma_{pq}^{+} =\gamma(\sigma_{p}^{+}+\sigma_{q}^{-})\neq\Gamma_{pq}^{+\dagger}, \tag{3}\] where \(\gamma>0\) is the strength of the exceptional perturbations \(\sigma^{\pm}\), and \(1\leq p,q\leq N\) denote their locations along the chain. When \(p=q\), the perturbation Eq.(3) is trivially Hermitian and the system has no threshold. Since \(H_{\text{eff}}\) has real entries, its characteristic polynomial has real coefficients and its eigenvalues are real or complex conjugates [53]. Figure 1 summarizes the \(\mathcal{PT}\)-threshold phase diagram of such quantum spin chain in the absence of transverse field. It involves calculating the spectrum of \(H_{\text{eff}}(J,0|\gamma)\) by exact diagonalization, and then recursively increasing the strength of \(\Gamma^{+}\) until complex-conjugate eigenvalues emerge at the threshold \(\gamma_{\text{PT}}\). Figure 1(a) shows the dimensionless threshold \(\gamma_{\text{PT}}/J\) for an \(N=7\) open chain as a function of \((p,q)\), but the results remain the same for any chain size \(N>2\). Ignoring the trivial Hermitian case (\(p=q\); black, filled circles), the threshold results can be grouped into three categories: Adjacent sites (\(|p-q|=1\)) : \(\gamma_{\text{PT}}=0\) (red circles); (4) Edge sites (\(|p-q|>1\)) : \(\gamma_{\text{PT}}=J/4\) (blue circles); (5) Bulk sites (\(|p-q|>1\)) : \(\gamma_{\text{PT}}=J/2\) (green circles). (6) When periodic boundary conditions are imposed on Eq.(1), the "edge sites" category, Eq.(5), disappears; the threshold is zero when the perturbations \(\sigma^{\pm}\) are on adjacent sites and \(\gamma_{\text{PT}}=J/2\) when they do not share a bond. These results are robust with respect to the number of spins (\(N>2\)), (open or periodic) boundary conditions, or the distance \(|p-q|\geq 2\) and the locations of the two sites along the chain. This surprising nonzero threshold implies that the \(\mathcal{P}\)-operator is not the spatial reflection, \(k\leftrightarrow N+1-k\). Indeed, since the Hamiltonian \(H_{\text{eff}}(J,h_{z}|\gamma)\) is purely real, its antilinear symmetry can be chosen as \(\mathcal{P}=\mathbbm{1}_{N}\) and \(\mathcal{T}=*\) (complex conjugation). To understand the mechanism of \(\mathcal{PT}\)-symmetry breaking under exceptional perturbations, we show the flow of eigenvalues \(\text{Re}(E)\) (blue lines) and \(\text{Im}(E)\) (red lines) as a function of \(\gamma/J\) in Figs. 1(b)-(f). Since the eigenvalues occur in complex-conjugate pairs, it is sufficient to plot \(\text{Im}(E)>0\). When the potentials are maximally separated, \((p,q)=(1,N)\), starting from \(N\) bands with varying degeneracies, a set of central bands undergo level attraction and develop imaginary parts at \(\gamma=J/4\), Fig. 1(b). The ground state (or its particle-hole symmetric counterpart) does not participate in \(\mathcal{PT}\)-symmetry breaking. 
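Continuing the sketch above, the scan described earlier (diagonalize \(H_{\text{eff}}(J,0|\gamma)\) and increase \(\gamma\) until complex-conjugate eigenvalues appear) can be written as follows; the grid and tolerance are illustrative choices, not those used to produce the figures.

```python
# Continues the sketch above (reuses site_op and ising_h0).
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-

def pt_threshold(n, J, hz, p, q, gammas=np.linspace(0.0, 1.0, 201), tol=1e-8):
    """Smallest gamma on the grid at which H_eff = H_0 + gamma*(sigma_p^+ + sigma_q^-)
    develops eigenvalues with a nonzero imaginary part (tolerance is illustrative)."""
    h0 = ising_h0(n, J, hz)
    pert = site_op(sp, p, n) + site_op(sm, q, n)
    for g in gammas:
        if np.max(np.abs(np.linalg.eigvals(h0 + g * pert).imag)) > tol:
            return g
    return None

# For J = 1, hz = 0 and non-adjacent bulk sites (say p = 2, q = 4 of a 7-spin
# open chain, 0-indexed), the scan should return a value close to J/2, cf. Eq. (6).
```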
Figure 1(c), with \((p,q)=(6,4)\) shows that for bulk, non-adjacent sites, again, \(\mathcal{PT}\)-symmetry breaks with multitude of bands across the energy-level spectrum at \(\gamma=J/2\). The trivial case of a Hermitian perturbation, \(p=1=q\) shows expected linear level-splitting, Fig. 1(d). When the perturbation sites share a bond, \((p,q)=(2,1)\) linearly increasing \(\text{Im}(E)\) signal the zero threshold, Fig 1(e). We note that the bands developing complex eigenvalues are neither particle-hole symmetric nor at the bottom or the top. Lastly, when the edge perturbation sites are not maximally separated, \((p,q)=(6,1)\), the flow of eigenvalues is different, Fig. 1(f), from the results in Fig. 1(a). Next, we replace the \(\Gamma_{pq}^{+}\) potential by \[\Gamma_{pq}^{-}=\gamma(\sigma_{p}^{+}-\sigma_{q}^{-}) \tag{7}\] which reduces to an anti-Hermitian term \(\Gamma_{pp}^{-}=i\gamma\sigma_{y}\) when \(p=q\). When \(p\neq q\), the \(\mathcal{PT}\)-threshold \(\gamma_{\text{PT}}(p,q)=\gamma_{\text{PT}}(q,p)\) is given by Eqs.(4)-(6). When \(p=q\), the resulting threshold \(\gamma_{\text{PT}}(p,p)=0\) for a bulk site, whereas \(\gamma_{\text{PT}}=J/4\) for an edge site. Once again, these results are robust against the number of spins \(N>2\), the distance \(|p-q|>1\) between the perturbations, and nature of boundary conditions. Here, too, since Eq.(7) has purely real entries, \(\mathcal{PT}=\mathbbm{1}_{N}*\) gives the corresponding anti-linear symmetry. The simple expressions for the \(\mathcal{PT}\)-threshold, Eqs.(4)-(6), hint at an analytical solution. At this point, it is important to recall that the spectrum of the Hamiltonian Eq.(1) is traditionally obtained by using the Jordan-Wigner transformation to map the problem onto non-interaction fermions [51; 52]. Under this mapping, however, the exceptional perturbations \(\sigma_{p}^{+},\sigma_{q}^{-}\) create non-Hermitian, fermionic string operators, thereby rendering such an approach useless. ### Single-site Perturbations (\(h_{z}=0\)) Inspired by the repeating structure of bands in Fig. 1, and the finite \(\mathcal{PT}\)-threshold obtained in the anti-Hermitian limit of Eq.(7), we now consider the Ising spin chain with a single-site perturbation, \[\Gamma_{p}(\gamma_{+},\gamma_{-})=\gamma_{+}\sigma_{p}^{+}+\gamma_{-}\sigma_{p }^{-}, \tag{8}\] where \(\gamma_{\pm}\in\mathbb{R}\) denote the strengths of (exceptional) non-Hermiticities \(\sigma_{p}^{\pm}\) that act on the spin at site \(p\). Starting with the case \(\gamma_{-}=0\), the \(\mathcal{PT}\)-breaking threshold for the Hamiltonian \(H_{0}+\Gamma_{p}(\gamma,0)\) is given by \[\gamma_{\mathrm{PT}}(p)=\left\{\begin{array}{ll}J/4&\text{Edge case},\\ J/2&\text{Bulk case}.\end{array}\right. \tag{9}\] Figure 2 shows the evolution of the energy-spectra of an \(N=8\) chain as a function of \(\gamma\) when the sole-perturbation \(\sigma^{+}\) is on the edge site (a) and the bulk site, \(p=2\) (b). These results have many features common with the eigenvalue flows in Fig. 1. Specifically, we see that starting with \(N\) particle-hole symmetric bands at \(\gamma=0\) that the \(\mathcal{PT}\) breaking occurs at a threshold equal to \(J/4\) or \(J/2\) respectively, but the ground-state eigenvalue does not become complex. Since a unitary basis-change can map \(\sigma_{y}\to-\sigma_{y}\) without changing interaction Figure 1: \(\mathcal{PT}\)-breaking threshold for a 7-spins chain with \(h_{z}=0\) and non-Hermiticity \(\Gamma_{p,q}^{+}\), Eq.(3). 
(a) Apart from the Hermitian case at \(p=q\) (black circles), threshold takes three possible values: zero for adjacent sites (red circles); \(\gamma_{\mathrm{PT}}(p,q)=J/4\) when at least one site is at the edge (blue circles); and \(\gamma_{\mathrm{PT}}(p,q)=J/2\) when both sites are in the bulk. (b)-(f) show the flow of eigenvalues \(\mathrm{Re}(E)(\gamma)\) (blue lines) and \(\mathrm{Im}(E)(\lambda)>0\) (red lines) for \(p,q\) locations marked in (a). At \(\gamma=0\), the system has 7 particle-hole symmetric bands with varying degeneracies spanning the \(2^{7}=128\) eigenvalues; at a finite \(\gamma\), the particle-hole symmetry is generally broken. Ground-state band typically does not participate in the \(\mathcal{PT}\)-symmetry breaking transition. Therefore, variational or perturbative methods that focus on the lowest-lying states cannot be used to determine the \(\mathcal{PT}\)-symmetry breaking threshold. term in \(H_{0}\), Eq.(1), the threshold results for a \(\Gamma_{p}(0,\gamma)\)-perturbation are the same as in Eq.(9). Lastly, we consider the case where both \(\gamma_{\pm}\) are varied. The non-Hermitian, purely real Hamiltonian is given by \[H_{\rm eff}=H_{0}+(\gamma_{+}+\gamma_{-})\sigma_{p}^{x}+i(\gamma_{+}-\gamma_{-} )\sigma_{p}^{y}. \tag{10}\] We characterize the \(\mathcal{PT}\) phase diagram in the \((\gamma_{+},\gamma_{-})\) plane by plotting the largest imaginary part of the eigenvalues of \(H_{\rm eff}(\gamma_{+},\gamma_{-})\) obtained via exact diagonalization (Fig. 2). It indicates whether the system is in the \(\mathcal{PT}\)-symmetric phase (\(\max{\rm Im}(E)=0\); deep blue) or \(\mathcal{PT}\)-symmetry broken phase (\(\max{\rm Im}(E)>0\); other colors), and quantifies the amplification rate for the \(\mathcal{PT}\)-broken eigenstates. Along the diagonal \(\gamma_{+}=\gamma_{-}\), \(H_{\rm eff}\) is Hermitian and the spectrum is always real. Along the other diagonal, given by \(\gamma_{+}+\gamma_{-}=0\), the perturbation (8) is anti-Hermitian. In this case, we obtain a positive threshold for the edge case (Fig. 2(a)), while the threshold is zero for the bulk case (Fig. 2(b)), as seen in Sec. II.1. The \(\mathcal{PT}\) phase-diagram in Fig. 3 is symmetric under individual reflections across the two diagonals. Since \({\rm Im}(E)(\lambda_{+},\lambda_{-})\) is an even function of the strength of the \(i\sigma^{y}\) term in Eq.(10), reflection symmetry along the main diagonal is expected. Reflection symmetry along Figure 3: \(\mathcal{PT}\)-phase diagram of a 6-spins chain in \((\gamma_{+},\gamma_{-})\) plane. Color denotes maximum imaginary part \({\rm Im}(E)/J\) of the eigenvalues of Hamiltonian Eq.(10). Deep blue regions indicate \(\mathcal{PT}\)-symmetric phase. (a) When the perturbation site is at the edge (\(p=1\)), \(\gamma_{\rm PT}\) is positive along the anti-diagonal. (b) When the site is in the bulk (\(p=2\)), the threshold is zero. Figure 2: Flow of eigenvalues \({\rm Re}(E)(\lambda)\) (blue lines) and \({\rm Im}(E)(\lambda)\) (red lines) for an 8-spins chain with potential \(\gamma\sigma^{+}\) on one site. At \(\gamma=0\), the system has 8 particle-hole symmetric bands with varying degeneracies that account for the total \(2^{8}=256\) eigenvalues. (a) When the site is at the edge, \({\rm Im}(E)>0\) emerge past the threshold \(\gamma_{\rm PT}=J/4\). (b) For a bulk site, \(p=3\), the complex-conjugate eigenvalues occur past the threshold \(\gamma_{\rm PT}=J/2\). 
the anti-diagonal, on the other hand, arises because the Hermitian term in \(\Gamma_{p}\), Eq.(8), commutes with the Ising interaction term \(H_{0}\). ### Effect of nonzero transverse field (\(h_{z}\neq 0\)) When the transverse field \(h_{z}\) is introduced, the Hamiltonian \(H_{\rm eff}\) contains three mutually non-commuting pieces, one for each Pauli matrix. Since the \(\mathcal{PT}\)-threshold results depend only on \(h_{z}\), without loss of generality, we choose \(h_{z}>0\). First, we consider the fate of Hamiltonian Eq.(2) where potentials \(\sigma^{\pm}\) are introduced on sites \(p,q\) respectively. Apart from the trivial, Hermitian case (\(p=q\)), the behavior of the threshold \(\gamma_{\rm PT}\) can be categorized as Adjacent sites: \[\gamma_{\rm PT}(h_{z})=A_{1}h_{z},\] (11) Both edge sites: \[\gamma_{\rm PT}(h_{z})=(J/4)+A_{2}h_{z},\] (12) One edge site: \[\gamma_{\rm PT}(h_{z})=(J/4)\delta_{h,0}+A_{3}h_{z},\] (13) Bulk sites: \[\gamma_{\rm PT}(h_{z})=(J/2)\delta_{h,0}+A_{4}h_{z},\] (14) where \(A_{k}\) are configuration-dependent parameters. This behavior is also robust when the non-Hermitian perturbation is changed to \[\Gamma^{{}^{\prime}}_{pq}(\gamma)\equiv\gamma(\sigma^{+}_{p}+\sigma^{+}_{q}). \tag{15}\] Figure 4 shows the typical dependence of \(\max{\rm Im}(E)(\gamma)\) on the transverse field \(h_{z}\) for Hamiltonian \(H_{\rm eff}=H_{0}+\Gamma^{{}^{\prime}}_{pq}\). For adjacent sites, \(p=q\pm 1\), the zero threshold at \(h_{z}=0\) is lifted to values proportional to \(h_{z}\). This is indicated by broadening of the region where \(\max{\rm Im}(E)=0\) as \(\gamma\) is increased from zero (Fig. 4)(a). For non-adjacent cases, if one of the sites is at the edge, the \(h_{z}=0\) threshold is given by \(\gamma_{\rm PT}=J/4\). But it is suppressed to zero with the introduction of the transverse field. As \(h_{z}\) increases, the threshold also increases. Similar behavior is observed for \(\max{\rm Im}(E)(\gamma)\) when both sites are in the bulk (Fig. 4). Lastly, we investigate \(h_{z}\)-dependence of \(\gamma_{\rm PT}\) for the single-site perturbation model Eq.(10) by tracking the maximum imaginary part of its eigenvalues, \(\max{\rm Im}(E)(\gamma,h_{z})\). Fig. 5(a) shows that for an edge-perturbation, starting from \(J/4\) the threshold continuously increases with \(h_{z}\). In contrast, when the exceptional potential \(\gamma\sigma^{+}\) is on an interior site, the threshold \(J/2\) at \(h_{z}=0\) is suppressed to vanishingly small values for \(h_{z}\to 0\) before increasing linearly with \(h_{z}\) (Fig. 5(b)). When the edge-site potential is purely anti-Hermitian, starting from \(J/4\), the threshold further increases continuously with \(h_{z}\) (Fig. 5(c)). Figure 5(d) shows that when the anti-Hermitian potential \(i\gamma\sigma^{y}\) is in the bulk, the zero threshold at \(h_{z}=0\) is linearly lifted. Thus, the transverse field can strengthen or weaken the \(\mathcal{PT}\)-symmetric phase. Figure 4: Threshold \(\gamma_{\rm PT}(h_{z})\) for Hamiltonian Eq.(2) with \(N=9\) spins. (a) For perturbation \(\Gamma^{+}_{pq}\), adjacent sites with zero threshold develop a finite threshold \(\propto|h_{z}|\). This is signified by \(\max{\rm Im}(E)=0\) regions that emerge at small \(\gamma\) when \(h_{z}\neq 0\). (b) For edge sites, the \(h_{z}=0\) threshold at \(\gamma/J=0.25\), Eq.(5), is suppressed to vanishingly small values when \(h_{z}=0^{+}\) and increases with \(h_{z}\) thereafter (solid lines with symbols). 
For bulk sites, the threshold at \(\gamma/J=0.5\), Eq.(6), is also suppressed to zero for \(h_{z}=0^{+}\) and increases with \(h_{z}\) (dot-dashed lines). Figure 5: Evolution \(\max{\rm Im}(E)(\gamma,h_{z})\) of the Hamiltonian Eq.(10 with \(N=7\) spins. Deep blue regions (\(\max{\rm Im}(E)=0\)) indicate \(\mathcal{PT}\)-symmetric phase. (a) For edge-site perturbation \(\gamma\sigma^{+}_{1}\), the threshold increases from \(J/4\) with increasing \(h_{z}\). (b) For the same perturbation in the bulk, threshold increases from \(\gamma_{\rm PT}(h=0^{+})=0\) while its value is \(J/2\) at \(h_{z}=0\). (c) same as (a) for anti-Hermitian, edge perturbation. (d) same as (b) for anti-Hermitian, bulk-site perturbation, where the no-field threshold is zero. ## III Symmetries and the \(\mathcal{PT}\)-breaking threshold The simple, \(N\)-independent results for the \(\mathcal{PT}\)-breaking threshold for a quantum Ising chain in the absence of a transverse field hint at an analytical solution. The robustness of that threshold \(\gamma_{\text{PT}}\) points to the possibility of investigating the interplay between the \(h_{z}/J\)-driven quantum phase transition and the \(\gamma/J\)-driven \(\mathcal{PT}\) symmetry breaking transition. Here, we discuss the analytical solution for \(h_{z}=0\). Consider the zero-field model with a single-site perturbation \(\Gamma_{p}(\gamma,0)\), Eq.(8). The eigenstates of \(H_{0}\), Eq.(1), can be written as \(\ket{\psi}=\ket{\pm_{1}}\otimes\ket{\pm_{2}}\cdots\ket{\pm_{N}}\) where \(\sigma_{m}^{x}\ket{\pm_{m}}=\pm\ket{\pm_{m}}\) are the symmetric (anti-symmetric) eigenstates at site \(m\). For a perturbation on site \(p\), we consider an eigenstate ansatz as \[\ket{\phi}\equiv\ket{\pm_{1}}\cdots\ket{\hat{n}_{p}}\cdots\ket{\pm_{N}}, \tag{16}\] where \(\ket{\hat{n}_{p}}\) denotes the spin state at the perturbation site. The eigenvalue equation satisfied by the state \(\ket{\phi}\) becomes \[H_{p}\ket{\hat{n}_{p}} =\left[h_{x}\sigma^{x}+i\frac{\gamma}{2}\sigma^{y}\right]\ket{ \hat{n}_{p}}=E_{p}\ket{\hat{n}_{p}}, \tag{17}\] \[h_{x} =-J\bra{\phi}\sigma_{p-1}^{x}+\sigma_{p+1}^{x}\ket{\phi}+\frac{ \gamma}{2}\sigma^{x}, \tag{18}\] where one of the \(p\pm 1\) terms is absent when the location \(p\) is at the edge. The \(2\times 2\) Hamiltonian \(H_{p}\) Eq.(17) undergoes \(\mathcal{PT}\)-symmetry breaking when the strength of the imaginary field is equal to that of the real field, i.e \(h_{x}=\pm\gamma/2\). This gives Eq.(9) as the threshold result. A similar analysis can be carried out for other exceptional potentials, including two-site potentials, Eq.(3), when the two sites are not adjacent. When the two sites are adjacent, a similar reduction to a \(4\times 4\) Hamiltonian gives the zero threshold, Eq.(4). Note that although the two-site perturbation was motivated by splitting a Hermitian term into two Jordan-normal-form terms, symmetries in the \(h_{z}=0\) case map \(\sigma^{+}\leftrightarrow\sigma^{-}\) under a local, unitary transformation on the site of the potential. This equivalence between the two potentials is another reminder that in systems with bounded eigenvalue spectrum, "gain" and "loss" are not equivalent to raising and lowering operators. Additional unitary-equivalent terms such as \(\gamma\sigma^{+}\leftrightarrow-\gamma\sigma^{+}\) or \(h_{z}\leftrightarrow-h_{z}\) were already taken into account when obtaining the \(\mathcal{PT}\) threshold results. 
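The threshold condition used to obtain Eq. (9) can be made explicit by writing out the \(2\times 2\) problem of Eq. (17); this intermediate step is added here for clarity:
\[
H_{p}=h_{x}\sigma^{x}+i\frac{\gamma}{2}\sigma^{y}=\begin{pmatrix}0&h_{x}+\gamma/2\\ h_{x}-\gamma/2&0\end{pmatrix},\qquad E_{p}=\pm\sqrt{h_{x}^{2}-\gamma^{2}/4},
\]
so the eigenvalues remain real for \(|h_{x}|\geq\gamma/2\) and become complex conjugates beyond \(h_{x}=\pm\gamma/2\).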
## IV Discussion In this paper, we have developed a new class of \(\mathcal{PT}\)-symmetric quantum Ising models with \(N\) spins, where the non-Hermitian potentials are confined to one or two sites and the \(\mathcal{PT}\)-breaking threshold is independent of \(N>2\). In most traditional models, the non-Hermiticity is introduced by changing model parameters from real to complex; that means the number of sites with non-Hermiticity is proportional to \(N\). Such complex extensions retain desirable features of the underlying models such as translational invariance, Jordan-Wigner integrability, etc. However, since each non-Hermitian qubit potential requires successful post-selection, implementing non-Hermitian potential on every site would require both local addressability and an exponential-in-\(N\) suppressed success-probability for post-selection. Both are challenging barriers. Our models show that introducing a single non-Hermitian qubit in a Hermitian, quantum Ising chain gives rise to \(\gamma_{\text{PT}}\) that can be varied with the transverse field. With full control required over only the non-Hermitian qubit, our models provide a pathway to investigate the interplay between interaction and non-Hermitian properties. Our results remain qualitatively unchanged when the Hermitian Hamiltonian is changed from a quantum Ising model to its integer-spin counterpart or Heisenberg model with or without anisotropies. The spin-1 case, for example, is made richer by the possibility of different exceptional perturbations such as \(S^{+}=(S^{x}+iS^{y})/2\) and \(S^{+2}\neq 0\). An exact diagonalization analysis is required to obtain the general threshold \(\gamma_{PT}(J_{xx},J_{yy},J_{zz};\mathbf{h})\) as a function of the multiple, possible non-Hermiticities, and its exhaustive characterization is an open problem. ###### Acknowledgements. This work is supported by ONR Grant No. N00014-21-1-2630. We thank P. Durganandini and Kater Murch for discussions.
2305.07631
Vision and Control for Grasping Clear Plastic Bags
We develop two novel vision methods for planning effective grasps for clear plastic bags, as well as a control method to enable a Sawyer arm with a parallel gripper to execute the grasps. The first vision method is based on classical image processing and heuristics (e.g., Canny edge detection) to select a grasp target and angle. The second uses a deep-learning model trained on a human-labeled data set to mimic human grasp decisions. A clustering algorithm is used to de-noise the outputs of each vision method. Subsequently, a workspace PD control method is used to execute each grasp. Of the two vision methods, we find the deep-learning based method to be more effective.
Joohwan Seo, Jackson Wagner, Anuj Raicura, Jake Kim
2023-05-12T17:36:22Z
http://arxiv.org/abs/2305.07631v1
# Vision and Control for Grasping Clear Plastic Bags

###### Abstract

We develop two novel vision methods for planning effective grasps for clear plastic bags, as well as a control method to enable a Sawyer arm with a parallel gripper to execute the grasps. The first vision method is based on classical image processing and heuristics (e.g., Canny edge detection) to select a grasp target and angle. The second uses a deep-learning model trained on a human-labeled data set to mimic human grasp decisions. A clustering algorithm is used to de-noise the outputs of each vision method. Subsequently, a workspace PD control method is used to execute each grasp. Of the two vision methods, we find the deep-learning based method to be more effective.

## I Introduction & Related Work

State-of-the-art grasping methods in robotics are sophisticated and robust across many tasks. For example, the paper "Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics" by Mahler et al. introduces a grasping method that uses a CNN to evaluate grasp quality and plan an effective grasp given a point cloud and depth image of an object. The method achieves a success rate of 93% on eight known objects, and achieves 99% precision on a dataset of 40 novel household objects [1]. However, even advanced grasping methods like this one struggle with clear plastic bags.

Clear plastic bags present a number of challenges to robotic systems. First, vision tasks for these objects are exceedingly difficult. The transparent material introduces specularities that depend on lighting, as well as regions that lack reflective signal, both of which confuse sensors and processing systems. Further, clear plastic bags are deformable in unpredictable ways, making it difficult to estimate the quality of a planned grasp before execution. To make matters more challenging, the 2021 paper, "Modeling, learning, perception, and control methods for deformable object manipulation", by Yin et al., suggests that perception techniques used to determine how an object will deform still lack performance [2]. Nonetheless, picking up clear plastic bags remains a pressing problem in industry. For example, in Amazon warehouses today, a significant proportion of the goods purchased by consumers are packaged in clear plastic. This makes the picking and sorting task difficult for robots, so humans do this task in many cases.

There are several related works that help develop industrial solutions for manipulating plastic bags, but they work with opaque bags and do not address the vision problem associated with clear plastic bags. For example, the 2021 paper "Initial Results on Grasping and Lifting Deformable Bags with a Bimanual Robot" by Seita et al. found that grasping plastic bags at the leftmost and rightmost points yielded a high success rate in the task of grasping and lifting a bag to contain items, suggesting that basic heuristics can be effective in addressing the challenging dynamics of this task [3]. Further, in the 2023 paper, "AutoBag: Learning to Open Plastic Bags and Insert Objects" by Chen et al., the researchers propose a self-supervised learning framework where a dual-arm robot learns to recognize the handles of plastic bags using UV-fluorescent markings that are not used at execution time [4].
Here, this team shows that a deep-learning approach can work for learning to pick effective grasp locations on a plastic bag, but they avoid the additional sensory challenges of clear plastic bags by using opaque plastic bags. Other works have indirectly shown that visual recognition of clear plastic objects is achievable with deep learning. For example, the 2019 paper "Deep Learning Based Robot for Automatically Picking up Garbage on the Grass" by Bai et al. proposes a method capable of recognizing garbage with 95% accuracy and includes examples of clear bottles properly detected [5]. In our work, we leverage three key insights from these papers: a) simple heuristics can work in the context of picking up bags, b) deep learning can be used to identify good grasp locations on plastic bags and c) deep learning might be able to learn how to decipher images of clear plastic objects. As follows, we develop a classical grasp planning algorithm that uses edge detection and a "distance from object in bag" heuristic to estimate where contours in clear plastic bags might exist that could be used as grasp targets. We also develop a deep-learning model trained using human-labeled data to process RGB and depth images and return grasp targets and angles. Finally, we develop a de-noising method and a workspace PD controller to execute grasps returned by these vision modules. ## II Methods ### _Problem setup & criteria for success_ To develop grasping solutions for clear plastic bags, we simplified the problem to that of picking up a ping-pong ball in a large ziplock bag with low-quality parallel grippers. The task is such that the system will need to determine and execute a grasp with the grippers pointing downwards and some rotation around the \(z\)-axis. Given this setup, we conjecture that approaches developed here can be extended to a variety of objects (heavier, different shapes) in similar packaging with more effective grippers and advanced object detection (in the case of heuristics-based methods). In this test setting, we evaluate grasp procedures by their success rate in picking up the bag as measured empirically through test trials.
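To illustrate the classical pipeline outlined above, here is a minimal OpenCV sketch of one way Canny edges and a distance-from-object heuristic could be combined to propose a grasp point. The thresholds, the target stand-off, and the assumption that a separate detector supplies the ball's pixel location are placeholders, not the settings of our actual system.

```python
import cv2
import numpy as np

def propose_grasp(rgb: np.ndarray, obj_center: tuple[int, int],
                  target_offset_px: float = 60.0) -> tuple[int, int, float]:
    """Return (u, v, angle) for a candidate grasp on a clear plastic bag.

    rgb:        HxWx3 image of the workspace.
    obj_center: (u, v) pixel location of the object inside the bag,
                assumed to come from a separate detector.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # bag contours and wrinkles show up as edges
    vs, us = np.nonzero(edges)
    if len(us) == 0:
        raise ValueError("no edges found in the image")
    # Heuristic (illustrative): grasp plastic at roughly a fixed stand-off from the
    # object, so the grippers close on bag material rather than on the ball itself.
    dists = np.hypot(us - obj_center[0], vs - obj_center[1])
    i = int(np.argmin(np.abs(dists - target_offset_px)))
    u, v = int(us[i]), int(vs[i])
    # Grasp angle: close the gripper roughly perpendicular to the line from the
    # object to the grasp point, i.e. along the local bag contour.
    angle = float(np.arctan2(v - obj_center[1], u - obj_center[0]) + np.pi / 2)
    return u, v, angle
```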
2304.06606
Optimum transport in systems with time-dependent drive and short-ranged interactions
We study one-dimensional hardcore lattice gases, with nearest-neighbor interactions, in the presence of an external potential barrier, that moves on the periodic lattice with a constant speed. We investigate how the nature of the interaction (attractive or repulsive) affects particle transport and determine, using numerical simulations and mean-field calculations, the conditions for an optimum transport in the system. Physically, the particle current induced by the time-dependent potential is opposed by a diffusive current generated by the density inhomogeneity (a traveling wave) built up in the system, resulting in a current reversal, that crucially depends on the speed of the barrier and particle-number density. Indeed the presence of nearest-neighbor interaction has a significant impact on the current: Repulsive interaction enhances the current, whereas attractive interaction suppresses it considerably. Quite remarkably, when the number density is low, the current increases with the strength of the repulsive interaction and the maximum current is obtained for the strongest possible repulsion strength, i.e., for the nearest-neighbor exclusion. However, at high density, very strong repulsion makes particle movement difficult in an overcrowded environment and, in that case, the maximal current is achieved for weaker repulsive interaction strength.
Deepsikha Das, Punyabrata Pradhan, Sakuntala Chatterjee
2023-04-13T15:17:06Z
http://arxiv.org/abs/2304.06606v1
# Optimum transport in systems with time-dependent drive and short-ranged interactions ###### Abstract We study one-dimensional hardcore lattice gases, with nearest-neighbor interactions, in the presence of an external potential barrier, that moves on the periodic lattice with a constant speed. We investigate how the nature of the interaction (attractive or repulsive) affects particle transport and determine, using numerical simulations and mean-field calculations, the conditions for an optimum transport in the system. Physically, the particle current induced by the time-dependent potential is opposed by a diffusive current generated by the density inhomogeneity (a traveling wave) built up in the system, resulting in a current reversal, that crucially depends on the speed of the barrier and particle-number density. Indeed the presence of nearest-neighbor interaction has a significant impact on the current: Repulsive interaction enhances the current, whereas attractive interaction suppresses it considerably. Quite remarkably, when the number density is low, the current increases with the strength of the repulsive interaction and the maximum current is obtained for the strongest possible repulsion strength, i.e., for the nearest-neighbor exclusion. However, at high density, very strong repulsion makes particle movement difficult in an overcrowded environment and, in that case, the maximal current is achieved for weaker repulsive interaction strength. ## I Introduction The ability to manipulate colloidal-particle motion in narrow channels using time-varying optical potential has opened up new research avenues in driven diffusive systems [1; 2; 3; 4; 5]. These studies have provided valuable insights into a variety of important aspects of non equilibrium systems, such as verification and applicability of fluctuation relations, among other things. Experiments with colloidal particles driven by an optical trap [6; 7] have previously established the validity of the fluctuation-dissipation theorem, which predicts entropy production over finite time, and that of a generalized Einstein relation [8]; Additionally, the violation of the second law of thermodynamics has been experimentally demonstrated for small systems over short time scales [9]; see Ref. [10] for review. Recently, a particularly promising research direction that has received a lot of attention is the characterization of particle transport in a periodically driven many-particle system [11; 10]. These systems find application in a wide range of situations. For example, stochastic pumps [12; 13; 14; 15], in which the time-varying external parameters drives the systems away from equilibrium, can generate a directed particle flow; also consider the thermal ratchets [16; 17; 18], where non equilibrium fluctuations can induce a directed particle motion. Indeed, much attention has been focused on, and significant progress has been made, in understanding the underlying mechanism of directed flow in thermal ratchets and molecular pumps [12]. Notably, the characterization of particle transport in time-varying external potential is important also in the context of driven fluids in confined geometry, leading to the identification of number of unexpected consequences, such as negative differential resistance and absolute negative mobility, etc., among others [19; 20; 21; 22; 23; 24; 25; 26]. 
In the past, particle transport in colloidal suspensions in narrow channels have motivated studies of noninteracting particles, driven by a moving potential barrier, using dynamic density functional theory [19; 20]. Eventually, several many-particle models were also put forth in an effort to theoretically understand the role of hardcore interactions in these systems [27]. One particularly important question in this context is whether the system can support a nonzero dc (time-averaged) current when it is driven by a time-periodic driving force. Although the presence of an external forcing would typically suggest the presence of a current in the system, the periodic nature of the driving however means that the net force acting on the system over a time period is zero. In that case, do such systems still carry a nonzero dc current? If so, in what direction does the current flow? Another intriguing question is whether it is possible to optimize the particle current by tuning various control parameters. In order to address the above issues, a series of works [27; 28; 29] considered the paradigmatic models of simple exclusion processes [30], in which the interaction among the particles was assumed to be of the simplest possible form, i.e., hardcore exclusion. The motion of the particles were described on a lattice where particles hop from one site to a neighboring unoccupied site; in that case, the periodic external potential was simply represented by space and time dependent hopping rates. Depending on whether the time-varying hopping rates were present only on particular sites or were present throughout the system, it was shown through numerical simulations and a perturbative approach that the dc current flowing through such a system could either vanish (inversely with system size) or have a finite value. Furthermore, the dc current was found to exhibit non-monotonic dependence on the time-period of the drive. Several interesting features such as current reversal and system-size dependent transport were observed [29] in the case when time-varying potential maintains a position-dependent phase relation among sites, that results in a nonzero dc current. Subsequently, in another study of a many-particle lattice model [31], our group had developed a new and simple method of modeling a periodically moving drive in a system of hardcore particles diffusing on a ring. Motivated by moving trap or barrier used in experiments with particles in an optical potential, we studied a system with a "defect" site, which had a hopping rate different from the rest of the system. Then, in the non equilibrium setting, the defect site was considered to move on the lattice with a speed \(v\), and, for a one dimensional ring of \(L\) sites, to complete one cycle after a time period \(L/v\). Using numerical simulation and a mean-field theory, we observed that, in the time-periodic steady state, a density inhomogeneity is created around the defect, resulting in a dc current in the system that scales as \(1/L\). The direction and magnitude of the dc current was controlled by tuning the defect speed, particle density and the bulk diffusivity of particles. Moreover, in the presence of multiple defect sites [32], an interesting collective behavior was observed when the defect sites were close enough so that their respective density patterns generated by each of the defects overlap with each other. 
Interestingly, reversal of current has also been observed in a slightly different set up [33] in the context of a single particle, which diffuses in a two-dimensional channel of varying width and is driven by a force having a random orientation across the channel; in this case, the current reversal happens by tuning both the transverse and the longitudinal drive. So far, in the previous studies of many-particle lattice models, the only type of the interaction considered between the particles was the hardcore exclusion. However, in real systems, particles can also experience short-ranged attraction or repulsion and the interplay between external driving and inter-particle interactions are expected to give rise to nontrivial effects. In order to investigate this scenario, in the present work we consider a many-particle lattice model in which hardcore particles diffuse and interact via nearest-neighbor attractive or repulsive potential. In other words, in addition to the hardcore exclusion, a particle, in the case of repulsive (attractive) interaction, now prefers to have its neighboring site empty (occupied). Here we are primarily interested in exploring how the strength of the interaction potential affects the particle current in the system. Does the system still supports current reversal and, if so, how are the transport characteristics affected by many-particle interactions? Is there an optimum interaction strength for which magnitude of the current in either direction is largest? In this paper, by performing Monte Carlo simulations and using a modified mean-field theory, we have determined the condition of optimal transport in the system and studied how attractive or repulsive interaction among the particles affect the transport. We show that a moving defect always induces current in the negative direction, i.e., along the direction opposite to the defect movement. But, due to the density inhomogeneity produced by the defect movement, the diffusive current in the system flows in the positive direction. As a result, when the bulk diffusion in the system is negligibly small, we find current in the negative direction. However, as the bulk diffusion becomes stronger, the current changes sign and becomes positive. By varying the defect speed, particle density and the interaction strength, we determine the parameter regime, that yields the optimum current in the system in either direction. It turns out that an attractive interaction among the particles hinders transport, while a repulsive interaction enhances it. For small particle density, current is largest when the strength of the repulsive interaction assumes its highest possible value. However, for large particle density, the system is overcrowded and a very strong repulsion indeed blocks certain transitions and consequently reduces the current. In that case, the optimum transport is obtained when the interaction strength lies somewhat below the largest possible value. However, unlike repulsive interaction and irrespective of defect speed and bulk density, the current decreases monotonically with the attractive interaction strength. The organization of the paper is as follows: We describe the model in section II. In Section III, we describe the simplest case where dynamics in the bulk of the system is absent (the case where particle speed is larger compared to the corresponding rate of bulk diffusion). 
Analytical formalism for this particular situation is presented in subsections III A and III B while results are shown and discussed in subsection III C. In section IV, we discuss the case when bulk dynamics is also compared to the other rates in the system. Our conclusions are presented in section V. The model We consider paradigmatic models of exclusion processes involving hardcore particles with nearest-neighbor interaction [34]. We incorporate the periodically moving external potential barrier simply as a set of moving "defects" [31; 32], each of them resides at a site for duration \(\tau\) before moving to the right. The energy function for the system can be written as \[H=-\frac{J}{2}\sum_{\begin{subarray}{c}i,j\\ <i,j>\end{subarray}}\eta_{i}^{\{\alpha_{k}\}}\eta_{j}^{\{\alpha_{k}\}}+\sum_{ i}\eta_{i}^{\{\alpha_{k}\}}V_{i} \tag{1}\] where, \(\eta_{i}^{\{\alpha_{k}\}}\), \(\eta_{j}^{\{\alpha_{k}\}}\) denote occupancy of sites which can take values \(0\) or \(1\). The indices \(\{\alpha_{k}\}\equiv\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N}\}\) are a set of \(N\) elements with the \(k\)-th element, \(\alpha_{k}\), denoting the position of the \(k\)-th defect and \(<i,j>\) denotes that sites \(i\) and \(j\) are the nearest neighbors, A site is called a defect site when a potential barrier is present there and it is called a bulk site otherwise. The potential at site \(i\) is \(V_{i}=\sum_{k}V_{0}\delta_{i,\alpha_{k}}\), where \(V_{0}\) represents height of the onsite potential barriers, and \(J\) denotes the interaction strength that can vary in the range \(-\infty\) to \(+\infty\). A mapping [35] can be performed from \(J\) to a dimensionless parameter \(\epsilon\), as given below \[e^{-\beta J}=\frac{(1+\epsilon)}{(1-\epsilon)} \tag{2}\] such that \(|\epsilon|\leq 1\) and thus \(0\leq e^{-\beta J}<\infty\). Note that, for a left (right) hopping to take place, the departure site has to be occupied by a particle and its immediate left (right) site has to be empty. Therefore, particle hopping towards left (right) can happen in four possible ways and all possible transition rates [35] for left and right hops are written as following: \[\begin{array}{ccc}0010\xleftarrow{c}\overbrace{\begin{subarray}{c}\text{ ce}^{-\beta V_{i}}\\ \text{c}(1-\epsilon)\end{subarray}}^{c}0100&0100\xleftarrow{c}\overbrace{ \begin{subarray}{c}\text{c}\text{c}^{-\beta V_{i}}\\ \text{c}(1-\epsilon)\end{subarray}}^{c}0010\\ 1010\xleftarrow{c(1-\epsilon)}1100&0101\xleftarrow{c(1-\epsilon)}0011\\ 0011\xleftarrow{c(1+\epsilon)}0011&1100\xleftarrow{c(1-\epsilon)}1010\\ 1011\xleftarrow{c}1101\xleftarrow{c}1011\end{array} \tag{3}\] with \(\beta V_{0}=\ln{(p/r)}\), where \(\beta\) is the inverse of temperature and (i) \(c=p/2\) when the departure site is a defect site and a particle hops out of it, (ii) \(c=q/2\) when the departure site is a bulk site, (iii) \(c=r/2\) when the departure site is a neighbor of a defect and the destination is a defect site. The transition rates and corresponding reverse rates follow detailed balance condition for defect velocity \(v=0\). For simplicity, in our study, we have considered only a single defect, which represents an infinite potential barrier, periodically moving over a lattice of length \(L\) with speed \(v\). Consequently \(r=0\) and \(p=1\) are maintained throughout the paper. 
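For reference, Eq. (2) can be inverted to obtain \(\epsilon\) from the interaction strength; a short numerical sketch (with the product \(\beta J\) passed as a single argument) is given below. The limiting-case comments are our reading of Eqs. (1)-(3).

```python
import numpy as np

def epsilon_from_J(beta_J: float) -> float:
    """Invert Eq. (2): exp(-beta*J) = (1 + eps) / (1 - eps)."""
    x = np.exp(-beta_J)
    return (x - 1.0) / (x + 1.0)      # equivalently -tanh(beta_J / 2), so |eps| < 1

# eps -> +1 for strong repulsion (J -> -infinity, i.e. nearest-neighbour exclusion),
# eps -> -1 for strong attraction, and eps = 0 for non-interacting hardcore particles.
print(epsilon_from_J(0.0), epsilon_from_J(-50.0), epsilon_from_J(50.0))
```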
## III \(p=1\), \(r=0\), \(q=0\): No Dynamics in the Bulk ### Analytical formalism for a periodically moving defect In this section, we consider the simplest case when the inverse hopping rate is much larger compared to the typical residence time scale of the defect. In that case, we assume the bulk hopping rate \(q=0\), i.e., dynamics in the bulk is completely frozen. A particle can hop during an infinitesimal time interval \(dt\) only if its position coincides with the position of the defect denoted by \(\alpha\). Starting from an initial configuration, the system reaches a time periodic steady state, given sufficient time has passed. The density profile of the system has a form of a traveling wave moving over the lattice with the same speed \(v\) as that of the moving defect. The defect spends time \(\tau\) at a particular site before moving on to the next site where \(\tau=1/v\) is the residence time of the defect. When the defect spends one Monte-Carlo step on each lattice site, \(v\) is measured as 1. We have measured density profile at time-steps \(t=n\tau\) just before the defect moves on to the next site, after spending time \(\tau\) at the previous site, with \(n=0,1,2,....,\infty\). For time \(t=n\tau\) we write the discrete time evolution equation [31] for density \(\rho_{i}^{(\alpha)}(t)=\langle\eta_{i}^{(\alpha)}(t)\rangle\), \[\langle\rho^{(\alpha+1)}(t+\tau)|=\langle\rho^{(\alpha)}(t)|W^{(\alpha+1)} \tag{4}\] Here, \(\langle\rho^{(\alpha)}(t)\rangle|\equiv\{\rho_{1}^{(\alpha)}(t),....,\rho_{i} ^{(\alpha)}(t),....,\rho_{L}^{(\alpha)}(t)\}\) is a row vector of length \(L\), its \(i\)-th element being \(\rho_{i}^{(\alpha)}(t)\) with \(\alpha\) denoting the position of the defect. \(W^{(\alpha+1)}\) is the transition matrix with the defect site at \(\alpha\) + 1. Its structure depends on the position of the defect site. For example when \(\alpha+1=1,2\), respective transition matrices can be written as, \[W^{(1)}=\begin{bmatrix}1&0&0&\ldots&0&0\\ 0&1&0&0&\ldots&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&0&0&1&0\\ 0&0&\ldots&0&0&1\end{bmatrix} \tag{5}\] Figure 1: Schematic diagram of the model. Red solid (empty) ellipses represent occupied (empty) defect sites while the blue solid (empty) circles are occupied (empty) bulk sites. In one time step a particle can jump to one of its neighboring site provided the destination site is empty. The transition rates depend on the local configurations around the departure site, as specified in Eq. (3). The \(i,j\)-th element of the transition matrix [31] can be written as, \[\begin{split} W^{\alpha+1}_{ij}&=1-a_{+}-a_{-}\qquad \text{for}\qquad i=j=\alpha+1\\ W^{\alpha+1}_{ij}&=a_{-}\qquad\text{for}\qquad i=j+1 =\alpha+1\\ W^{\alpha+1}_{ij}&=a_{+}\qquad\text{for}\qquad i=j-1 =\alpha+1\\ W^{\alpha+1}_{ij}&=1\qquad\text{for}\qquad i=j \neq\alpha+1\\ W^{\alpha+1}_{ij}&=0\qquad\text{for}\qquad i\neq \alpha+1,\,i\neq j\end{split} \tag{6}\] Here \(a_{\pm}\) are the conditional probabilities that, given the defect site is occupied, particle from the defect site moves to its unoccupied right(left) neighboring site during the residence time \(\tau\). 
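A small NumPy sketch of Eqs. (4) and (6) may be useful here: it builds the transition matrix for a given defect position and iterates the density row vector over full defect cycles until the profile becomes time-periodic. The values of \(a_\pm\) are left as inputs, since in the model they follow from the conditional probabilities of Eq. (7).

```python
import numpy as np

def transition_matrix(defect: int, L: int, a_plus: float, a_minus: float) -> np.ndarray:
    """W of Eq. (6) for the defect at site `defect` (0-indexed, periodic ring)."""
    w = np.eye(L)
    w[defect, defect] = 1.0 - a_plus - a_minus
    w[defect, (defect - 1) % L] = a_minus     # weight transferred to the left neighbour
    w[defect, (defect + 1) % L] = a_plus      # weight transferred to the right neighbour
    return w

def periodic_steady_state(L: int, rho0: float, a_plus: float, a_minus: float,
                          n_cycles: int = 500) -> np.ndarray:
    """Iterate Eq. (4), <rho(t+tau)| = <rho(t)| W, over many defect cycles."""
    rho = np.full(L, rho0)
    for _ in range(n_cycles):
        for defect in range(L):               # the defect sweeps once around the ring
            rho = rho @ transition_matrix(defect, L, a_plus, a_minus)
    return rho

# Example with illustrative parameters: the converged profile shows a depleted site
# where the defect last sat and an enhanced site just ahead of it, cf. Eq. (12).
print(periodic_steady_state(20, 0.3, a_plus=0.3, a_minus=0.1).round(3))
```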
Starting from the microscopic dynamics we can write their expressions as follows: \[a_{+}=\sum_{m=1}^{6}\mathcal{C}^{+}_{m}\omega^{+}_{m}\quad,\quad a_{-}=\sum_{n=1}^{6}\mathcal{C}^{-}_{n}\omega^{-}_{n} \tag{7}\] where \(\mathcal{C}^{+}_{m}\), \(\mathcal{C}^{-}_{n}\) are the conditional probabilities of the different local configurations favorable for right and left hopping, respectively, during the residence time \(\tau\), given the defect site is occupied, and \(\omega^{+}_{m}\), \(\omega^{-}_{n}\) denote the transition probabilities for right and left hopping, respectively, from an occupied defect site during \(\tau\). For example, \[\mathcal{C}^{+}_{1}=\text{Prob.}(00\hat{1}01|\eta^{(\alpha)}_{\alpha+1}=1)=\frac{\left\langle(1-\eta^{(\alpha)}_{\alpha-1})(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{\alpha+1}(1-\eta^{(\alpha)}_{\alpha+2})\eta^{(\alpha)}_{\alpha+3}\right\rangle}{\left\langle\eta^{(\alpha)}_{\alpha+1}\right\rangle} \tag{8}\] \[\mathcal{C}^{-}_{1}=\text{Prob.}(10\hat{1}00|\eta^{(\alpha)}_{\alpha+1}=1)=\frac{\left\langle\eta^{(\alpha)}_{\alpha-1}(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{\alpha+1}(1-\eta^{(\alpha)}_{\alpha+2})(1-\eta^{(\alpha)}_{\alpha+3})\right\rangle}{\left\langle\eta^{(\alpha)}_{\alpha+1}\right\rangle} \tag{9}\] \[\omega^{+}_{1}=\omega^{-}_{1}=\frac{1-\epsilon}{2-\epsilon}\left(1-e^{-(2-\epsilon)/4v}\right) \tag{10}\] where \(\mathcal{C}^{+}_{1}\) and \(\mathcal{C}^{-}_{1}\) represent the conditional probabilities for the local configurations \(00\hat{1}01\) and \(10\hat{1}00\), respectively, given the defect site is occupied, while \(\omega^{+}_{1}\) and \(\omega^{-}_{1}\) denote the transition probabilities corresponding to the configurations \(00\hat{1}01\) and \(10\hat{1}00\), respectively, during a residence time \(\tau\), with \(\hat{1}(\hat{0})\) denoting an occupied (unoccupied) defect site (details are given in the Appendix). Due to the time periodic structure of the steady state, the density profile comes back to itself each time the defect moves across the ring and completes a cycle. So \(W^{(\alpha+1)}\ldots W^{(L)}W^{(1)}\ldots W^{(\alpha-1)}W^{(\alpha)}\) has an eigenvector \(\langle\rho^{(\alpha)}_{st}|\) with eigenvalue unity. The steady state density [31] at the \(i\)-th site satisfies \[\rho^{(\alpha+1)}_{st,i}=\rho^{(\alpha)}_{st,i-1} \tag{11}\] which follows from the time periodic structure of the steady state density and from Eq. (4). To solve for the density profile in the time periodic steady state, note that at the time of measurement the defect site \(\alpha\) registers a lower density compared to the bulk: for \(r=0\) particles cannot hop into the defect site, they can only hop out of it. For \(q=0\) the neighboring sites \((\alpha\pm 1)\) can only receive particles from the defect site, without any loss. The site \((\alpha+1)\) thus has a higher density compared to that of the bulk. On the other hand, the site \((\alpha-1)\), which was previously occupied by the defect and has already registered a lower density, could only receive particles from the defect site \(\alpha\), and its density goes back to the bulk level. Therefore, regarding the structure of the density profile as a function of position, we formulate an ansatz [31] in the form of a traveling density wave which moves with the defect \(\alpha\), \[\rho_{st,i}^{(\alpha)}=\begin{cases}\rho_{-}&\text{for}\quad i=\alpha\\ \rho_{+}&\text{for}\quad i=\alpha+1\\ \rho_{b}&\text{otherwise}\end{cases} \tag{12}\] The ansatz can be used in Eqs. 4 and 11 to obtain Eqs. 13, 14.
\[\rho_{+}a_{-}+\rho_{-}=\rho_{b} \tag{13}\] \[\rho_{+}a_{+}+\rho_{b}=\rho_{+} \tag{14}\] which can be solved by using particle number conservation \(\rho_{+}+\rho_{-}+(L-2)\rho_{b}=L\rho\) to get the exact densities, \[\rho_{b}\quad=\quad\frac{(1-a_{+})L}{2-a_{+}-a_{-}+(1-a_{+})(L-2)}\rho\quad \simeq\quad\rho \tag{15}\] \[\rho_{+}\quad\simeq\quad\frac{1}{1-a_{+}}\rho \tag{16}\] \[\rho_{-}\quad\simeq\quad\frac{1-a_{+}-a_{-}}{1-a_{+}}\rho \tag{17}\] as \(L>>1\). From Eqs. 16 and 17 it is evident that \(\rho_{+}>\rho\) and \(\rho_{-}<\rho\) i.e a peak and a trough are formed in front of the defect site and at the defect site respectively. The trough and the peak present in the density profile are different in size which will result in a non-zero particle current in general. Contribution to particle current comes from only two bonds adjacent to the defect site as no hopping takes place across any other bond for \(q=0\). Particle current consists of two components \(J_{+}\) and \(J_{-}\), defined to be the time rate of rightward and leftward movement of particles respectively, from the defect site. The total current is the algebraic sum of them. The defect visits a particular site with rate \(v/L\). Thus the expression for particle current can be written as \[J=J_{+}+J_{-}=\frac{v}{L}(\langle\eta_{\alpha+1}^{(\alpha)})a_{+}-\langle\eta _{\alpha+1}^{(\alpha)}\rangle a_{-}) \tag{18}\] which can be written in terms of \(\rho_{\pm}\) from Eq. 16 and 17 as \[J=\frac{v}{L}((\rho_{+}-\rho)+(\rho_{-}-\rho))=\frac{v}{L}(\rho_{+}+\rho_{-}-2\rho) \tag{19}\] where \[J_{+}=\frac{v}{L}(\langle\eta_{\alpha+1}^{(\alpha)}\rangle a_{+})=\frac{v}{L}( \rho_{+}-\rho) \tag{20}\] \[J_{-}=-\frac{v}{L}\langle\eta_{\alpha+1}^{\alpha}\rangle a_{-}=\frac{v}{L}( \rho_{-}-\rho) \tag{21}\] ### Mean-field theory The exact expression for \(a_{+}\) and \(a_{-}\) are given in Eq. 7 and in Eq. A-1 to A-17. To write \(a_{+}\) and \(a_{-}\) as explicit functions of \(\epsilon\), \(\rho\) and \(v\), we use mean-field approximation, where many-point correlations \(\mathcal{C}_{m}^{+}\), \(\mathcal{C}_{n}^{-}\) (for m, n = 1, 2,..., 6) are assumed to be factorized. Thus we have, \[\begin{split}\mathcal{C}_{1}^{+}=\mathcal{C}_{1}^{-}=\rho(1-\rho_{-})(1- \rho)^{2}\end{split} \tag{22}\] etc. and \[\begin{split} a_{+}=(1-\rho)\Bigg{[}(1-\rho)(1-\rho_{-})\rho \omega_{1}^{+}+\rho^{2}(1-\rho_{-})\omega_{2}^{+}+(1-\rho)^{2}(1-\rho_{-}) \omega_{3}^{+}\\ +\rho(1-\rho_{-})(1-\rho)\omega_{4}^{+}+\rho_{-}(1-\rho)\omega_{5 }^{+}+\rho_{-}\rho\omega_{6}^{+}\Bigg{]}\end{split} \tag{23}\] \[\begin{split} a_{-}=(1-\rho_{-})\Bigg{[}\rho(1-\rho)^{2}\omega_{1 }^{-}+\rho^{2}(1-\rho)\omega_{2}^{-}+(1-\rho)^{3}\omega_{3}^{-}\\ +\rho(1-\rho)^{2}\omega_{4}^{-}+\rho(1-\rho)\omega_{5}^{-}+\rho^ {2}\omega_{6}^{-}\Bigg{]}\end{split} \tag{24}\] Combining Eqs. 16, 17 we have the following form. \[(\rho_{-}-\rho)(1-a_{+})+a_{-}\rho=0 \tag{25}\] If expressions for \(a_{+}\) and \(a_{-}\) from Eqs. 23, 24 are put into Eq. 
25 we will obtain the following quadratic equation for \(\rho_{-}\), \[\begin{split}(\rho_{-}-\rho)[1-(1-\rho)\{(1-\rho)(1-\rho_{-}) \rho\omega_{1}^{+}+\rho^{2}(1-\rho_{-})\omega_{2}^{+}+(1-\rho)^{2}(1-\rho_{-} )\omega_{3}^{+}\\ +\rho(1-\rho_{-})(1-\rho)\omega_{4}^{+}+\rho_{-}(1-\rho)\omega_{5 }^{+}+\rho_{-}\rho\omega_{6}^{+}\}]+\rho(1-\rho_{-})[\rho(1-\rho)^{2}\omega_ {1}^{-}\\ +\rho^{2}(1-\rho)\omega_{2}^{-}+(1-\rho)^{3}\omega_{3}^{-}+\rho(1 -\rho)^{2}\omega_{4}^{-}+\rho(1-\rho)\omega_{5}^{-}+\rho^{2}\omega_{6}^{-}]=0 \end{split} \tag{26}\] Solving Eq. 26 in Mathematica, retaining the physically acceptable solution (not larger than unity) we have obtained \(\rho_{+}\) and \(\rho_{-}\) against \(\epsilon\). Then meanfield expression for current can be written using Eqs. 23, 24 and Eqs.A-1 to A-17, as \[J=\frac{v}{L}\rho_{+}(\rho_{-}-\rho)\{(1-\rho)(1-e^{-(1+\epsilon)/4v})+\rho( 1-e^{-1/4v})\} \tag{27}\] In the limit of small and large \(\rho\), the solutions for \(\rho_{\pm}\) and hence for particle current \(J\) take simple forms as we write \(a_{\pm}\) in leading order of \(\rho\) and \((1-\rho)\) respectively. #### iii.1.1 Small-density approximation For small \(\rho\), we retain leading order terms in \(\rho\) in its functions. We obtain, \[a_{+}\approx\rho_{-}[\rho(3\omega_{3}-\omega_{1}-\omega_{4}-2\omega_{5}+ \omega_{6})-(\omega_{3}-\omega_{5})]+\rho(\omega_{1}-3\omega_{3}+\omega_{4})+ \omega_{3} \tag{28}\] \[a_{-}\approx\rho_{-}[\rho(3\omega_{3}-\omega_{1}-\omega_{4}-\omega_{5})- \omega_{3}]+\rho(\omega_{1}-3\omega_{3}+\omega_{4}+\omega_{5})+\omega_{3} \tag{29}\] Substituting Eqs. 28 and 29 into Eq. 25 we have, \[\rho_{-}=\frac{1-2\omega_{3}}{1-\omega_{3}}\rho \tag{30}\] \[\rho_{+}=\frac{\rho}{1-\omega_{3}} \tag{31}\] Scaled current \(JL\) takes the form, \[JL=\frac{v\omega_{5}(1-2\omega_{3})\rho^{2}}{(1-\omega_{3})^{2}} \tag{32}\] #### iii.2.2 Large-density approximation For large \(\rho\), we retain terms in leading order of \((1-\rho)\). We obtain, \[a_{+}\approx\omega_{6}(1-\rho)+(\omega_{2}-\omega_{6})(1-\rho)(1-\rho_{-}) \tag{33}\] \[a_{-}\approx\omega_{6}(1-\rho_{-})+(\omega_{2}+\omega_{5}-2\omega_{6})(1-\rho )(1-\rho_{-}) \tag{34}\] Substituting Eqs. 33 and 34 into Eq. 25 we have, \[\rho_{-}=1-\frac{(1-2\omega_{2}-2\omega_{5}+3\omega_{6})}{(1-\omega_{2}- \omega_{5}+\omega_{6})^{2}}(1-\rho) \tag{35}\] \[\rho_{+}=1-(1-\omega_{6})(1-\rho) \tag{36}\] Scaled current \(JL\) takes the form, \[JL=\frac{v\omega_{6}}{\omega_{6}-1}(1-\rho) \tag{37}\] ### Simulation results and comparison with mean-field theory We present here numerical results obtained from simulation along with analytical calculations from mean-field theory. We have used system size \(L=512\) throughout. We have studied variation of density peak \(\rho_{+}\), density trough \(\rho_{-}\) and particle current \(J\) against bulk density \(\rho\), interaction strength \(\epsilon\) and defect velocity \(v\). For \(q=0\), it has been observed that magnitude of particle current is maximum around \(v=0.16\). Since we are interested in optimum particle transport, \(\rho\) and \(\epsilon\) dependence of all the quantities have been studied for \(v=0.16\). Variation against \(\epsilon\) and \(v\) have been studied for two different densities, \(\rho=0.29\) and \(0.75\), well below and above \(\rho=0.5\) while variation against \(\rho\) and \(v\) are carried out for \(\epsilon=0.6\) and \(-0.6\), corresponding to repulsive and attractive interaction respectively. From density profiles depicted in Figs. 
2a and 2b, and from the variation of \(\rho_{\pm}\) vs \(\rho\) depicted in Fig. 3, it can be observed that the density peak and trough become more pronounced in the case of repulsive interaction. The variation of \(\rho_{-}\) with \(\rho\) is stronger compared to that of \(\rho_{+}\). Such behavior is also supported by mean-field theory. In Fig. 4, we find that as \(\epsilon\) increases from negative to positive values, the differences \((\rho_{+}-\rho)\) and \((\rho-\rho_{-})\) also increase. The variation of both these quantities against \(\epsilon\) is non-monotonic, with a peak at large positive \(\epsilon\) values. Our mean-field calculation captures this non-monotonic behavior but does not provide good quantitative agreement with the numerical data.

Figure 3: \(\rho_{\pm}\) are plotted against bulk density \(\rho\). Mean-field results are presented by solid (\(\epsilon=-0.6\)), dashed (\(\epsilon=0\)) and dotted (\(\epsilon=0.6\)) lines. \(\rho_{+}\) shows a weaker dependence on \(\rho\) compared to \(\rho_{-}\). For attractive interaction \(\rho_{+}(\rho_{-})\) is noticeably smaller (greater) than that for the hardcore and repulsive interactions. Mean-field results show good agreement for \(\epsilon\geq 0\) but for \(\epsilon<0\) quantitative deviation from numerical data is observed.

Figure 2: Particle density profile \(\rho(x)\) where \(x\) denotes the distance from the defect site. For all interactions the defect site has a density trough and its right neighbor has a peak. For attractive interaction the trough and peak are relatively shallower.

In Figs. 5a and 5b we show the variation of \(\rho_{\pm}\) with \(v\) for two different \(\rho\) values. For small \(v\) a particle can almost always hop out of the defect site, but as \(v\) increases such a transition may not always be possible because of the short residence time of the defect [31], [32]. Therefore \(\rho_{-}\) (\(\rho_{+}\)) increases (decreases) with \(v\), finally saturating to \(\rho\) for very large \(v\). As \(\epsilon\) increases from negative to positive values, for all \(v\), \(\rho_{-}\) becomes systematically lower and \(\rho_{+}\) becomes higher, consistent with what we have shown in Fig. 3. In Fig. 6 the variation of the scaled particle current \(JL\) with \(\epsilon\) is shown for different \(\rho\) values. At \(\epsilon=-1\), because of the strong attractive interaction among the particles, the system supports one single cluster containing all the particles. Therefore the current vanishes in this limit. As \(\epsilon\) increases, a density profile consisting of a peak and a trough, as shown in Figs. 2a and 2b, is formed around the defect site and the current becomes non-zero. As \(\epsilon\) increases further the density peak and trough become more pronounced (as shown in Fig. 4), resulting in a larger current magnitude. However, for positive \(\epsilon\), the current shows qualitatively different variations for small and large \(\rho\). For small \(\rho\) values, the current remains almost constant with \(\epsilon\) before showing a mild increase near \(\epsilon=1\).

Figure 4: Size of the density peak and trough, \((\rho_{+}-\rho)\) and \((\rho-\rho_{-})\), are plotted against \(\epsilon\) in panels (a) and (b). Mean-field results are presented by solid and dotted lines respectively. It is evident from both the panels that the depth of the trough is always greater than the height of the peak.
Non-monotonic variation against \(\epsilon\) can be seen in both the quantities which is more pronounced for \((\rho-\rho_{-})\) with a maximum at a large positive \(\epsilon\). Mean-field theory can qualitatively capture such behavior in the repulsive region while it fails in the region of attractive interaction. Figure 5: \(\rho_{\pm}\) are plotted against defect velocity \(v\). Mean-field results are presented by solid (\(\epsilon=-0.6\)), dotted (\(\epsilon=0\)) and dashed (\(\epsilon=0.6\)) lines which qualitatively capture the variation. For large \(v\) both these quantities approach \(\rho\) while for small \(v\) they show weak variation. Comparing the data for different \(\epsilon\) values show that for all \(v\) repulsive interaction causes highest (lowest) \(\rho_{+}(\rho_{-})\). This behavior can be explained from our data in Fig. 4a, where the difference between the two curves (red square and blue diamond), which represents the asymmetry between the sizes of density peak and trough, remains unchanged for a significant range of positive \(\epsilon\) and increases when \(\epsilon\) is close to 1. Note that this asymmetry is directly related to the current as shown in Eq. 19. For large \(\rho\), on the other hand, current shows a peak at \(\epsilon\simeq 0.7\) and decreases beyond that. This behavior is consistent with our data in Fig. 4b, where the two curves are seen farthest apart at that particular \(\epsilon\). Note that mean-field theory can qualitatively capture the peak in current for large \(\rho\) but for small \(\rho\) it is unable to reproduce the upswing shown by our data near \(\epsilon=1\). We find similar disagreement in Fig. 4a as well where mean-field theory hardly captures the variation of trough size. Fig. 7 shows the plot of scaled current \(JL\) vs bulk density \(\rho\) for various \(\epsilon\) values. In the limit \(\rho\to 0\) and \(\rho\to 1\) current vanishes for all \(\epsilon\) as expected. We have been able to analytically show (see Eqs. 32, 37) that in the small density limit current \(\sim\rho^{2}\), while in the large density limit current \(\sim(1-\rho)\). This limiting behavior agrees reasonably well with our numerical data. From Fig. 7, we see that, for an intermediate density \(\rho^{*}\), the current shows a maximum. When \(\epsilon\) takes large negative value, the overall magnitude of the current is low because of strong attractive interaction among the particles. As the Figure 6: scaled current are plotted against epsilon along with mean-field results (presented by dotted line (\(\rho=0.2\)), short-dashed line (\(\rho=0.29\)), dot-dashed line (\(\rho=0.4\)), solid line (\(\rho=0.75\)) and dashed line (\(\rho=0.85\)). Current vanishes at \(\epsilon=-1\) and remains negative elsewhere. For small and intermediate \(\rho\) current is largest for \(\epsilon=1\), while for large \(\rho\) it shows a peak at a slightly smaller \(\epsilon\) value. Figure 7: Scaled current vs density for various interaction strengths. Discrete points show simulation data and lines show mean-field calculations. We have used line-styles dotted (\(\epsilon=-0.9\)), short-dashed (\(\epsilon=-0.5\)), dashed (\(\epsilon=0\)), solid (\(\epsilon=0.5\)), dot-dashed (\(\epsilon=0.97\)) and a dot-dot-dashed line (\(\epsilon=1\)). For all epsilon values, current shows non-monotonic variation with density. For attractive interaction and moderate repulsive interaction, current shows a single peak at a density \(>1/2\). 
However, as repulsion becomes stronger, current shows two peaks, separated by a minimum at \(\rho=1/2\). Although mean field theory fails to capture the double peak, we offer an alternative simple explanation in the text. particle attraction weakens, the current also becomes larger and the peak at \(\rho^{*}\) gets higher. Our mean-field results successfully capture this trend although \(\rho^{*}\) shows dependence on \(\epsilon\) unlike a nearly constant \(\rho^{*}\simeq 0.75\) obtained from numerics for all \(\epsilon\). However, as \(\epsilon\) changes sign and becomes positive the repulsive interaction does not favor successive occupied sites. This gives rise to a special point at \(\rho=0.5\) and \(\epsilon=1\) when the configuration with alternate sites occupied by particles is the only allowed configuration. No transitions are possible from this configuration and hence current vanishes. This is verified from our numerics where current sharply becomes zero at \(\rho=0.5\) for \(\epsilon=1\). This generates another peak in current at a lower density \(\rho<0.5\). However, even as \(\epsilon\) falls slightly below unity, this effect weakens and the zero of current at half -filled density is replaced by a mild minimum. Unfortunately, our mean-field calculations are unable to capture this effect and predicts a single peak for current for all \(\epsilon\). Fig. 8 depicts variation of current with defect velocity \(v\) for different interaction strength and two different \(\rho\) values. In all cases \(v\to 0\) corresponds to the equilibrium limit when current vanishes. For very large \(v\) the defect movement becomes too fast for the particles to respond and current vanishes here too. An intermediate \(v\) therefore maximizes the current which can be seen both from our numerical data and mean-field calculations. As \(\epsilon\) increases from negative to positive values the peak current increases monotonically for \(\rho=0.29\) (Fig. 8a), while for \(\rho=0.75\) the peak current shows a non-monotonic variation for positive \(\epsilon\) (Fig. 8b). This is consistent with the variation observed in Fig. 6. In Figs. 6 to 8, we have plotted current as a function of one of the three variables \(\rho\), \(\epsilon\) and \(v\), keeping other two constant. To understand the condition of optimum transport, we need to identify how \(\rho\), \(\epsilon\) and \(v\) should be chosen such that the current in the system is maximum. To this end, we present heat-maps in Fig. 9 where we simultaneously vary \(\epsilon\) and \(v\) for fixed \(\rho\). Our numerical data are presented in panels (a), (b) and our mean-field calculations appear in panels (c), (d) in Fig. 9. These plots clearly show repulsive interaction facilitates particle transport. For smaller density current always increases as \(\epsilon\) increases and largest current is obtained at \(\epsilon=1\). For larger density on the other hand, very strong repulsion makes certain transitions energetically unfavorable. This hinders particle transport. Therefore in this case optimum transport is obtained at an intermediate \(\epsilon\) value. Mean-field calculations manage to reproduce this optimality correctly in Fig. 9d, but do not work so well in Fig. 9c. Note that the scale used for low density is widely different from that in the high density. This means when the density is low, a condition for optimum transport can be derived but the current is far smaller than optimum regime for high density. This is seen more clearly in Fig. 10. 
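For completeness, the mean-field curves used in this comparison can be regenerated with a few lines of numerics rather than Mathematica. The sketch below evaluates the residence-time probabilities \(\omega_{1},\ldots,\omega_{6}\) (listed in the Appendix, with \(p=1\)), builds \(a_{\pm}\) from Eqs. (23)-(24), solves Eq. (25) for \(\rho_{-}\) by root finding, obtains \(\rho_{+}\) from Eq. (16), and evaluates the scaled current through Eq. (19). The parameter values and the use of SciPy's brentq are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

def omegas(eps, v):
    """Residence-time hopping probabilities omega_1..omega_6 (Appendix, with p = 1)."""
    e = np.exp
    return np.array([
        (1 - eps) / (2 - eps) * (1 - e(-(2 - eps) / (4 * v))),
        0.5 * (1 - e(-(1 - eps) / (2 * v))),
        0.5 * (1 - e(-1 / (2 * v))),
        1 / (2 - eps) * (1 - e(-(2 - eps) / (4 * v))),
        1 - e(-(1 + eps) / (4 * v)),
        1 - e(-1 / (4 * v)),
    ])

def a_plus(rho, rm, w):
    """Mean-field a_+ of Eq. (23); rm stands for rho_-."""
    return (1 - rho) * ((1 - rho) * (1 - rm) * rho * w[0] + rho**2 * (1 - rm) * w[1]
                        + (1 - rho)**2 * (1 - rm) * w[2] + rho * (1 - rm) * (1 - rho) * w[3]
                        + rm * (1 - rho) * w[4] + rm * rho * w[5])

def a_minus(rho, rm, w):
    """Mean-field a_- of Eq. (24)."""
    return (1 - rm) * (rho * (1 - rho)**2 * w[0] + rho**2 * (1 - rho) * w[1]
                       + (1 - rho)**3 * w[2] + rho * (1 - rho)**2 * w[3]
                       + rho * (1 - rho) * w[4] + rho**2 * w[5])

def scaled_current(rho, eps, v):
    """JL from Eq. (19), with rho_- solving Eq. (25) and rho_+ from Eq. (16)."""
    w = omegas(eps, v)
    f = lambda rm: (rm - rho) * (1 - a_plus(rho, rm, w)) + a_minus(rho, rm, w) * rho
    rm = brentq(f, 0.0, 1.0)              # physically acceptable root in [0, 1]
    rp = rho / (1 - a_plus(rho, rm, w))
    return v * (rp + rm - 2 * rho)

for eps in (-0.6, 0.0, 0.6):
    print(eps, scaled_current(rho=0.29, eps=eps, v=0.16))
```

Sweeping \(\rho\), \(\epsilon\) and \(v\) with this routine should reproduce the mean-field lines and heat-maps shown in the figures, within the approximations discussed above; in particular, for \(q=0\) the current it returns is negative.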
Figure 8: Scaled current \(JL\) plotted against defect velocity \(v\) along with mean-field results (represented by dotted (\(\epsilon=-0.6\)), short-dashed (\(\epsilon=0\)), solid (\(\epsilon=0.6\)) and dot-dashed line (\(\epsilon=1\))) in panels (a) and (b). Current vanishes in the small \(v\) and large \(v\) limit and shows a peak in between. The peak height increases as interaction changes from attraction to repulsion. Largest peak is obtained for a large positive value of \(\epsilon\). Mean field theory explains the numerical data qualitatively. Fig. 10 depicts the heat-maps for current when \(\epsilon\) and \(\rho\) are varied keeping \(v\) constant. Apart from the usual choice of \(v=0.16\), we have also presented data for \(v=1\) here. These plots show to obtain the optimum transport, \(\rho\) needs to be sufficiently high. Since in the high density regime an intermediate strength of repulsive interaction gives maximum current, the optimum transport happens away from \(\epsilon=1\). Note that even in this figure the actual value of maximum current is far higher for \(v=0.16\) compared to \(v=1\) case. Figure 9: Scaled particle current \(JL\) is plotted against \(\epsilon\) and \(v\). Panels (a), (b) represent numerical data while panels (c), (d) show mean-field results. The heat-maps help to trace out the region of \(\epsilon\) and \(v\) corresponding to the optimum transport in the system. Panel (a) shows that for small density, current is maximum for strongest repulsion \(\epsilon=1\), while panel (b) shows that for large \(\rho\) a positive \(\epsilon<1\) optimizes the transport. Note however, the scales chosen for left and right panels which show magnitude of the optimum current is much larger when \(\rho\) is large. Mean- field results work reasonably well for large \(\rho\), but fail to capture the optimum transport regime for small \(\rho\). ## IV Nonzero bulk-hopping rate: \(p=1\), \(r=0\), \(q\neq 0\) In the previous section, we had considered the case when the only possible transition in the system is particle hopping out of the defect site. In the present section we consider \(q\neq 0\) which allows movement of particles in the bulk of the system. We are interested to find out how this bulk dynamics affects the current. As expected, for very small \(q\) our results are similar to what we had presented in the previous section. But as \(q\) increases, there is a significant effect on the current. We argue below that the bulk dynamics is expected to make a positive contribution to the current. Note that for \(q\neq 0\) the density profile remains homogeneous far from the defect site and therefore the non-vanishing contribution to current comes from the dynamics around the defect site. For small enough \(q\) the density profile remains qualitatively similar to our plot in Fig. 2. A diffusive current will flow between the site with density \(\rho_{-}\) and its left neighbor with density \(\rho\). Since \(\rho_{-}<\rho\), this current will be in the positive direction. Therefore, Figure 10: Numerical results for particle current \(JL\) is plotted against \(\epsilon\) and \(\rho\) in panel (a) and (b) while mean-field results are represented in panel (c) and (d). The heat-maps trace out the optimum region for particle current in \(\epsilon-\rho\) plane. The region corresponds to large \(\rho\) and a high positive \(\epsilon\). Magnitude of optimum current is higher in left panels where intermediate \(v\) value is used. 
inclusion of bulk dynamics adds a positive component to the system current. We do not have a mean-field theory for this case to support the numerical data. We have considered only small and moderate \(q\) in our study. In Fig. 11 we plot \(JL\) vs \(\epsilon\) for fixed \(v\) and two different \(\rho\) values. For \(q=0.05\) the behavior is very similar to the trend observed in Fig. 6. As \(q\) increases \(JL\) becomes more positive as explained above. For small density \(JL\) even reverses sign for \(q=0.5\). For large density since \(JL\) starts from large negative values for \(q=0\) it remains negative even when \(q=0.5\) although its magnitude decreases because of larger positive contribution coming from bulk dynamics. Similar trends are observed in the variation of \(JL\) vs \(\rho\) (Fig. 12)or \(JL\) vs \(v\) (Fig. 13). To identify the parameter regime for optimal transport we show the heatmap in Fig.14. We have four relevant parameters here: \(\epsilon,v,\rho\) and \(q\). For a fixed \(\rho\) we show the variation of current in \(\epsilon-v\) plane for two different \(q\) values. For \(q=0.1\) current has both positive and negative peaks, i.e optimum current can flow in the same direction of defect movement or in the opposite direction. This can be clearly seen from Fig. 14. For \(q=0.5\) however, only positive current is possible and optimum transport always happens in the direction of defect movement. ## V Summary and concluding remarks In this paper, we have studied a class of stochastic lattice gases of hardcore particles with nearest-neighbor interaction, where the system is driven by a localized potential barrier (referred to as a "defect") moving on a ring. We find that the inter-particle interaction is crucial in controlling particle transport in the system: In the presence of an Figure 14: Numerical results for current \(JL\) is plotted against \(\epsilon\) and \(v\) at \(\rho=0.29\). For small \(q\), a positive and a negative peak in the variation of current can be observed from panel (a) while panel (b) shows that there exists a single positive peak in its variation. Such peaks occur at large positive \(\epsilon\). Figure 13: Scaled current \(JL\) is plotted against defect velocity \(v\) for different \(\epsilon\). For small \(q\) current remains almost flat at small \(v\) and reverses its direction at an intermediate \(v\), while for moderate \(q\) it remains positive throughout for all \(\epsilon\). attractive interaction, the time-averaged dc current decreases, whereas a repulsive interaction increases the current significantly, thus resulting in an interaction-dominated regime of particle transport. The moving potential barrier creates a traveling density inhomogeneity, which generates a current in the negative direction, i.e., in the direction opposite to its movement, while the bulk diffusion generates a current in the direction along the barrier movement. As a result, when the bulk hopping (diffusion) rate vanishes, i.e., when \(q=0\), the particle current is always negative and shows a negative peak as the barrier movement speed \(v\) and bulk density \(\rho\) are varied. Quite remarkably, the negative peak in the current is further enhanced when a strong repulsive interaction is present among the particles. 
On the other hand, for the bulk hopping rate \(q\neq 0\), as defect speed \(v\), bulk density \(\rho\) and interaction strength \(\epsilon\) are varied, the particle current shows both positive and negative peaks, which are due to the competition between the positive contribution from bulk diffusion and the negative contribution from the defect movement; however the extent of variation is weaker in this case compared to that for \(q=0\). We have been able to identify the precise parameter regime for an optimum transport, which indeed maximizes the magnitude of the current. In the case of attractive interaction, a particle prefers to have its nearest neighbor occupied, giving rise to particle clustering. The contribution in the current from the transitions which cause fragmentation of the clusters decreases as the strength of attractive interaction increases, thus resulting in a decreased current. Indeed, unlike repulsive interaction, the current decreases monotonically with the attractive interaction strength, irrespective of defect speed and bulk density. To theoretically understand the above results, we perform a modified mean-field calculation, which - for repulsive interaction, high particle density and negligible bulk diffusion - agrees reasonably well with simulations. Strong attractive interaction causes particle clustering, leading to strong spatial correlations in the system, and the mean-field theory in that case does not work well. Also, for large bulk diffusion, our mean-field theory does not show satisfactory agreement with simulations, again due to the built up of quite strong spatial correlations in the system. The role of inter-particle interactions in controlling particle transport in the presence of a time-periodic drive can be tested in experiments. A periodic potential energy landscape can be created by superimposing external rotating magnetic field on local periodic arrangement of micro-magnets [36]. With the help of this periodic potential, micron-size super paramagnetic beads can be separated from a complex mixture by transporting the beads across a substrate. Then, by tuning the rotational frequency of the external field, the mobility of a specific type of beads can be significantly reduced. the interaction among the paramagnetic colloidal particles can be directly tuned using a modulated ratchet potential [37]. In a system of paramagnetic particles dispersed in water, driven across a striped patterned magnetic garnet film, an external rotating magnetic field induces a periodic potential energy landscape and causes directed motion of the particles. Interestingly, by varying the ellipticity of the rotating magnetic field, the inter-particle interaction can be changed from attractive to repulsive. Our conclusions can be tested in this kind of a setup. Throughout this work, we have considered a class of interacting many-particle models, albeit only on a lattice where particles hop in discrete steps, and it would be quite interesting to investigate particle transport in a continuum. Indeed, in the past, there has already been some progress in this direction where the directed particle transport in continuum was found to be crucially dependent on the precise protocols of the external drive under consideration. For example, a sinusoidally varying traveling wave potential is known to generate a current always in the direction of the traveling wave for a system of particles diffusing on a one-dimensional ring [38]. 
However, in a previous work from our group, it was demonstrated, using numerical simulations, that a moving potential barrier can in fact generate current in either direction, depending on whether the potential moves uniformly or in discrete jumps [39]. Interestingly, some recent studies have reported multiple current reversals for Brownian particles in the presence of a traveling wave potential [40; 41; 42]. In a slightly different context, Ref. [43] numerically investigated the effect of interaction on particle transport in asymmetric channels and observed that, depending on the frequency of the external periodic drive, it is possible to enhance transport by tuning the interaction potential. For single-file diffusion of colloidal particles in an external time-varying force field, various types of interactions such as Weeks-Chandler-Andersen, Yukawa, and superparamagnetic potentials were considered [44], and anomalous transport was observed. Indeed, a theoretical understanding of transport in the continuum, with such realistic potentials and in the presence of a time-periodic drive, will be of significant interest in the context of obtaining the most efficient directed flow. However, it is worth mentioning here that analytical calculations in such many-particle continuum models are quite challenging. In this scenario, theoretical studies of lattice models such as those presented here are quite relevant and useful, particularly in terms of analytically calculating the transport properties of these systems, and could initiate further research in this direction.

## Appendix: Calculation of \(a_{\pm}\) for \(q=r=0\)

In the main text, \(a_{+}\) (\(a_{-}\)) is defined as the conditional probability that, given the defect site is occupied, a particle hops from the defect site to its empty right (left) neighbor site during the residence time \(\tau\) of the defect at a single site. Eq. (7) provides a formal mathematical definition for \(a_{\pm}\). In this appendix we outline the calculation of \(\omega_{i}^{\pm}\), with \(i=1,2,\ldots,6\), as explicit functions of \(\epsilon\) and \(v\). Let \(\hat{1}\) (\(\hat{0}\)) denote an occupied (empty) defect site. When a particle hops rightward from the defect site, there are six possible local configurations, which are: 00\(\hat{1}\)01, 10\(\hat{1}\)01, 00\(\hat{1}\)00, 10\(\hat{1}\)00, 1\(\hat{1}\)00, 1\(\hat{1}\)01. We number them as \(i=1,2,\ldots,6\). Similarly, for leftward hopping the possibilities are: 10\(\hat{1}\)00, 10\(\hat{1}\)01, 00\(\hat{1}\)00, 00\(\hat{1}\)01, 00\(\hat{1}\)1 and 10\(\hat{1}\)1. For a system of size \(L\) we divide one Monte Carlo step into \(L\) time-intervals of length \(dt=1/L\), where \(L\gg 1\). For a specific local configuration \(i\), \(\omega_{i}^{+}\) (\(\omega_{i}^{-}\)) is defined as the probability that a particle hops from the defect site to its right (left) neighboring site during \(\tau\). The configurations are numbered in such a way that \(\omega_{i}^{+}=\omega_{i}^{-}=\omega_{i}\). Below we discuss only the rightward hopping events, which can be easily generalized to leftward hopping as well.

### Calculation for \(\omega_{1}=\omega(00\hat{1}01)\)

For the local configuration 00\(\hat{1}\)01 the probability that the particle hopping event takes place during the first infinitesimal time step \(dt\) is given by \((1-\epsilon)pdt/4\) (see Fig. 1).
Probability that no hopping takes place in this interval is \[\bigg{(}1-\frac{pdt(1-\epsilon)}{4}-\frac{pdt}{4}\bigg{)}\] (A-1) which includes the possibilities both leftward and rightward hopping attempts was unsuccessful. The probability that the hopping event takes place after time \(2dt\) is therefore, \[\bigg{(}1-\frac{pdt(1-\epsilon)}{4}-\frac{pdt}{4}\bigg{)}\bigg{(}\frac{(1- \epsilon)pdt}{4}\bigg{)}.\] (A-2) Similarly the probability that it takes place at time \(3dt\) is \[\bigg{(}1-\frac{pdt(1-\epsilon)}{4}-\frac{pdt}{4}\bigg{)}^{2}\bigg{(}\frac{(1 -\epsilon)pdt}{4}\bigg{)}\] (A-3) and so on. So the probability \(\omega_{1}\) that the hopping happens in any of the \(\tau/dt\) time steps is \[\frac{(1-\epsilon)pdt}{4}\bigg{[}1+\bigg{(}1-\frac{(2-\epsilon)pdt}{4}\bigg{)} +\bigg{(}1-\frac{(2-\epsilon)pdt}{4}\bigg{)}^{2}+.....+\bigg{(}1-\frac{(2- \epsilon)pdt}{4}\bigg{)}^{(\tau/dt)-1}\bigg{)}\bigg{]}\] \[=\frac{(1-\epsilon)pdt}{4}\bigg{(}\frac{1-(1-(2-\epsilon)pdt/4)^{\tau/dt}}{1 -(1-(2-\epsilon)pdt/4)}\bigg{)}=\frac{1-\epsilon}{2-\epsilon}\bigg{(}1-e^{-(2 -\epsilon)/4v}\bigg{)}\] (A-4) where we have used \(\tau=1/v\) and \(dt\to 0\). ### Results for remaining \(\omega\) Following similar steps as outlined above, expressions for all other \(\omega\) can be derived. We directly present the final results here \[\omega_{2}=\omega(10\hat{1}01) = \frac{1}{2}\bigg{(}1-e^{-p(1-\epsilon)/2v}\bigg{)}\] \[\omega_{3}=\omega(00\hat{1}00) = \frac{1}{2}\bigg{(}1-e^{-p/2v}\bigg{)}\] \[\omega_{4}=\omega(10\hat{1}00) = \frac{1}{2-\epsilon}\bigg{(}1-e^{-p(2-\epsilon)/4v}\bigg{)}\] (A-5) \[\omega_{5}=\omega(1\hat{1}00) = \bigg{(}1-e^{-p(1+\epsilon)/4v}\bigg{)}\] \[\omega_{6}=\omega(1\hat{1}01) = \bigg{(}1-e^{-p/4v}\bigg{)}\] ### Expressions for \(\mathcal{C}^{\pm}_{i}\) We provide the formal definitions for \(\mathcal{C}^{\pm}_{i}\) below. These denote the conditional probability of a specific local configuration, given that the defect site is occupied. 
\[\mathcal{C}^{+}_{1}=\text{Prob.}(00\hat{1}01|\hat{1})=\frac{\left\langle(1- \eta^{(\alpha)}_{\alpha-1})(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{ \alpha+1}(1-\eta^{(\alpha)}_{\alpha+2})\eta^{(\alpha)}_{\alpha+3}\right\rangle }{\langle\eta^{(\alpha)}_{\alpha+1}\rangle}\] (A-6) \[\mathcal{C}^{+}_{2}=\text{Prob.}(10\hat{1}01|\hat{1})=\frac{\left\langle\eta^ {(\alpha)}_{\alpha-1}(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{\alpha+1}( 1-\eta^{(\alpha)}_{\alpha+2})\eta^{(\alpha)}_{\alpha+3}\right\rangle}{\langle \eta^{(\alpha)}_{\alpha+1}\rangle}\] (A-7) \[\mathcal{C}^{+}_{3}=\text{Prob.}(00\hat{1}00|\hat{1})=\frac{\left\langle(1- \eta^{(\alpha)}_{\alpha-1})(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{ \alpha+1}(1-\eta^{(\alpha)}_{\alpha+2})(1-\eta^{(\alpha)}_{\alpha+3})\right\rangle }{\langle\eta^{(\alpha)}_{\alpha+1}\rangle}\] (A-8) \[\mathcal{C}^{+}_{4}=\text{Prob.}(10\hat{1}00|\hat{1})=\frac{\left\langle\eta^ {(\alpha)}_{\alpha-1}(1-\eta^{(\alpha)}_{\alpha})\eta^{(\alpha)}_{\alpha+1}( 1-\eta^{(\alpha)}_{\alpha+2})(1-\eta^{(\alpha)}_{\alpha+3})\right\rangle}{ \langle\eta^{(\alpha)}_{\alpha+1}\rangle}\] (A-9) \[\mathcal{C}^{+}_{5}=\text{Prob.}(1\hat{1}00|\hat{1})=\frac{\left\langle\eta^ {(\alpha)}_{\alpha}\eta^{(\alpha)}_{\alpha+1}(1-\eta^{(\alpha)}_{\alpha+2})( 1-\eta^{(\alpha)}_{\alpha+3})\right\rangle}{\langle\eta^{(\alpha)}_{\alpha+1 }\rangle}\] (A-10) \[\mathcal{C}^{+}_{6}=\text{Prob.}(1\hat{1}01|\hat{1})=\frac{\left\langle\eta^ {(\alpha)}_{\alpha}\eta^{(\alpha)}_{\alpha+1}(1-\eta^{(\alpha)}_{\alpha+2}) \eta^{(\alpha)}_{\alpha+3}\right\rangle}{\langle\eta^{(\alpha)}_{\alpha+1}\rangle}\] (A-11) \[\mathcal{C}_{1}^{-}=\text{Prob.}(10\hat{1}00|\hat{1})=\frac{\left\langle\eta_{\alpha -1}^{(\alpha)}(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{(\alpha)}(1-\eta_{ \alpha+2}^{(\alpha)})(1-\eta_{\alpha+3}^{(\alpha)})\right\rangle}{\left\langle \eta_{\alpha+1}^{(\alpha)}\right\rangle} \tag{12}\] \[\mathcal{C}_{2}^{-}=\text{Prob.}(10\hat{1}01|\hat{1})=\frac{\left\langle\eta_{ \alpha-1}^{(\alpha)}(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{(\alpha)}(1- \eta_{\alpha+2}^{(\alpha)})\eta_{\alpha+3}^{(\alpha)}\right\rangle}{\left\langle \eta_{\alpha+1}^{(\alpha)}\right\rangle} \tag{13}\] \[\mathcal{C}_{3}^{-}=\text{Prob.}(00\hat{1}00|\hat{1})=\frac{\left\langle(1- \eta_{\alpha-1}^{(\alpha)})(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{( \alpha)}(1-\eta_{\alpha+2}^{(\alpha)})(1-\eta_{\alpha+3}^{(\alpha)})\right\rangle }{\left\langle\eta_{\alpha+1}^{(\alpha)}\right\rangle} \tag{14}\] \[\mathcal{C}_{4}^{-}=\text{Prob.}(00\hat{1}01|\hat{1})=\frac{\left\langle(1- \eta_{\alpha-1}^{(\alpha)})(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{( \alpha)}(1-\eta_{\alpha+2}^{(\alpha)})\eta_{\alpha+3}^{(\alpha)}\right\rangle }{\left\langle\eta_{\alpha+1}^{(\alpha)}\right\rangle} \tag{15}\] \[\mathcal{C}_{5}^{-}=\text{Prob.}(00\hat{1}1|\hat{1})=\frac{\left\langle(1- \eta_{\alpha-1}^{(\alpha)})(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{( \alpha)}\eta_{\alpha+2}^{(\alpha)}\right\rangle}{\left\langle\eta_{\alpha+1}^ {(\alpha)}\right\rangle} \tag{16}\] \[\mathcal{C}_{6}^{-}=\text{Prob.}(10\hat{1}1|\hat{1})=\frac{\left\langle\eta_ {\alpha-1}^{(\alpha)}(1-\eta_{\alpha}^{(\alpha)})\eta_{\alpha+1}^{(\alpha)} \eta_{\alpha+2}^{(\alpha)}\right\rangle}{\left\langle\eta_{\alpha+1}^{(\alpha )}\right\rangle} \tag{17}\]
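All six closed forms in (A-5) follow the same pattern: for a given local configuration, the probability of a particular hop during the residence window is (rate of that hop)/(total escape rate) multiplied by \(1-e^{-(\text{total rate})\tau}\), with elementary rates \(p/4\), \(p(1-\epsilon)/4\) and \(p(1+\epsilon)/4\) for moves that leave the number of nearest-neighbor pairs unchanged, increase it, or decrease it, respectively, as in the \(\omega_{1}\) calculation above. The short sketch below rebuilds the list from this rule and checks it against the closed forms; the configuration encoding and parameter values are illustrative.

```python
import numpy as np

p, eps, v = 1.0, 0.6, 0.16   # illustrative values
tau = 1.0 / v

def hop_rate(cfg, i, d):
    """Elementary rate for the particle at index i of local config cfg to hop by d.

    Neutral moves occur at rate p/4, moves that form a nearest-neighbor pair at
    p(1-eps)/4 and moves that break one at p(1+eps)/4 (sites outside cfg are empty).
    """
    get = lambda k: cfg[k] if 0 <= k < len(cfg) else 0
    if get(i + d):                        # destination occupied: no hop possible
        return 0.0
    dbonds = get(i + 2 * d) - get(i - d)  # pairs formed minus pairs broken
    return p / 4 * ((1 - eps) if dbonds > 0 else (1 + eps) if dbonds < 0 else 1.0)

def omega(cfg, i, d):
    """Probability that the hop i -> i+d happens during the residence time tau."""
    r_this = hop_rate(cfg, i, d)
    r_tot = hop_rate(cfg, i, 1) + hop_rate(cfg, i, -1)
    return r_this / r_tot * (1 - np.exp(-r_tot * tau))

# The six rightward-hop configurations of the Appendix; the defect particle sits
# at index 2 of the five-site windows and at index 1 of the four-site ones.
configs = {1: ((0, 0, 1, 0, 1), 2), 2: ((1, 0, 1, 0, 1), 2), 3: ((0, 0, 1, 0, 0), 2),
           4: ((1, 0, 1, 0, 0), 2), 5: ((1, 1, 0, 0), 1), 6: ((1, 1, 0, 1), 1)}

closed_forms = {   # Eqs. (A-4) and (A-5)
    1: (1 - eps) / (2 - eps) * (1 - np.exp(-p * (2 - eps) / (4 * v))),
    2: 0.5 * (1 - np.exp(-p * (1 - eps) / (2 * v))),
    3: 0.5 * (1 - np.exp(-p / (2 * v))),
    4: 1 / (2 - eps) * (1 - np.exp(-p * (2 - eps) / (4 * v))),
    5: 1 - np.exp(-p * (1 + eps) / (4 * v)),
    6: 1 - np.exp(-p / (4 * v))}

for k, (cfg, i) in configs.items():
    print(k, round(omega(cfg, i, +1), 6), round(closed_forms[k], 6))
```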
2306.07361
A sub-functor for Ext and Cohen-Macaulay associated graded modules with bounded multiplicity-II
Let $(A,\mathfrak{m})$ be a Cohen-Macaulay local ring; the notion of a $T$-split sequence was introduced in part-1 of this paper for the $\mathfrak{m}$-adic filtration with the help of the numerical function $e^T_A$. We have explored the relation between AR-sequences and $T$-split sequences. For a Gorenstein ring $(A,\mathfrak{m})$ we define a Hom-finite Krull-Remak-Schmidt category $\mathcal{D}_A$ as a quotient of the stable category $\underline{\mathrm{CM}}(A)$. This category preserves isomorphism, i.e., $M\cong N$ in $\mathcal{D}_A$ if and only if $M\cong N$ in $\underline{\mathrm{CM}}(A)$. This article has two objectives; the first is to extend the notion of $T$-split sequences, and the second is to explore the function $e^T_A$ and $T$-split sequences. When $(A,\mathfrak{m})$ is an analytically unramified Cohen-Macaulay local ring and $I$ is an $\mathfrak{m}$-primary ideal, we extend the techniques of part-1 of this paper to the integral closure filtration with respect to $I$ and prove a version of Brauer-Thrall-II for a class of such rings.
Ankit Mishra, Tony J. Puthenpurakal
2023-06-12T18:32:29Z
http://arxiv.org/abs/2306.07361v1
# A sub-functor for Ext and Cohen-Macaulay associated graded modules with bounded multiplicity-II

###### Abstract

Let \((A,\mathfrak{m})\) be a Cohen-Macaulay local ring; the notion of a \(T\)-split sequence was introduced in part-1 of this paper for the \(\mathfrak{m}\)-adic filtration with the help of the numerical function \(e_{A}^{T}\). We have explored the relation between AR-sequences and \(T\)-split sequences. For a Gorenstein ring \((A,\mathfrak{m})\) we define a Hom-finite Krull-Remak-Schmidt category \(\mathcal{D}_{A}\) as a quotient of the stable category \(\underline{\text{CM}}(A)\). This category preserves isomorphism, i.e., \(M\cong N\) in \(\mathcal{D}_{A}\) if and only if \(M\cong N\) in \(\underline{\text{CM}}(A)\). This article has two objectives; the first is to extend the notion of \(T\)-split sequences, and the second is to explore the function \(e_{A}^{T}\) and \(T\)-split sequences. When \((A,\mathfrak{m})\) is an analytically unramified Cohen-Macaulay local ring and \(I\) is an \(\mathfrak{m}\)-primary ideal, we extend the techniques of part-1 of this paper to the integral closure filtration with respect to \(I\) and prove a version of Brauer-Thrall-II for a class of such rings.

Key words and phrases: Associated graded rings and modules, Brauer-Thrall conjectures, strict complete intersections, Henselian rings, Ulrich modules, integral closure filtration

**Definition 1.1**.: We say \(s\) is \(T\)-split if \(e^{T}_{A}(s)=0\).

**Definition 1.2**.: Let \(M,N\) be MCM \(A\)-modules. Set \[T_{A}(M,N)=\{s\mid s\text{ is a $T$-split extension}\}.\] We proved [16, 1.4],

**Theorem 1.3**.: _(with notation as above) \(T_{A}\colon\operatorname{CM}(A)\times\operatorname{CM}(A)\to\operatorname{mod}(A)\) is a sub-functor of \(\operatorname{Ext}^{1}_{A}(-,-)\)._

It is not clear from the definition whether \(T_{A}(M,N)\) is non-zero. Theorem [16, 1.5] shows that there are plenty of \(T\)-split extensions if \(\dim\operatorname{Ext}^{1}_{A}(M,N)>0\). We proved [16, 1.5]

**Theorem 1.4**.: _Let \((A,\mathfrak{m})\) be a Cohen-Macaulay local ring and let \(M,N\) be MCM \(A\)-modules. Then_ \[\operatorname{Ext}^{1}_{A}(M,N)/T_{A}(M,N)\quad\text{has finite length.}\]

Note Theorem 1.4 has no content if \(M\) is free on the punctured spectrum of \(A\). One of our motivations for this paper was to investigate \(T_{A}(M,N)\) when \(M\) is free on the punctured spectrum of \(A\). Now assume \((A,\mathfrak{m})\) is Henselian and \(M\) is an indecomposable MCM \(A\)-module with \(M_{\mathfrak{p}}\) free for all \(\mathfrak{p}\in\operatorname{Spec}^{0}(A)=\operatorname{Spec}(A)\setminus\{\mathfrak{m}\}\); then a fundamental short exact sequence known as the Auslander-Reiten (AR)-sequence ending at \(M\) exists. For a good introduction to AR-sequences see [19, Chapter 2]. The following result gives a large number of examples of AR-sequences which are \(T\)-split.

**Theorem 1.5**.: _Let \((Q,\mathfrak{n})\) be a Henselian regular local ring and \(\underline{f}=f_{1},\ldots,f_{c}\in\mathfrak{n}^{2}\) a regular sequence. Set \(I=(f_{1},\ldots,f_{c})\) and \((A,\mathfrak{m})=(Q/I,\mathfrak{n}/I)\). Assume \(\dim A=1\). Let \(M\) be an indecomposable MCM \(A\)-module with \(\operatorname{cx}_{A}M\geq 2\). Assume \(M\) is free on \(\operatorname{Spec}^{0}(A)\).
Set \(M_{n}=\operatorname{Syz}^{A}_{n}(M)\), then for \(n\gg 0\) the AR-sequences ending in \(M_{n}\) are \(T\)-split._ For hypersurfaces defined by quadrics we prove: **Theorem 1.6**.: _Let \((Q,\mathfrak{n})\) be a Henselian regular local ring with algebraically closed residue field \(k=Q/\mathfrak{n}\) and let \(f\in\mathfrak{n}^{2}\setminus\mathfrak{n}^{3}\). Assume the hypersurface \(A=Q/(f)\) is an isolated singularity. Then all but finitely many AR-sequences in \(A\) are \(T\)-split._ Theorems 1.5 and 1.6 shows that \(T\)-split sequences are abundant in general. However the following example is important: **Example 1.7**.: There exists a complete hypersurface isolated singularity \(A\) and an indecomposable MCM \(A\)-module \(M\) such that \(T_{A}(M,N)=0\) for any MCM \(A\)-module \(N\). **1.8**.: Now assume \(A\) is Gorenstein. As observed in [13] the function \(e_{A}^{T}(-)\) is infact a function on \(\underline{\mathrm{CM}}(A)\) the stable category of all MCM \(A\)-modules. Let \(M\) and \(N\) be MCM \(A\)-modules. It is well-known that we have a natural isomorphism \[\eta\colon\underline{\mathrm{Hom}}_{A}(M,N)\cong\mathrm{Ext}^{1}_{A}(\Omega^{- 1}(M),N)\] Let \(T_{A}(\Omega^{-1}(M),N)\) denotes the set of all \(T\)-split sequences in \(\mathrm{Ext}^{1}_{A}(\Omega^{-1}(M),N)\). We denote \(\eta^{-1}(T_{A}(\Omega^{-1}(M),N))\) by \(\mathcal{R}(M,N)\). Then \(\eta\) induces following isomorphism \[\frac{\underline{\mathrm{Hom}}_{A}(M,N)}{\mathcal{R}(M,N)}\cong\frac{\mathrm{ Ext}^{1}_{A}(\Omega^{-1}(M),N)}{T_{A}(\Omega^{-1}(M),N)}.\] Suprisingly **Proposition 1.9**.: \(\mathcal{R}\) _is a relation on \(\underline{\mathrm{CM}}(A)\)._ Thus we may consider the quotient category \(\mathcal{D}_{A}=\underline{\mathrm{CM}}(A)/\mathcal{R}\). Clearly \(\mathcal{D}_{A}\) is a Hom-finite additive category. Surprisingly the following result holds **Theorem 1.10**.: _Let \((A,\mathfrak{m})\) be a Henselian Gorenstein local ring and let \(M\) and \(N\) be MCM \(A\)-modules. Then the following holds_ 1. \(M\cong N\) _in_ \(\mathcal{D}_{A}\) _if and only if_ \(M\cong N\) _in_ \(\underline{\mathrm{CM}}(A)\)_._ 2. \(M\) _is indecomposable in_ \(\mathcal{D}_{A}\) _if and only if_ \(M\) _is indecomposable in_ \(\underline{\mathrm{CM}}(A)\)__ 3. \(\mathcal{D}_{A}\) _is a Krull-Remak-Schmidt (KRS) category._ The main application of \(T\)-split sequences was to study _Weak Brauer-Thrall-II_ for associated graded modules for a large class of rings. Note that in [16] the concept was introduced only for \(\mathfrak{m}\)-adic filtration, but for general \(I(\neq\mathfrak{m})\)-adic filtrations that method will not work (see [16, Remark 3.2]). In this article we extend the results in [16] to a large family of filtrations. Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\) and let \(I\) be an \(\mathfrak{m}\)-primary ideal. Let \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\) be an \(I\)-admissible filtration. Let \(M\) be an MCM \(A\)-module. **1.11**.: Set \[e_{\mathcal{F}}^{T}(M)=\lim_{n\to\infty}\frac{(d-1)!}{n^{d-1}}\ell(\mathrm{ Tor}^{A}_{1}(M,A/I_{n+1}))\] Then \(e_{\mathcal{F}}^{T}(M)=0\) if and only if \(M\) is free (see [9, Theorem 7.5]). Let \(M,N\) be maximal Cohen-Macaulay \(A\)-modules and \(\alpha\in\mathrm{Ext}^{1}_{A}(M,N)\). Let \(\alpha\) be given by an extension \(0\to N\to E\to M\to 0\), here \(E\) is an maximal Cohen-Macaulay module. 
Now set \[e_{\mathcal{F}}^{T}(\alpha)=e_{\mathcal{F}}^{T}(M)+e_{\mathcal{F}}^{T}(N)-e_{ \mathcal{F}}^{T}(E).\] It can be shown that \(e_{\mathcal{F}}^{T}(\alpha)\geq 0\), see 3.6. **Definition 1.12**.: An extension \(s\in\mathrm{Ext}^{1}_{A}(M,N)\) is \(T\mathcal{F}\)-split if \(e_{\mathcal{F}}^{T}(s)=0\). As before we can show that \(T_{\mathcal{F}}(M,N)\) is a submodule of \(\operatorname{Ext}^{1}_{A}(M,N)\) (see 4.1). Furthermore \(T_{\mathcal{F}}\colon\operatorname{CM}(-)\times\operatorname{CM}(-)\to \operatorname{mod}(A)\) is a sub-functor of \(\operatorname{Ext}^{1}_{A}(-,-).\), see 4.2. **Theorem 1.13**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\) with one of the following conditions:_ 1. _the residue field_ \(k(=A/\mathfrak{m})\) _is uncountable._ 2. _the residue field_ \(k\) _is perfect field._ _Let \(I\) be an \(\mathfrak{m}\)-primary ideal and \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M,N\) be MCM \(A\)-module then \(\operatorname{Ext}^{1}_{A}(M,N)/T_{\mathcal{F}}(M,N)\) has finite length._ Next we prove following theorem. **Theorem 1.14**.: _Let \((A,\mathfrak{m})\) be a complete reduced Cohen-Macaulay local ring of dimension \(d\geq 1\) with either uncountable residue field or a perfect residue field. Let \(I\) be an \(\mathfrak{m}\)-primary ideal. Set \(R=A[[X_{1},\ldots,X_{m}]]\), \(J=(I,X_{1},\ldots,X_{m})\), \(\mathcal{I}=\{\overline{I^{n}}\}_{n\in\mathbb{Z}}\), and \(\mathcal{J}=\{\overline{J^{n}}\}_{n\in\mathbb{Z}}\). If \(A\) has an MCM module \(M\) with \(G_{\mathcal{I}}(M)\) Cohen-Macaulay. Then there exists \(\{E_{n}\}_{n\geq 1}\) indecomposable MCM \(R\)-modules with bounded multiplicity (with respect to \(\mathcal{J}\)) and having \(G_{\mathcal{J}}(E_{n})\) Cohen-Macaulay for all \(n\geq 1\)._ Let \(e^{T}_{\mathcal{F}}\colon\operatorname{Ext}^{1}_{A}(M,N)\to\mathbb{N}\) be the function defined by \(\alpha\mapsto e^{T}_{\mathcal{F}}(\alpha)\). When \(A\) has characteristic \(p>0\) then we can say more about this function. If \(V\) is a vector-space over a field \(k\) then let \(\mathbb{P}(V)\) denote the projective space determined by \(V\). **Theorem 1.15**.: _(with hypotheses as in 1.13) Further assume \(A\) is of characteristic \(p>0\) and that \(A\) contains a field \(k\cong A/\mathfrak{m}\). If \(\operatorname{Ext}^{1}_{A}(M,N)\neq T_{\mathcal{F}}(M,N)\) then the function \(e^{T}_{\mathcal{F}}\) factors as_ \[[\overline{e^{T}_{A}}]\colon\mathbb{P}(\operatorname{Ext}^{1}_{A}(M,N)/T_{ \mathcal{F}}(M,N))\to\mathbb{N}\setminus 0.\] We now describe in brief the contents of this paper. In section two we discuss some preliminary results. In section three we introduce our function 1.11 and discuss few of its properties We also discuss in detail the base changes that we need to prove our results. In section four we prove Theorem 1.13. In the next section we prove Theorem 1.14. In section six we prove Theorem 1.15. In the next section we discuss our result on relation between T-split sequences and AR-sequences. In section eight we prove Theorem 1.5. In the next section we prove Theorem 1.6 and construct Example 1.7. In section ten we prove Proposition 1.9 and Theorem 1.10. ## 2. Preliminaries Let \((A,\mathfrak{m})\) be a Noetherian local ring and \(I\) be an \(\mathfrak{m}\)-primary ideal. Then a filtration \(\mathcal{F}=\{F_{n}\}_{n\in\mathbb{Z}}\) is said to be \(I\)-admissible filtration if 1. 
\(I^{n}\subseteq F_{n}\) for all \(n\). 2. \(F_{n}F_{m}\subseteq F_{n+m}\) for all \(n,m\in\mathbb{Z}\). 3. \(F_{n}=IF_{n-1}\) for \(n\gg 0\). **Definition 2.2**.: A Noetherian local ring \((A,\mathfrak{m})\) is said to be analytically unramified if its \(\mathfrak{m}\)-adic completion is reduced. **2.3**.: Let \(\overline{\mathfrak{a}}\) denote integral closure of the ideal \(\mathfrak{a}\). If \(A\) is analytically unramified then from a result of Rees [17], the integral closure filtration \(\mathcal{F}=\{\overline{I^{n}}\}_{n\in\mathbb{Z}}\) is \(I\)-admissible. **2.4**.: Let \((A,\mathfrak{m})\) be a Noetherian local ring, \(I\) an \(\mathfrak{m}\)-primary ideal and \(\mathcal{F}=\{F_{n}\}_{n\in\mathbb{Z}}\) a \(I\)-admissible filtration. Let \(M\) be a finite \(A\)-module with dimension \(r\). Then the numerical function \(H_{\mathcal{F}}(M,n)=\ell(M/F_{n+1}M)\) is known as the Hilbert function of \(M\) with respect to \(\mathcal{F}\). For large value of \(n\), \(H_{\mathcal{F}}(M,n)\) coincides with a polynomial \(P_{\mathcal{F}}(M,n)\) of degree \(r\); and this polynomial known as the Hilbert polynomial of \(M\) with respect to \(\mathcal{F}\). There exist unique integer \(e_{0}^{\mathcal{F}}(M),e_{1}^{\mathcal{F}}(M),\ldots,e_{r}^{\mathcal{F}}(M)\) such that Hilbert polynomial of \(M\) with respect to \(\mathcal{F}\) can be written as \[P_{\mathcal{F}}(M,n)=\sum_{i=0}^{r}(-1)^{i}e_{i}^{\mathcal{F}}(M)\binom{n+r-i} {r-i}.\] These integers \(e_{0}^{\mathcal{F}}(M),e_{1}^{\mathcal{F}}(M),\ldots,e_{r}^{\mathcal{F}}(M)\) are known as the Hilbert coefficients of \(M\) with respect to \(\mathcal{F}\). In case of \(\mathfrak{m}\)-adic and \(I\)-adic filtrations these coefficients will be denoted as \(e_{i}(M)\) and \(e_{i}^{I}(M)\) for \(i=1,\ldots,r\) respectively. **2.5**.: Let \((A,\mathfrak{m})\) be a Noetherian local ring and \(I\) be an \(\mathfrak{m}\)-primary ideal. Let \(\mathcal{F}=\{F_{n}\}_{n\in\mathbb{Z}}\) be an \(I\)-admissible filtration and \(M\) an \(A\)-module with positive dimension. Then an element \(x\in F_{1}\setminus F_{2}\) is said to be \(\mathcal{F}\)-superficial element for \(M\) if there exists \(c\in\mathbb{N}\) such that for all \(n\geq c\), \[(F_{n+1}M:_{M}x)\cap F_{c}M=F_{n}M.\] The following facts are well known: 1. If \(k=A/\mathfrak{m}\) is infinite, then \(\mathcal{F}\)-superficial elements for \(M\) exist. 2. If \(\operatorname{depth}M>0\) then every \(\mathcal{F}\)-superficial element for \(M\) is also \(M\)-regular. 3. If \(x\) is \(\mathcal{F}\)-superficial element for \(M\) and \(\operatorname{depth}M>0\) then \((F_{n+1}M:_{M}x)=F_{n}M\) for \(n\gg 0\). 4. If \(x\) is \(\mathcal{F}\)-superficial element for \(M\) and \(\operatorname{depth}M>0\) then \(e_{i}^{\overline{\mathcal{F}}}(M/xM)=e_{i}^{\mathcal{F}}(M)\) for \(i=0,1,\ldots,\dim M-1\) (here \(\overline{\mathcal{F}}\) is the obvious quotient filtration of \(\mathcal{F}\)). **2.6**.: A sequence \(\underline{x}=x_{1},\ldots,x_{r}\) with \(r\leq\dim M\) is said to be \(\mathcal{F}\)-superficial sequence for \(M\) if \(x_{1}\) is \(\mathcal{F}\)-superficial element for \(M\) and \(x_{i}\) is \(\mathcal{F}/(x_{1},\ldots,x_{i-1})\)-superficial element for \(M/(x_{1},\ldots,x_{i-1})M\) for all \(i\leq r\). ## 3. The case when \(A\) is analytically unramified Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring with \(\dim A=d\geq 1\), \(I\) an \(\mathfrak{m}\)-primary ideal. We are primarily interested in the integral closure filtration of \(I\). 
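As a toy illustration of the notions recalled in 2.4 (with the \(I\)-adic filtration; this example plays no role in what follows and is included only for concreteness): for the regular local ring \(A=k[[x,y]]\) and the parameter ideal \(I=(x^{2},y)\) one has
\[\ell(A/I^{n+1})=2\binom{n+2}{2}\qquad\text{for all }n\geq 0,\]
so here \(H_{I}(A,n)=P_{I}(A,n)\) with \(e_{0}^{I}(A)=2\) and \(e_{1}^{I}(A)=e_{2}^{I}(A)=0\).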
However to prove our results we need the following class of \(I\)-admissible filtrations \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M\) be an MCM \(A\)-module. **3.1**.: The numerical function \[n\longmapsto\ell(\operatorname{Tor}_{1}^{A}(M,A/I_{n+1}))\] is polynomial type; that is there is a polynomial \(t_{\mathcal{F}}^{A}(M,z)\) which coincides with this numerical function for \(n\gg 0\). If \(M\) is non free MCM \(A\)-module then \(\deg t_{\mathcal{F}}^{A}(M,z)=d-1\) (see [9, Theorem 7.5]). Note that normalised leading coefficient of \(t_{\mathcal{F}}^{A}(M,z)\) is \(e_{1}^{\mathcal{F}}(A)\mu(M)-e_{1}^{\mathcal{F}}(M)-e_{1}^{\mathcal{F}}( \operatorname{Syz}_{1}^{A}(M))\). **3.2**.: Set \[e_{\mathcal{F}}^{T}(M) =\lim_{n\to\infty}\frac{(d-1)!}{n^{d-1}}\ell(\operatorname{Tor}_{ 1}^{A}(M,A/I_{n+1}))\] \[=e_{1}^{\mathcal{F}}(A)\mu(M)-e_{1}^{\mathcal{F}}(M)-e_{1}^{ \mathcal{F}}(\operatorname{Syz}_{1}^{A}(M))\] **3.3**.: **Base-change:** We need to do several base changes in our arguments. (I) We first discuss the general setup: Let \(\psi\colon(A,\mathfrak{m})\to(B,\mathfrak{n})\) be a flat map such that \(B\) is also a Cohen-Macaulay local ring and \(\mathfrak{m}B=\mathfrak{n}\). If \(M\) is an \(A\)-module set \(M_{B}=M\otimes_{A}B\). If \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) is an \(I\)-admissible filtration then set \(\mathcal{F}_{B}=\{I_{n}B\}_{n\in\mathbb{Z}}\). Then 1. \(\ell(N)=\ell(N_{B})\) for any finite length \(A\)-module \(N\). 2. \(\mathcal{F}_{B}\) is an \(IB\)-admissible filtration. 3. \(\dim M=\dim M_{B}\) and \(\operatorname{depth}M=\operatorname{depth}M_{B}\). In particular, \(M\) is an MCM \(A\)-module if and only if \(M_{B}\) is MCM \(B\)-module. 4. \(\operatorname{Syz}_{i}^{A}(M)\otimes_{A}B\cong\operatorname{Syz}_{i}^{B}(M_{ B})\) for all \(i\geq 0\). 5. \(e_{i}^{\mathcal{F}}(M)=e_{i}^{\mathcal{F}_{B}}(M_{B})\) for all \(i\). 6. If \(\psi\) is regular and \(\mathfrak{a}\) is integrally closed \(\mathfrak{m}\)-primary ideal in \(A\) then \(\mathfrak{a}B\) is integrally closed in \(B\) (for instance see [7, 2.2(7)]). (II) Assume \(A\) is analytically unramified Cohen-Macaulay local ring and \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) is an \(I\)-admissible filtration with \(I\), \(\mathfrak{m}\)-primary and furthermore \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). We need to base changes as above where \(\mathcal{F}_{B}\) has the property that \(I_{n}B=I^{n}B=\overline{I^{n}B}\) whenever \(I_{n}=\overline{I^{n}}\). Note this automatically forces \(B\) to be analytically unramified. The specific base changes we do are the following: (i) \(B=\widehat{A}\) the completion of \(A\). Note that if \(J\) is an \(\mathfrak{m}\)-primary integrally closed ideal then \(J\widehat{A}\) is also integrally closed, cf., [8, 9.1.1]. (ii) If \(A\) has a finite residue field then we can consider the extension \(B=A[X]_{\mathfrak{m}A[X]}\). The residue field of \(B\) is \(k(X)\) which is infinite. Note that if \(J\) is an \(\mathfrak{m}\)-primary integrally closed ideal then \(J\widehat{A}\) is also integrally closed, cf., [8, 8.4.2]. (iii) Assume \(\dim A\geq 2\). Even if \(A\) has infinite residue field there might not exist \(\mathcal{F}\)-superficial element \(x\) such that \(A/(x)\) is analytically unramified. However a suitable extension \(B\) has this property. To see this we first observe two facts. Let \(\mathcal{E}\) be a countable set of MCM of \(A\)-modules. 
Assume that the residue field \(k\) of \(A\) is uncountable if \(\mathcal{E}\) is an infinite set. Otherwise \(k\) is infinite. (a) There exist \(\underline{x}=x_{1},\ldots,x_{d}\in\overline{I}\) such that \(\underline{x}\) is \(\mathcal{F}\)-superficial for each \(N\in\mathcal{E}\). This result is well-known (for instance see [14, Lemma 2.2]). (b) There exists a generating set \(r_{1},\ldots,r_{t}\) of \(I\) such that for each \(i\), \(r_{i}\) is \(I\)-superficial and \(\mathcal{F}\)-superficial element for each \(N\in\mathcal{E}\) (see (a) and [9, Lemma 7.3]). ([5],[9, Lemma 7.4, Theorem 7.5]) Choose \(r_{1},\ldots,r_{t}\) as in (b). Now consider following flat extension of rings \[A\to\hat{A}\to B=\hat{A}[X_{1},\ldots,X_{t}]_{\mathfrak{m}\hat{A}[X_{1}, \ldots,X_{t}]}.\] Let \(\zeta=r_{1}X_{1}+\ldots+r_{t}X_{t}\). Set \(C=B/\zeta B\) and \(\mathcal{F}_{C}=\{I_{n}C\}\). For \(N\in\mathcal{E}\), set \(N_{B}=N\otimes_{A}B\). Then we have 1. \(B\) is analytically unramified Cohen-Macaulay local ring of dimension \(d\). 2. \(N_{B}\) is MCM \(B\)-module for each \(N\in\mathcal{E}\). 3. If \(J\) is a integrally closed \(\mathfrak{m}\)-primary ideal in \(A\) then \(JB\) a integrally closed \(\mathfrak{n}\)-primary ideal in \(B\). 4. \(I_{1}C=\overline{I}C=\overline{I}C\). 5. \(I_{n}C=\overline{I^{n}C}\) for all \(n\gg 0\). 6. \(C\) is analytically unramified Cohen-Macaulay local ring of dimension \(d-1\). 7. \(\zeta\) is \(\mathcal{F}_{B}\)-superficial for each \(N_{B}\) (here \(N\in\mathcal{E}\)). (iv) For some of our arguments we need the residue field of \(A\) to be uncountable. If \(k\) is finite or countably infinite do the following: First complete \(A\). By (i) this is possible. So we may assume \(A\) is complete. Consider extension \(\phi:A\longrightarrow A[[X]]_{\mathfrak{m}A[[X]]}=(B,\mathfrak{n})\). Set \(B_{0}=B\otimes_{A}k=B/\mathfrak{m}B\). So \(B_{0}=B/\mathfrak{n}=k((X))\) is uncountable. As \(k\) is perfect we get \(k((X))\) is \(0\)-smooth over \(k\), see [10, 28.7]. Using [10, 28.10] we get \(B\) is \(\mathfrak{n}(=\mathfrak{m}B)\)-smooth. This implies \(\phi\) is regular (see [1, Theorem]). By I(6) if \(\mathfrak{a}\) is an integrally closed \(\mathfrak{m}\)-primary ideal in \(A\) then \(\mathfrak{a}B\) is integrally closed in \(B\). Thus \(I_{n}B=\overline{I^{n}B}\) whenever \(I_{n}=\overline{I^{n}}\). **Definition 3.4**.: We say a flat extension \(\psi\colon(A,\mathfrak{m})\to(B,\mathfrak{n})\) with \(\mathfrak{m}B=\mathfrak{n}\)_behaves well_ with respect to integral closure if for any integrally closed \(\mathfrak{m}\)-primary ideal \(\mathfrak{a}\) in \(A\) the ideal \(\mathfrak{a}B\) is integrally closed in \(B\). We need the following result: **Proposition 3.5**.: _Let \((A,\mathfrak{m})\) be analytically unramified Cohen-Macaulay local ring with \(\dim A=d\geq 1\), \(I\) an \(\mathfrak{m}\)-primary ideal. Let \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) be a \(I\)-admissible filtration where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\)._ 1. _Let_ \((B,\mathfrak{n})\) _be a flat extension of_ \(A\) _which behaves well with respect to integral closure. Set_ \(\mathcal{F}_{B}=\{I_{n}B\}_{n\in\mathbb{Z}}\)_. Then for any MCM_ \(A\)_-module_ \(M\) _we have_ \(e_{\mathcal{F}}^{T}(M)=e_{\mathcal{F}_{B}}^{T}(M_{B})\)_._ 2. _Let_ \(\dim A\geq 2\)_. and the residue field of_ \(A\) _is infinite. Let_ \(\mathcal{V}\) _be any countable set of MCM_ \(A\)_-modules (containing_ \(A\)_). 
Assume_ \(k=A/\mathfrak{m}\) _is uncountable if_ \(\mathcal{V}\) _is infinite otherwise_ \(k\) _is infinite. Then there exists a flat extension_ \(B\) _of_ \(A\) _which behaves well with respect to integral closure such that there exist_ \(\zeta\in IB\) _which is_ \(\mathcal{F}_{B}\)_-superficial with respect to each_ \(N_{B}\) _(for all_ \(N\in\mathcal{V}\)_). Furthermore if_ \(C=B/\zeta B\) _then_ \(C\) _is analytically unramified with_ \(IC=\overline{IC}\) _and_ \(I_{n}C=\overline{I^{n}C}\) _for all_ \(n\gg 0\)_. Set_ \(N_{C}=N\otimes_{A}C\)_. Furthermore_ \(e_{\mathcal{F}}^{T}(N)=e_{\mathcal{F}_{C}}^{T}(N_{C})\) _for each_ \(N\in\mathcal{V}\)_._ Proof.: (1) This follows from 3.3(I). (2) Set \(\mathcal{E}=\{\operatorname{Syz}_{i}^{A}(N)\colon i=0,1\text{ and }N\in\mathcal{V}\}\). Then \(\mathcal{E}\) is a countable set and is finite if \(\mathcal{V}\) is. Now do the construction in 3.3(I)(iii) and use 2.5(4) and 3.2 to conclude. The following lemma follows from [13, Theorem 2.6], but here we give a short proof (Similar proof also works for \(e_{A}^{T}()\)): **Lemma 3.6**.: _Let \(\alpha:0\to N\to E\to M\to 0\) be an exact sequence of MCM \(A\)-modules. Then \(e_{\mathcal{F}}^{T}(E)\leq e_{\mathcal{F}}^{T}(M)+e_{\mathcal{F}}^{T}(N)\)._ Proof.: Consider the long exact sequence of \(\alpha\otimes_{A}A/I_{n+1}\). We get \[\ldots\to\operatorname{Tor}_{1}^{A}(N,A/I_{n+1})\to\operatorname{Tor}_{1}^{A} (E,A/I_{n+1})\to\operatorname{Tor}_{1}^{A}(M,A/I_{n+1})\to\ldots\] So, \(\ell(\operatorname{Tor}_{1}^{A}(E,A/I_{n+1}))\leq\ell(\operatorname{Tor}_{1} ^{A}(M,A/I_{n+1}))+\ell(\operatorname{Tor}_{1}^{A}(N,A/I_{n+1}))\). Now from the definition of \(e_{\mathcal{F}}^{T}(-)\), required inequality follows. **3.7**.: Let \(M,N\) be maximal Cohen-Macaulay \(A\)-modules and \(\alpha\in\operatorname{Ext}_{A}^{1}(M,N)\). Let \(\alpha\) be given by an extension \(0\to N\to E\to M\to 0\), here \(E\) is an maximal Cohen-Macaulay module. Now set \[e_{\mathcal{F}}^{T}(\alpha)=e_{\mathcal{F}}^{T}(M)+e_{\mathcal{F}}^{T}(N)-e_{ \mathcal{F}}^{T}(E).\] **3.8**.: Let \(\alpha_{1},\alpha_{2}\in\operatorname{Ext}_{A}^{1}(M,N)\). Suppose \(\alpha_{i}\) can be given by \(0\to N\to E_{i}\to M\to 0\) for \(i=1,2\). If \(\alpha_{1}\) and \(\alpha_{2}\) are equivalent then \(E_{1}\cong E_{2}\). So \(e_{\mathcal{F}}^{T}(\alpha_{1})=e_{\mathcal{F}}^{T}(\alpha_{2})\). This implies \(e_{\mathcal{F}}^{T}(\alpha)\) is well defined. Note that \(e_{\mathcal{F}}^{T}(\alpha)\geq 0\). **Definition 3.9**.: An extension \(s\in\operatorname{Ext}_{A}^{1}(M,N)\) is \(T_{\mathcal{F}}\)-split if \(e_{\mathcal{F}}^{T}(s)=0\). **Definition 3.10**.: Let \(M,N\) be maximal Cohen-Macaulay \(A\)-modules. Set \[T_{\mathcal{F},A}(M,N)=\{s|s\text{ is a }T_{\mathcal{F}}\text{-split extension}\}.\] Note that if the choice of the ring \(A\) is unambiguous from the context, we denote this set as \(T_{\mathcal{F}}(M,N)\). We will need the following two results: **Lemma 3.11**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\) and let \(M,N,N_{1},E,E_{1}\) be MCM \(A\)-modules. Suppose we have a commutative diagram_ _If \(\alpha\) is \(T_{\mathcal{F}}\) -split, then \(\beta\) is also \(T_{\mathcal{F}}\)-split._ Proof.: If \(\dim A=1\) then we can give an argument similar to [16, Proposition 3.8]. Now assume \(d=\dim A\geq 2\) and the result has been proved for all analytically unramified rings of dimension \(d-1\). If the residue field of \(A\) is finite then use 3.3II.(ii). 
So we may assume \(A/\mathfrak{m}\) is infinite. Using 3.5 we may assume that (after going to a flat extension) there exists \(\zeta\in I\) such that (i) \(\zeta\) is \(\mathcal{F}\)-superficial with respect to \(A\oplus U\oplus\operatorname{Syz}_{1}^{A}(U)\) for each \(U\) in the above diagram. (ii) \(C=A/\zeta A\) is analytically unramified with \(IC=\overline{IC}\) and \(I_{n}C=\overline{I^{n}C}\) for all \(n\gg 0\). (iii) \(e_{\mathcal{F}_{C}}^{T}(U/\zeta U)=e_{\mathcal{F}}^{T}(U)\) for each \(U\) in the above diagram. Notice \(\alpha\otimes C\) and \(\beta\otimes C\) are exact. For an \(A\)-module \(V\) set \(\overline{V}=V/\zeta V\). So we have a diagram Note \(\alpha\otimes C\) is \(T_{\mathcal{F}_{C}}\) -split. By our induction hypotheses \(\beta\otimes C\) is \(T_{\mathcal{F}_{C}}\) -split. By our construction it follows that \(\beta\) is also \(T_{\mathcal{F}}\)-split. **Lemma 3.12**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\) and let \(M,M_{1},N,E,E_{1}\) be MCM \(A\)-modules. Suppose we have a commutative diagram_ _If \(\beta\) is \(T_{\mathcal{F}}\) -split, then \(\alpha\) is also \(T_{\mathcal{F}}\)-split._ Proof.: This is dual to 3.11. ## 4. \(T_{\mathcal{F}}\)-split sequences In this section we prove our results regarding \(T_{\mathcal{F}}\). **Theorem 4.1**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\). Let \(I\) be an \(\mathfrak{m}\)-primary ideal and and \(\mathcal{F}=\{I_{n}\}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M,N\) be MCM \(A\)-module, then \(T_{\mathcal{F}}(M,N)\) is a submodule of \(\operatorname{Ext}^{1}_{A}(M,N)\)._ Proof.: Let \(\alpha:0\to N\to E\to M\to 0\) be a \(T_{\mathcal{F}}\)-split extension and \(r\in A\), then we can define \(r\alpha\) Note that first square is push-out diagram. Since \(\alpha\) is \(T_{\mathcal{F}}\)-split, this implies \(r\alpha\) is also \(T_{\mathcal{F}}\)-split (see 3.11). Let \(\alpha:0\to N\to E\to M\to 0\) and \(\alpha^{\prime}::0\to N\to E^{\prime}\to M\to 0\) be two \(T_{\mathcal{F}}\)-split extensions. We want to show \(\alpha+\alpha^{\prime}\) is also \(T_{\mathcal{F}}\)-split. Note that the addition operation on \(\operatorname{Ext}^{1}_{A}(M,N)\) is Bear sum, that is \(\alpha+\alpha^{\prime}:=(\nabla(\alpha\oplus\alpha^{\prime}))\Delta\). Since \(\alpha\) and \(\alpha^{\prime}\) are \(T_{\mathcal{F}}\)-split this implies \(\alpha\oplus\alpha^{\prime}:0\to N\oplus N\to E\oplus E^{\prime}\to M\oplus M\to 0\) also \(T_{\mathcal{F}}\)-split. Consider following diagram Note that first square is is pushout diagram. From 3.11, \(\nabla(\alpha\oplus\alpha^{\prime})\) is \(T_{\mathcal{F}}\)-split. Now consider the diagram Here second square is pullback diagram. Now from 3.12, \(\alpha+\alpha=(\nabla(\alpha\oplus\alpha^{\prime}))\Delta\) is \(T_{\mathcal{F}}\)-split. We now show **Theorem 4.2**.: _(with hypotheses as in 4.1) \(T_{\mathcal{F}}\colon\operatorname{CM}(-)\times\operatorname{CM}(-)\to \operatorname{mod}(A)\) is a functor._ Proof.: This is similar to [16, 3.13]. We have to use Theorem 4.1 and Lemmas 3.11, 3.12. The following is one of the main result of our paper. **Theorem 4.3**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\) with uncountable residue field. 
Let \(I\) be an \(\mathfrak{m}\)-primary ideal and \(\mathcal{F}=\{I_{n}\}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M,N\) be MCM \(A\)-module then \(\operatorname{Ext}^{1}_{A}(M,N)/T_{\mathcal{F}}(M,N)\) has finite length._ Proof.: We prove this theorem by induction. If \(\dim A=1\), then \(\operatorname{Ext}^{1}_{A}(M,N)\) has finite length. In fact, for any prime ideal \(\mathfrak{p}\neq\mathfrak{m}\), \((\operatorname{Ext}^{1}_{A}(M,N))_{\mathfrak{p}}=0\) because \(A\) is reduced. Note that for dimension one case we do not need any assumption on residue field. We now assume \(\dim A\geq 2\) and result is true for dimension \(d-1\). Let \(\alpha:0\to N\to E\to M\to 0\in\operatorname{Ext}^{1}_{A}(M,N)\) and \(a\in I\). Then we have following pushout diagram of \(R\)-modules for all \(n\geq 1\) Set \(\mathcal{V}=\{M,N,E,E_{n}\colon n\geq 1\}\) and set \[\mathcal{E}=\{A\}\cup\{\operatorname{Syz}^{A}_{i}(U)\colon i=0,1\text{ and }U\in\mathcal{V}\}.\] We now do the base change as described in 3.3.II.(iii). \[A\to\hat{A}\to B=\hat{A}[X_{1},\dots,X_{t}]_{\mathfrak{m}\hat{A}[X_{1},\dots, X_{t}]}\] For any MCM \(A\)-module \(L\), set \(L_{B}=L\otimes_{A}B\). Let \(\mathcal{F}_{B}=\{I_{n}B\}_{n\in\mathbb{Z}}\). From 3.3.II.(iii), for all \(n\geq 1\), \(\zeta\) is \(\mathcal{F}_{B}\)-superficial for \[B\oplus M_{B}\oplus N_{B}\oplus E_{n,B}\oplus\operatorname{Syz}^{B}_{1}(M_{B} )\oplus\operatorname{Syz}^{B}_{1}(N_{B})\oplus\operatorname{Syz}^{B}_{1}(E_{ n,B}).\] Set \(C=B/\zeta B\), \(\mathcal{F}_{C}=\{I_{n}C\}_{n\in\mathbb{Z}}\). Then \(C\) is analytically unramified with \(\dim C=d-1\). Furthermore \(I_{1}C=\overline{I_{1}C}\) and \(I_{n}C=\overline{I^{n}C}\) for \(n\gg 0\). From 3.5, we have for all \(n\geq 0\), \[e^{T}_{\mathcal{F}}(a^{n}\alpha)=e^{T}_{\mathcal{F}_{C},C}(a^{n}\alpha\otimes C )=e^{T}_{\mathcal{F}_{C},C}(\overline{a^{n}}(\alpha\otimes C)).\] But from the assumption result is true for \(C\). So \[e^{T}_{\mathcal{F}_{C},C}(a^{n}\alpha\otimes C)=e^{T}_{\mathcal{F}_{C},C}( \overline{a^{n}}(\alpha\otimes C))=0\quad\text{for }n\gg 0.\] This implies \(e^{T}_{\mathcal{F},A}(a^{n}\alpha)=0\) for \(n\gg 0\). Let \(I=(a_{1},\dots,a_{u})\). It follows that \[(a_{1}^{n_{1}},\dots,a_{u}^{n_{u}})\operatorname{Ext}^{1}_{A}(M,N)\subseteq T _{\mathcal{F}}(M,N).\] So \(\operatorname{Ext}^{1}_{A}(M,N)/T_{\mathcal{F}}(M,N)\) has finite length. **Theorem 4.4**.: _Let \((A,\mathfrak{m})\) be a Cohen-Macaulay analytically unramified local ring of dimension \(d\) with residue field \(k\). Suppose \(k\) is perfect field. Let \(I\) be an \(\mathfrak{m}\)-primary ideal and \(\mathcal{F}=\{I_{n}\}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M,N\) be MCM \(A\)-module then \(\operatorname{Ext}^{1}_{A}(M,N)/T_{\mathcal{F}}(M,N)\) has finite length._ Proof.: By 3.3 we may assume \(A\) is complete. If \(k\) is uncountable, result follows from Theorem 4.3. Now we consider the case when \(k\) is finite or countably infinite. Then by 3.3(iv) there exists a flat local extension \((B,\mathfrak{n})\) of \(A\) with \(\mathfrak{m}B=\mathfrak{n}\) which behaves well with respect to integral closure such that the residue field of \(B\) is uncountable. Set \(\mathcal{F}_{B}=\{I_{n}B\}_{n\in\mathbb{Z}}\), \(M_{B}=M\otimes B\) and \(N_{B}=N\otimes B\). Also note that \(I_{n}B=\overline{I^{n}B}\) for \(n\gg 0\). Let \(\alpha\in\operatorname{Ext}^{1}_{A}(M,N)\) and \(a\in\mathfrak{m}\). 
Then for all \(n\geq 1\) it is easy to see \[e^{T}_{\mathcal{F}}(a^{n}\alpha)=e^{T}_{\mathcal{F}_{B}}((a^{n}\alpha)\otimes B )=e^{T}_{\mathcal{F}_{B}}((a^{n}\otimes 1)(\alpha\otimes B)).\] From Theorem 4.3, \(e^{T}_{\mathcal{F}_{B}}((a^{n}\otimes 1)(\alpha\otimes B))=0\) for \(n\gg 0\). So \(e^{T}_{\mathcal{F}}(a^{n}\alpha)=0\) for \(n\gg 0\). Therefore \(a^{n}\alpha\in T_{\mathcal{F}}(M,N)\) for \(n\gg 0\). Now the result follows from the similar argument as in Theorem 4.3. ## 5. Weak Brauer-Thrall-II We need the following two results. **Lemma 5.1**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\). Let \(I\) be an \(\mathfrak{m}\)-primary ideal and \(\mathcal{F}=\{I_{n}\}\) where \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). If \(M,N\) and \(E\) are MCM modules and we have a \(T_{\mathcal{F}}\)-split sequence \(0\to N\to E\to M\to 0\). Assume \(G_{\mathcal{F}}(N)\) is Cohen-Macaulay. Then we have short exact sequence_ \[0\to G_{\mathcal{F}}(N)\to G_{\mathcal{F}}(E)\to G_{\mathcal{F}}(M)\to 0.\] _Furthermore, \(e^{\mathcal{F}}_{i}(E)=e^{\mathcal{F}}_{i}(N)+e^{\mathcal{F}}_{i}(M)\) for \(i=0,\ldots,d\)._ Proof.: Follows from an argument similar to [16, Lemma 6.3]. **Proposition 5.2**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\), \(I\) an \(\mathfrak{m}\)-primary ideal, and \(\mathcal{F}=\{\overline{I^{n}}\}_{n\geq 0}\). Assume the residue field \(k=A/\mathfrak{m}\) is either uncountable or a perfect field. Let \(M\) and \(N\) be MCM \(A\)-modules with \(G_{\mathcal{F}}(M)\) and \(G_{\mathcal{F}}(N)\) Cohen-Macaulay. If there exists only finitely many non-isomorphic MCM \(A\)-modules \(E\) with \(G_{\mathcal{F}}(E)\) Cohen-Macaulay and \(e^{\mathcal{F}}(E)=e^{\mathcal{F}}(N)+e^{\mathcal{F}}(M)\), then \(T_{\mathcal{F}}(M,N)\) has finite length (in particular \(\text{Ext}^{1}_{A}(M,N)\) has finite length)._ Proof.: Follows from an argument similar to [16, Theorem 7.1]. The following result is well-known. We indicate a proof for the convenience of the reader. **Lemma 5.3**.: _Let \((A,\mathfrak{m})\) be a Noetherian local ring and \(I\) be an ideal of \(A\). Set \(B=A[X]\) and \(J=(I,X)\) then \(\overline{J^{n}}=\sum_{i=0}^{n}\overline{I^{n-i}}X^{i}\)._ Proof.: Consider Rees algebra of \(I\), \(\mathcal{R}(I)=A[It]=A\oplus It\oplus I^{2}t^{2}\oplus\ldots\). Its integral closure in \(A[t]\) is \(\overline{\mathcal{R}(I)}=A\oplus\overline{I}t\oplus\overline{I^{2}}t^{2}\oplus\ldots\). By [2, Chapter 5, Exercise 9] we get that \(\overline{\mathcal{R}(I)}[X]\) is integral closure of \(\mathcal{R}(I)[X]\) in \(A[t][X]\). Comparing homogeneous components for all \(n\) we get \(\overline{(I,X)^{n}}=\sum_{i=0}^{n}\overline{I^{n-i}}X^{i}\). **Proposition 5.4**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension one and \(I\) an \(\mathfrak{m}\)-primary ideal. Set \(R=A[X]_{(\mathfrak{m},X)}\), \(J=(I,X)\), \(\mathcal{I}=\{\overline{I^{n}}\}_{n\geq 0}\) and \(\mathcal{J}=\{\overline{J^{n}}\}_{n\geq 0}\). Then there exists an MCM \(R\)-module \(E\) with \(G_{\mathcal{J}}(E)\) Cohen-Macaulay and dim \(\operatorname{Ext}^{1}_{R}(E,E)>0\)._ Proof.: Let \(M\) be an MCM \(A\)-module. Fix large enough \(n\) (say \(n_{0}\)), then it is easy to see \(N=\overline{I^{n_{0}}}M\) is MCM and \(G_{\mathcal{I}}(N)\) is Cohen-Macaulay. From lemma 5.3, we get \(G_{\mathcal{J}}(N\otimes R)=G_{\mathcal{I}}(N)[X]\). 
So \(G_{\mathcal{J}}(N\otimes R)\) is Cohen-Macaulay. From [3, Theorem A.11(b)], dim \(\operatorname{Ext}^{1}_{R}(N\otimes R,N\otimes R)>0\). **Theorem 5.5**.: _Let \((A,\mathfrak{m})\) be an analytically unramified Cohen-Macaulay local ring of dimension \(d\geq 1\), and \(I\) an \(\mathfrak{m}\)-primary ideal. Set \(R=A[X_{1},\ldots,X_{m}]_{(\mathfrak{m},X_{1},\ldots,X_{m})}\), \(J=(I,X_{1},\ldots,X_{m})\), \(\mathcal{I}=\{\overline{I^{n}}\}_{n\geq 0}\) and \(\mathcal{J}=\{\overline{J^{n}}\}_{n\geq 0}\). Also set \(S=\widehat{R}\) and \(\mathcal{K}=\{\overline{J^{n}}S\}_{n\geq 0}\). If \(A\) has an MCM module \(M\) with \(G_{\mathcal{I}}(M)\) Cohen-Macaulay. Then_ 1. \(M\otimes R\) _is an MCM_ \(R\)_-module with_ \(G_{\mathcal{J}}(M\otimes R)\) _Cohen-Macaulay and_ \(\dim\operatorname{Ext}^{1}_{R}(M\otimes R,M\otimes R)>0\)_._ 2. _We have_ \(\overline{J^{n}S}=\overline{J^{n}}S\) _for all_ \(n\geq 1\)_. Furthermore_ \(M\otimes S\) _is an MCM_ \(S\)_-module with_ \(G_{\mathcal{K}}(M\otimes S)\) _Cohen-Macaulay and_ \(\dim\operatorname{Ext}^{1}_{S}(M\otimes S,M\otimes S)>0\)_._ Proof.: (1) It is sufficient to prove the result for \(n=1\). So we can assume \(R=A[X]\). It is easy to see \(M\otimes R\) is MCM \(R\)-module and \(G_{\mathcal{J}}(M\otimes R)=G_{\mathcal{I}}(M)[X]\) (follows from Lemma 5.3). So \(G_{\mathcal{J}}(M\otimes R)\) is Cohen-Macaulay. From [3, Theorem A.11(b)], dim \(\operatorname{Ext}^{1}_{R}(M\otimes R,M\otimes R)>0\). (2) The assertion \(\overline{J^{n}S}=\overline{J^{n}}S\) for all \(n\geq 1\) follows from [8, 9.1.1]. For the rest observe that \(M\otimes_{A}S=(M\otimes_{A}R)\otimes_{R}S\). This gives \(\dim\operatorname{Ext}^{1}_{S}(M\otimes S,M\otimes S)>0\). Furthermore \(G_{\mathcal{K}}(S)\) is a flat extension of \(G_{\mathcal{J}}(R)\) with zero-dimensional fiber. Notice \[G_{\mathcal{K}}(M\otimes S)=G_{\mathcal{J}}(M\otimes R)\otimes_{G_{\mathcal{J }}(R)}G_{\mathcal{K}}(S).\] By Theorem [10, 23.3] the result follows. **Theorem 5.6**.: _Let \((A,\mathfrak{m})\) be a complete reduced Cohen-Macaulay local ring of dimension \(d\geq 1\) and \(I\) an \(\mathfrak{m}\)-primary ideal. Assume the residue field \(k=A/\mathfrak{m}\) is either uncountable or perfect. Set \(R=A[[X_{1},\ldots,X_{m}]]\), \(J=(I,X_{1},\ldots,X_{m})\), \(\mathcal{I}=\{\overline{I^{n}}\}_{n\geq 0}\), and \(\mathcal{J}=\{\overline{J^{n}}\}_{n\geq 0}\). If \(A\) has an MCM module \(M\) with \(G_{\mathcal{I}}(M)\) Cohen-Macaulay then \(R\) has infinitely many non-isomorphic MCM modules \(D\) with \(G_{\mathcal{J}}(D)\) Cohen-Macaulay and bounded multiplicity._ Proof.: Follows from 5.1, 5.2 and 5.5. ## 6. Some results about \(e_{A}^{T}()\) In this section we prove Theorem 1.15 (see Theorem 6.3). **Lemma 6.1**.: _Let \((A,\mathfrak{m},k)\) be a Cohen-Macaulay local ring and \(M,N\) be MCM \(A\)-modules. Let \(\alpha\) be \(T\)-split and \(\alpha^{\prime}\) be any extension, then \(e_{A}^{T}(\alpha+\alpha^{\prime})\leq e_{A}^{T}(\alpha^{\prime})\). Also, if char\((A)=p^{n}>0\), then \(e^{T}(\alpha+\alpha^{\prime})=e_{A}^{T}(\alpha^{\prime})\)._ Proof.: Let \(\alpha\) can be represented as \(0\to N\to E\to M\to 0\) and \(\alpha^{\prime}\) as \(0\to N\to E^{\prime}\to M\to 0\). Consider following pullback diagram From 3.12, \(\beta\) is \(T\)-split. So, \(e_{A}^{T}(E^{\prime\prime})=e_{A}^{T}(N)+e_{A}^{T}(E^{\prime})\). Now \(\alpha+\alpha^{\prime}\) can be written as \(0\to N\to Y\to M\to 0\) where \(Y=E^{\prime\prime}/S\) and \(S=\{(-n,n)\in E^{\prime\prime}|n\in N\}\). 
So we have following commutative diagram Here \(\delta\) is natural surjection. Now from the exact sequence \(\gamma:0\to N\to E^{\prime\prime}\xrightarrow{\delta}Y\to 0\) we get \(e_{A}^{T}(\gamma)=e_{A}^{T}(N)+e_{A}^{T}(Y)-e_{A}^{T}(E^{\prime\prime})\). Now we get \[e_{A}^{T}(\alpha+\alpha^{\prime}) =e_{A}^{T}(N)+e_{A}^{T}(M)-e_{A}^{T}(Y)\] \[=e_{A}^{T}(\alpha^{\prime})+e_{A}^{T}(E^{\prime})-e_{A}^{T}(Y)\] \[=e_{A}^{T}(\alpha^{\prime})+e_{A}^{T}(E^{\prime\prime})-e_{A}^{T} (N)-e_{A}^{T}(Y)\] \[=e_{A}^{T}(\alpha^{\prime})-e_{A}^{T}(\gamma)\] So, \(e_{A}^{T}(\alpha+\alpha^{\prime})\leq e_{A}^{T}(\alpha^{\prime})\). If char\((A)=p^{n}>0\), then we have \[e_{A}^{T}(\alpha^{\prime})\leq e_{A}^{T}((p^{n}-1)\alpha+\alpha^{\prime})\leq \ldots\leq e_{A}^{T}(\alpha+\alpha^{\prime})\leq e_{A}^{T}(\alpha^{\prime}).\] Note that \(p^{n}\alpha=0\) is split exact sequence. This implies \(e_{A}^{T}(\alpha+\alpha^{\prime})=e_{A}^{T}(\alpha^{\prime})\). Let \(\mathbb{N}\) be the set of non-negative integers. **Remark 6.2**.: If \(char(A)=p^{n}>0\) then we have a well defined function \([e_{A}^{T}]\colon\operatorname{Ext}_{A}^{1}(M,N)/T_{A}(M,N)\to\mathbb{N}\). If \(V\) is a vector-space over a field \(k\) then let \(\mathbb{P}(V)\) denote the projective space determined by \(V\). **Theorem 6.3**.: _(with hypotheses as in 6.1) Further assume \(A\) is of characteristic \(p>0\) and that \(A\) contains a field \(k\cong A/\mathfrak{m}\). If \(\operatorname{Ext}^{1}_{A}(M,N)\neq T_{A}(M,N)\) then the function \([e^{T}_{A}]\) defined in 6.2 factors as_ \[[\overline{e^{T}_{A}}]\colon\mathbb{P}(\operatorname{Ext}^{1}_{A}(M,N)/T_{A}(M,N))\to\mathbb{N}\setminus 0.\] Proof.: Let \(\alpha\in\operatorname{Ext}^{1}_{A}(M,N)\) be represented as \(0\to N\to E\to M\to 0\). Let \(r\in k^{*}\) and let \(r\alpha\) be represented as \(0\to N\to E^{\prime}\to M\to 0\). Consider the diagram Note \(\psi\colon E\to E^{\prime}\) is an isomorphism. It follows that \(e^{T}(\alpha)=e^{T}(r\alpha)\). The result follows. **Remark 6.4**.: All the results in this section is also true for \(e^{T}_{\mathcal{F}}()\). Same proof works in that case also. For the rest of this section we consider the following setup: \((A,\mathfrak{m})\) is a complete reduced CM local ring. Also assume \(A\) contains a field \(k\cong A/\mathfrak{m}\). Furthermore \(k\) is either uncountable or a perfect field. Let \(I\) be an \(\mathfrak{m}\)-primary ideal and let \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) be an \(I\)-admissible filtration with \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Let \(M,N\) be MCM \(A\)-modules and consider the function \[e^{T}_{\mathcal{F}}\colon\operatorname{Ext}^{1}_{A}(M,N)\to \mathbb{N}\] \[\alpha\mapsto e^{T}(\alpha).\] Notice \(e^{T}_{\mathcal{F}}(\alpha)\leq e^{T}_{\mathcal{F}}(M)+e^{T}_{\mathcal{F}}(N)\). So \(e^{T}_{\mathcal{F}}(\operatorname{Ext}^{1}_{A}(M,N))\) is a bounded set. If \(Z\) is a finite set then set \(|Z|\) denote its cardinality. Set \(Z_{\mathcal{F}}(M,N)=|e^{T}_{\mathcal{F}}(\operatorname{Ext}^{1}_{A}(M,N))|\). **Corollary 6.6**.: _(with hypotheses as in 6.5). Further assume \(k\) is a finite field and \(\operatorname{Ext}^{1}_{A}(M,N)\) is non-zero and has finite length as an \(A\)-module (and so a finite dimensional \(k\)-vector space). Set \(c(M,N)=|\mathbb{P}(\operatorname{Ext}^{1}_{A}(M,N))|\). 
Let \(I\) be any \(\mathfrak{m}\)-primary ideal and let \(\mathcal{F}=\{I_{n}\}_{n\in\mathbb{Z}}\) be an \(I\)-admissible filtration with \(I_{1}=\overline{I}\) and \(I_{n}=\overline{I^{n}}\) for \(n\gg 0\). Then \(Z_{\mathcal{F}}(M,N)\leq c(M,N)\)._ Proof.: We may assume \(T_{\mathcal{F}}(M,N)\neq\operatorname{Ext}^{1}_{A}(M,N)\). By 6.3 we get that \[Z_{\mathcal{F}}(M,N)\leq|\mathbb{P}(\operatorname{Ext}^{1}_{A}(M,N)/T_{ \mathcal{F}}(M,N))|.\] Note \(|\mathbb{P}(\operatorname{Ext}^{1}_{A}(M,N)/T_{\mathcal{F}}(M,N))|\) is bounded above by \(c(M,N)\). The result follows. ## 7. T-split sequences and AR-sequences The goal of this section is to prove the following result: **Theorem 7.1**.: _Let \((A,\mathfrak{m})\) be a Henselian Cohen-Macaulay local ring and let \(M\) be an indecomposable MCM \(A\)-module free on the punctured spectrum of \(M\). The following assertions are equivalent:_ 1. _There exists a_ \(T\)_-split sequence_ \(\alpha\colon 0\to K\to E\to M\to 0\) _with_ \(\alpha\) _non-split._ 2. _There exists a_ \(T\)_-split sequence_ \(\beta\colon 0\to V\to U\to M\to 0\) _with_ \(V\) _indecomposable and_ \(\beta\) _non-split._ 3. _The AR-sequence ending at_ \(M\) _is_ \(T\)_-split._ For definition of Auslander-Reiten (AR) sequences, see [19, Chapter 2]. From [19, Theorem 3.4], we know that for an indecomposable MCM module over \(A\), then there is an AR sequence ending in \(M\) if and only if \(M_{\mathfrak{p}}\) is free for all \(\mathfrak{p}\in\operatorname{Spec}^{0}(A)=\operatorname{Spec}(A)\setminus\{ \mathfrak{m}\}\) Before proving Theorem 7.1 we need the following well-known result. We give a proof for the convenience of the reader. **Lemma 7.2**.: _Let \(A\) be a Noetherian ring and \(N,M\) and \(E\) are finite \(A\)-module. Let \(N=N_{1}\oplus N_{2}\) and we have following diagram_ _for \(i=1,2\). Here \(p_{i}:N\to N_{i}\) is projection map for \(i=1,2\). If \(s\) is non-split then one of the \(s_{i}\) is non-split._ Proof.: Let \(s_{1}\) and \(s_{2}\) are split exact sequences. So we have \(g_{i}:M_{i}\to N_{i}\) for \(i=1,2\) such that \(g_{i}f_{i}=1_{N_{i}}\). Consider function \(g=(g_{1}\gamma_{1},g_{2}\gamma_{2}):M\to N_{1}\oplus N_{2}\). Let \((n_{1},n_{2})\in N\) then \[gf(n_{1},n_{2}) =(g_{1}\gamma_{1}f(n_{1},n_{2}),g_{2}\gamma_{2}f(n_{1},n_{2}))\] \[=(g_{1}f_{1}p_{1}(n_{1},n_{2}),g_{2}f_{2}p_{2}(n_{1},n_{2}))\] \[=(n_{1},n_{2})\] This implies \(g\) is a left inverse of \(f\), so \(s\) is split exact sequence. We now give Proof of Theorem 7.1.: The assertions (iii) \(\implies\) (ii) \(\implies\) (i) are clear. (i) \(\implies\) (ii). As \(A\) is Henselian the module \(K\) splits as a sum of indecomposable modules \(K=K_{1}\oplus K_{2}\oplus\cdots\oplus K_{r}\). The result follows from Lemma 7.2. (ii) \(\implies\) (iii). Let \(\beta:0\to V\to U\to M\to 0\) be \(T\)-split and \(\beta\) non-split. As \(V\) is indecomposable, we have following diagram \(\beta:0\)\(\gamma\)\(\gamma(M)\)\(\gamma\)\(\gamma(M)\)\(\gamma Fix \(n_{0}\geq\operatorname{red}(A)\). For \(j=1,\dots,n_{0}\) we have \[\operatorname{Tor}_{*}^{A}(A/\mathfrak{m}^{j},M)=\bigoplus_{i\geq 0}\operatorname{Tor }_{i}^{A}(A/\mathfrak{m}^{j},M)\text{ is *-Artinian }B=A[t_{1},\dots,t_{c}]\text{ module,}\] where \(t_{1},\dots,t_{c}\) are Eisenbud operators. 
Then for \(i\gg 0\) (say \(i\geq i_{0}\)) and for \(j=1,\dots,n_{0}\) we have following exact sequence \[\operatorname{Tor}_{i+2}^{A}(A/\mathfrak{m}^{j},M)\xrightarrow{\xi}\operatorname {Tor}_{i}^{A}(A/\mathfrak{m}^{j},M)\to 0.\] Here \(\xi\) is a linear combination of \(t_{1},\dots,t_{c}\). (see [6, Lemma 3.3]). We have following commutative diagram for \(i\geq i_{0}\) where \(\theta_{i}=\operatorname{Tor}_{i}^{A}(\alpha_{n_{0}},M)\). As \(\theta_{i}\) and \(\theta_{i+2}\) are isomorphism we get that the bottom row is also surjective. Iterating we get an exact sequence for all \(j\geq 1\) and for all \(i\geq i_{0}\), \[\operatorname{Tor}_{i+2}^{A}(A/\mathfrak{m}^{j},M)\xrightarrow{\xi} \operatorname{Tor}_{i}^{A}(A/\mathfrak{m}^{j},M)\to 0.\] Note \(\xi\) induces a chain map \(\xi\colon\mathbb{F}[2]\to\mathbb{F}\). As we have a surjection \[\operatorname{Tor}_{i+2}^{A}(A/\mathfrak{m},M)\xrightarrow{\xi}\operatorname {Tor}_{i}^{A}(A/\mathfrak{m},M)\to 0,\quad\text{for }i\geq i_{0},\] by Nakayama Lemma we have surjections \(F_{i+2}\xrightarrow{\xi}F_{i}\) for all \(i\geq i_{0}\) (say with kernel \(G_{i}\)). Notice we have a short exact sequence of complexes \[0\to\mathbb{G}_{\geq i_{0}}\to\mathbb{F}[2]_{\geq i_{0}}\xrightarrow{\xi} \mathbb{F}_{\geq i_{0}}\ \to 0.\] Thus we have surjections \(M_{i+2}\xrightarrow{\xi}M_{i}\) for all \(i\geq i_{0}\), say with kernel \(K_{i}\). We note that \(\mathbb{G}_{\geq i_{0}}\) is a free resolution of \(K_{i_{0}}\) and that \(K_{i}\) is (possibly upto a free summand) the \((i-i_{0})^{th}\) syzygy of \(K_{i_{0}}\). It follows that \(\operatorname{cx}K_{i}=\operatorname{cx}K_{i_{0}}\leq\operatorname{cx}M-1\). We have an exact sequence \(\alpha_{r}\colon 0\to K_{r}\to M_{r+2}\to M_{r}\to 0\) for all \(r\geq r_{0}\). Since \(M\) is indecomposable, \(M_{r}=\operatorname{Syz}_{r}^{A}(M)\) is also indecomposable for all \(r\geq 1\) (see [19, Lemma 8.17]). As \(\operatorname{cx}M\geq 2\) it follows that \(M_{r+2}\not\cong M_{r}\) for all \(r\geq 1\). It follows that \(\alpha_{r}\) is not split for all \(r\geq r_{0}\). By 8.2 it follows that for \(i\geq i_{0}\) we have an an exact seqquence \[0\to\operatorname{Tor}_{1}^{A}(A/\mathfrak{m}^{j},K_{i})\to\operatorname{Tor }_{1}^{A}(A/\mathfrak{m}^{j},M_{i+2})\to\operatorname{Tor}_{1}^{A}(A/ \mathfrak{m}^{j},M_{i})\to 0,\] for all \(j\geq 1\). Clearly this implies that \(\alpha_{i}\) is \(T\)-split. Notice \(M_{r}\) is free on \(\operatorname{Spec}^{0}(A)\) for all \(r\geq 1\). As \(\alpha_{r}\) is \(T\)-split, it follows from 7.1 that the AR-sequence ending at \(M_{r}\) is \(T\)-split for all \(r\geq r_{0}\) ## 9. \(T\)-split sequences on hypersurfaces defined by quadrics In this section we prove Theorem 1.6 (see Theorem 9.4). We also construct example 1.7 (see 9.5). **9.1**.: In this section \((Q,\mathfrak{n})\) is a Henselian regular local ring with algebraically closed residue field \(k=Q/\mathfrak{n}\) and let \(f\in\mathfrak{n}^{2}\setminus\mathfrak{n}^{3}\). Assume the hypersurface \(A=Q/(f)\) is an isolated singularity of dimension \(d\geq 1\). **Remark 9.2**.: It is well-known that as \(f\) is a quadric, the ring \(A\) has minimal multiplicity. It follows that \(e_{0}(A)=2\) and \(e_{1}(A)=1\). We also have that if \(M\) is MCM then \(N=\operatorname{Syz}_{1}^{A}(M)\) is Ulrich, i.e., \(\mu(N)=e_{0}(N)\) (furthermore \(e_{1}(N)=0\)). 
As \(A\) is also Gorenstein we get that any MCM \(A\)-module \(M\cong F\oplus E\) where \(F\) is free and \(E\) has no-free summands and is a syzygy of an MCM \(A\)-module; in particular \(E\) is Ulrich. The following results computes \(e^{T}(-)\) for MCM \(A\)-modules. We also give a sufficient condition for a short exact sequence to be \(T\)-split. **Proposition 9.3**.: _(with hypotheses as in 9.1) Let \(M,N,U,V\) be MCM \(A\)-modules with \(M,N\) having no free-summands. Then_ 1. \(e^{T}(M)=\mu(M)\)_._ 2. _Let_ \(U=L\oplus F\) _where_ \(F\) _is free and_ \(L\) _has no free summands. Then_ \(e^{T}(U)=\mu(L)\)_._ 3. _Let_ \(\alpha\colon 0\to N\to V\to M\to 0\)_. If_ \(\mu(V)=\mu(N)+\mu(M)\) _then_ 1. \(V\) _is Ulrich_ 2. \(\alpha\) _is_ \(T\)_-split._ Proof.: (1) Note \(\operatorname{Syz}_{1}^{A}(M)\) is also Ulrich. Using 9.2 we have \[e^{T}(M)=e_{1}(A)\mu(M)-e_{1}(M)-e_{1}(\operatorname{Syz}_{1}^{A}(M))=\mu(M).\] (2) Note \(e^{T}(U)=e^{T}(L)+e^{T}(F)=\mu(L)+0=\mu(L)\). (3) We have \[e_{0}(V) =e_{0}(M)+e_{0}(N),\] \[=\mu(M)+\mu(N)\quad\text{as $M,N$ are Ulrich},\] \[=\mu(V).\] In particular \(V\) is Ulrich. So \(V\) has no free summands. We have \[e^{T}(\alpha)=e^{T}(M)+e^{T}(N)-e^{T}(V)=\mu(M)+\mu(N)-\mu(V)=0.\] So \(\alpha\) is \(T\)-split. We now state and prove the main result of this section. **Theorem 9.4**.: _(with hypotheses as in 9.1) All but a finitely many AR-sequences of \(A\) are \(T\)-split._ Proof.: We may assume that \(A\) is of infinite CM representation type (i.e., there exists infinitely many mutually non-isomorphic indecomposable MCM \(A\)-modules) otherwise there is nothing to prove. The AR-quiver of \(A\) is locally finite graph, [19, 5.9]. It follows that for all but finitely many MCM indecomposable \(A\)-modules the middle term of the AR-sequence ending at \(M\) and \(\operatorname{Syz}_{1}^{A}(M)\) will not contain a free summand. Let \(M\) be such a indecomposable MCM \(A\)-module and let \(s\colon 0\to\tau(M)\to V\to M\to 0\) be the AR-sequence ending at \(M\). Then by [15, 7.11] we have \(\mu(V)=\mu(M)+\mu(\tau(M))\). By 9.3(3) it follows that \(s\) is \(T\)-split. We now give example of an AR-sequence which is not split. **Example 9.5**.: (with hypotheses as in 9.1) Let \(s\colon 0\to N\to E\to M\to 0\) be an AR-sequence such that \(E\) has a free summand. Then 1. \(s\) is NOT \(T\)-split. 2. If \(t\colon 0\to V\to U\to M\to 0\) is any non-split exact sequence of MCM \(A\)-modules then \(t\) is NOT \(T\)-split. Proof.: (1) Note \(\mu(N)\geq\mu(E)-\mu(M)\). Furthermore equality cannot hold for otherwise by Proposition 9.3 we will get \(E\) is Ulrich, a contradiction. Let \(E=L\oplus F\) with \(F\neq 0\) free and \(L\) has no free summands. We note that \[e^{T}(s) =e^{T}(N)+e^{T}(M)-e^{T}(E),\] \[=\mu(N)+\mu(M)-\mu(L)\] \[>\mu(N)+\mu(M)-\mu(E)>0.\] Thus \(s\) is NOT \(T\)-split. (2) This follows from Theorem 7.1. ## 10. An Application of \(T\)-split sequences in Gorenstein case In this section we prove Proposition 1.9 (see 10.6). We also prove Theorem 1.10 (see 10.9 and 10.10). **10.1**.: Let \((A,\mathfrak{m})\) be a Gorenstein local ring. Let \(\operatorname{CM}(A)\) denotes the category of MCM \(A\)-modules and \(\underline{\operatorname{CM}}(A)\) the stable category of \(\operatorname{CM}(A)\). 
Note that objects of \(\underline{\operatorname{CM}}(A)\) are same as the objects of \(\operatorname{CM}(A)\) and if \(M\) and \(N\) are MCM \(A\)-modules then \[\underline{\operatorname{Hom}}_{A}(M,N)=\frac{\operatorname{Hom}_{A}(M,N)}{ \{f:M\to N|\ f\text{ factors through a projective module}\}}.\] **10.2**.: (Co-syzygy) Let \((A,\mathfrak{m})\) be a Gorenstein local ring and \(M\) be an MCM \(A\)-module. Let \(M^{*}=\operatorname{Hom}(M,A)\), then \(M^{**}\cong M\). Suppose \(G\xrightarrow{\epsilon}F\to M^{*}\to 0\) is a minimal presentation of \(M^{*}\). Dualizing this, we get \(0\to M\to F^{*}\xrightarrow{\epsilon^{*}}G^{*}\). Co-syzygy of \(M\) can be defined as \(\operatorname{coker}(\epsilon^{*})\) and denoted as \(\Omega^{-1}(M)\). So we have exact sequence \(0\to M\to F\to\Omega^{-1}(M)\to 0\). Note that co-syzygy does not depend on the minimal presentation, that is if we take another minimal presentation \(G^{\prime}\xrightarrow{\epsilon^{\prime}}F^{\prime}\to M^{*}\to 0\) then \(\operatorname{coker}(\epsilon^{*})\cong\operatorname{coker}((\epsilon^{ \prime})^{*})\). **10.3**.: Let \(\Omega^{-1}(M)\) be the co-syzygy of \(M\), then we have following exact sequence \[0\to M\to F\to\Omega^{-1}(M)\to 0\] here \(F\) is a free \(A\)-module (see 10.2). For any \(f\in\operatorname{Hom}_{A}(M,N)\), we have following diagram Here the first sequare is a pushout diagram. **Remark 10.4**.: Note that \(\underline{\operatorname{CM}}(A)\) is an triangulated category with the projection of the sequence \(M\xrightarrow{f}N\xrightarrow{i}C(f)\xrightarrow{-p}\Omega^{-1}(M)\) in \(\underline{\operatorname{CM}}(A)\) as a basic triangles for any morphism \(f\). Exact triangles are triangles isomorphic to a basic triangle (see [4, 4.7]). Also note that for any short exact sequence \(0\to U\to V\to W\to 0\) in \(\operatorname{CM}(A)\), we have exact triangle \(U\to V\to W\to\Omega^{-1}(U)\) (see [13, Remark 3.3]). **10.5**.: Let \(M\) and \(N\) be MCM \(A\)-modules, the it is easy to show that \[\underline{\operatorname{Hom}}_{A}(M,N)\cong\operatorname{Ext}^{1}_{A}(\Omega ^{-1}(M),N)\text{ as $A$-modules}.\] In fact, the map \(\eta:f\mapsto\alpha_{f}\) is an isomorphism. It is clear that \(\eta\) is natural in \(M\) and \(N\). Let \(T_{A}(\Omega^{-1}(M),N)\) denotes the set of all \(T\)-split sequences in \(\operatorname{Ext}^{1}_{A}(\Omega^{-1}(M),N)\). If we denote \(\eta^{-1}(T_{A}(\Omega^{-1}(M),N))\) by \(\mathcal{R}(M,N)\). Then \(\eta\) induces following isomorphism \[\frac{\underline{\operatorname{Hom}}_{A}(M,N)}{\mathcal{R}(M,N)}\cong\frac{ \operatorname{Ext}^{1}_{A}(\Omega^{-1}(M),N)}{T_{A}(\Omega^{-1}(M),N)}.\] **Proposition 10.6**.: \(\mathcal{R}\) _is a relation on \(\underline{\operatorname{CM}}(A)\)._ Proof.: To prove that \(\mathcal{R}\) is a relation on \(\underline{\operatorname{CM}}(A)\) we need to show : if \(M_{1},M,N,N_{1}\in\operatorname{CM}(A)\), \(u\in\mathcal{R}(M,N)\), \(f\in\underline{\operatorname{Hom}}_{A}(M_{1},N)\) and \(g\in\underline{\operatorname{Hom}}_{A}(N,N_{1})\) then \(u\circ f\in\mathcal{R}(M_{1},N)\) and \(g\circ u\in\mathcal{R}(M,N_{1})\). We first prove \(u\circ f\in\mathcal{R}(M_{1},N)\). We have following diagram of exact triangles Note that the map \(h\) exists from the property (TR3) (see[18, Definition 10.2.1]). So we have following diagram of exact sequences where \(F\) and \(G\) are free \(A\)-modules. Now since \(u\in\mathcal{R}(M,N)\), this implies \(\alpha_{u}\) is \(T\)-split. So from [16, Proposition 3.8], \(\alpha_{g\circ u}\) is \(T\)-split. 
In other words \(g\circ u\in\mathcal{R}(M_{1},N)\). Note that the property (TR3) (see [18, Definition 10.2.1]) guarantees the existence of map \(\theta\). So we have following diagram of exact sequences where \(F^{\prime}\) and \(G^{\prime}\) are free \(A\)-modules. Now since \(u\in\mathcal{R}(M,N)\), this implies \(\alpha_{u}\) is \(T\)-split. So from [16, Proposition 3.9], \(\alpha_{u\circ f}\) is \(T\)-split. In other words \(u\circ f\in\mathcal{R}(M_{1},N)\). Next we prove \(g\circ u\in\mathcal{R}(M,N_{1})\). We have following diagram of exact triangles Note that the property (TR3) (see [18, Definition 10.2.1]) guarantees the existence of map \(\theta\). So we have following diagram of exact sequences where \(F^{\prime}\) and \(G^{\prime}\) are free \(A\)-modules. Now since \(u\in\mathcal{R}(M,N)\), this implies \(\alpha_{u}\) is \(T\)-split. So from [16, Proposition 3.8], \(\alpha_{g\circ u}\) is \(T\)-split. In other words \(g\circ u\in\mathcal{R}(M_{1},N)\). **10.7**.: Since \(\mathcal{R}\) is a relation on \(\underline{\text{CM}}(A)\), the factor category \(\mathcal{D}_{A}=\underline{\text{CM}}(A)/\mathcal{R}\) is an additive category. Note that objects of \(\mathcal{D}_{A}\) are the same as those of \(\underline{\text{CM}}(A)\), and for any \(M,N\in\text{Obj}(\mathcal{D}_{A})\), \(\text{Hom}_{\mathcal{D}_{A}}(M,N)=\underline{\text{Hom}}_{A}(M,N)/\mathcal{R }(M,N)\). Also note that \(\ell(\text{Hom}_{\mathcal{D}_{A}}(M,N))<\infty\) (see [16, Theorem 4.1]). Next we want to prove the main result of this section. But first we prove a lemma. **Lemma 10.8**.: _Let \((A,\mathfrak{m})\) be a Henselian Gorenstein local ring and \(M\) be an MCM \(A\)-module. Then \(\mathcal{R}(M,M)\subseteq\text{Jac}(\underline{\text{End}}_{A}(M))\) in \(\underline{\text{CM}}(A)\)._ Proof.: We prove this result in three cases: **Case 1:**\(M\) is indecomposable MCM module. Let \(u\in\mathcal{R}(M,M)\) and if possible assume that \(u\notin\operatorname{Jac}(\underline{\operatorname{End}}_{A}(M))\). This implies \(u\) is invertible. Now we have following diagram of exact sequences From here we get \(C(u)\cong F\). Also from the assumption \(\alpha_{u}\) is \(T\)-split. We know that \(e^{T}(\alpha_{u})=e^{T}(M)+e^{T}(\Omega^{-1}(M))-e^{T}(C(u))\). So \(e^{T}(\alpha_{u})=e^{T}(M)+e^{T}(\Omega^{-1}(M))\) because \(C(u)\cong F\). This implies \(e^{T}(\alpha_{u})>0\) but this is a contradiction because \(\alpha_{u}\) is \(T\)-split. Therefore, \(u\in\operatorname{Jac}(\underline{\operatorname{End}}_{A}(M))\). **Case 2:**\(M\cong E^{n}\) for some indecomposable MCM module \(E\). It is clear that \(\mathcal{R}(M,M)=\mathcal{R}(E^{n},E^{n})\cong M_{n}(\mathcal{R}(E,E))\). Here \(M_{n}()\) denotes \(n\times n\)-matrix. We also know that \(\underline{\operatorname{End}}(E^{n})\cong M_{n}(\underline{\operatorname{End }}(E))\) and \(\operatorname{Jac}(\underline{\operatorname{End}}(E^{n}))\cong M_{n}( \operatorname{Jac}(\underline{\operatorname{End}}(E)))\). From the case (1), \(M_{n}(\mathcal{R}(E,E))\subseteq M_{n}(\operatorname{Jac}(\underline{ \operatorname{End}}(E)))\). So, \(\mathcal{R}(M,M)\subseteq\operatorname{Jac}(\underline{\operatorname{End}}(M))\). **Case 3:**\(M\cong M_{1}^{r_{1}}\oplus\ldots\oplus M_{q}^{r_{q}}\) with each \(M_{i}\) indecomposable for all \(i=1,\ldots,q\) and \(M_{i}\not\cong M_{j}\) if \(i\neq j\) (since \(A\) is complete, Krull-Remak-Schmidt (KRS) holds for \(\underline{\operatorname{CM}}(A)\)). We can assume that \(q>1\) because \(q=1\) case follows from case (2). 
Now it is sufficient to prove the following claim. **Claim:** Let \(E\) and \(L\) be MCM \(A\)-module. Assume that \(E\cong E_{1}^{a_{1}}\oplus\ldots\oplus E_{n}^{a_{n}}\) and \(L\cong L_{1}^{b_{1}}\oplus\ldots\oplus L_{r}^{b_{r}}\) where \(E_{i}\) and \(L_{j}\) are distinct indecomposable MCM modules and \(E_{i}\not\cong L_{j}\) for \(i=1,\ldots,n\) and \(j=1,\ldots,r\). If the lemma is true for \(E\) and \(L\), then it is also true for \(N=E\oplus L\). **Proof of the claim:** We know that \[\underline{\operatorname{End}}_{A}(N)=\begin{pmatrix}\underline{ \operatorname{End}}_{A}(E)&\underline{\operatorname{Hom}}_{A}(L,E)\\ \underline{\operatorname{Hom}}_{A}(E,L)&\underline{\operatorname{End}}_{A}(L )\end{pmatrix},\] \[\operatorname{Jac}(\underline{\operatorname{End}}_{A}(N))=\begin{pmatrix} \operatorname{Jac}(\underline{\operatorname{End}}_{A}(E))&\underline{ \operatorname{Hom}}_{A}(L,E)\\ \underline{\operatorname{Hom}}_{A}(E,L)&\operatorname{Jac}(\underline{ \operatorname{End}}_{A}(L))\end{pmatrix}\] and \[\mathcal{R}(N,N)=\begin{pmatrix}\mathcal{R}(E,E)&\mathcal{R}(L,E)\\ \mathcal{R}(E,L)&\mathcal{R}(L,L)\end{pmatrix}\] Since the result is true for \(E\) and \(L\), this implies \(\mathcal{R}(N,N)\subseteq\operatorname{Jac}(\underline{\operatorname{End}}_{A }(N))\). **Theorem 10.9**.: _Let \((A,\mathfrak{m})\) be a Henselian Gorenstein local ring, \(M\) and \(N\) be MCM \(A\)-modules. Then \(M\cong N\) in \(\mathcal{D}_{A}\) if and only if \(M\cong N\) in \(\underline{\mathrm{CM}}(A)\)._ Proof.: Let \(f:M\to N\) be an isomorphism in \(\underline{\mathrm{CM}}(A)\). Then \(f\) is an isomorphism of \(M\) and \(N\) in \(\mathcal{D}_{A}\). For the other direction, suppose \(f:M\to N\) be an isomorphism in \(\mathcal{D}_{A}\). Then there exists an isomorphism \(g:N\to M\) such that \(g\circ f=\mu\) and \(\mu=1+\delta\) for some \(\delta\in\mathcal{R}(M,M)\). From Lemma 10.8, \(\delta\in\mathrm{Jac}(\underline{\mathrm{End}}_{A}(M))\). This implies \(\mu\) is an isomorphism in \(\underline{\mathrm{CM}}(A)\). Therefore, \(g\circ f\) is an isomorphism in \(\underline{\mathrm{CM}}(A)\). Similarly, \(f\circ g\) is also an isomorphism in \(\underline{\mathrm{CM}}(A)\). This implies \(M\cong N\) in \(\underline{\mathrm{CM}}(A)\). **Proposition 10.10**.: _Let \((A,\mathfrak{m})\) be a Henselian Gorenstein local ring. If \(M\) is indecomposable in \(\underline{\mathrm{CM}}(A)\) then it is indecomposable in \(\mathcal{D}_{A}\). Furthermore, \(\mathcal{D}_{A}\) is a Krull-Remak-Schmidt (KRS) category._ Proof.: Let \(M\) be an MCM \(A\)-module, then \(M\cong M_{1}^{a_{1}}\oplus\ldots\oplus M_{n}^{a_{n}}\) in \(\underline{\mathrm{CM}}(A)\) here each \(M_{i}\) is distinct indecomposable non free MCM \(A\)-module. For any indecomposable non free MCM module \(N\) we know \(\underline{\mathrm{End}}_{A}(N)\) is a local ring and \(\mathrm{End}_{\mathcal{D}_{A}}(N)=\underline{\mathrm{End}}_{A}(N)/\mathcal{R}( N,N)\). From lemma 10.8, \(\mathcal{R}(N,N)\subseteq\mathrm{Jac}(\underline{\mathrm{End}}_{A}(N))\). So, \(\mathrm{End}_{\mathcal{D}_{A}}(N)\) is a local ring. Thus \(N\) is indecomposable in \(\mathcal{D}_{A}\).
2302.05081
Steering undulatory micro-swimmers in a fluid flow through reinforcement learning
This work aims at finding optimal navigation policies for thin, deformable microswimmers that progress in a viscous fluid by propagating a sinusoidal undulation along their slender body. These active filaments are embedded in a prescribed, non-homogeneous flow, in which their swimming undulations have to compete with the drifts, strains, and deformations inflicted by the outer velocity field. Such an intricate situation, where swimming and navigation are tightly bonded, is addressed using various methods of reinforcement learning. Each swimmer has only access to restricted information on its configuration and has to select accordingly an action among a limited set. The optimisation problem then consists in finding the policy leading to the most efficient displacement in a given direction. It is found that usual methods do not converge and this pitfall is interpreted as a combined consequence of the non-Markovianity of the decision process, together with the highly chaotic nature of the dynamics, which is responsible for high variability in learning efficiencies. Still, we provide an alternative method to construct efficient policies, which is based on running several independent realisations of Q-learning. This allows the construction of a set of admissible policies whose properties can be studied in detail and compared to assess their efficiency and robustness.
Zakarya El Khiyati, Raphael Chesneaux, Laetitia Giraldi, Jeremie Bec
2023-02-10T06:44:54Z
http://arxiv.org/abs/2302.05081v1
# Steering undulatory micro-swimmers in a fluid flow through reinforcement learning ###### Abstract This work aims at finding optimal navigation policies for thin, deformable microswimmers that progress in a viscous fluid by propagating a sinusoidal undulation along their slender body. These active filaments are embedded in a prescribed, non-homogeneous flow, in which their swimming undulations have to compete with the drifts, strains, and deformations inflicted by the outer velocity field. Such an intricate situation, where swimming and navigation are tightly bonded, is addressed using various methods of reinforcement learning. Each swimmer has only access to restricted information on its configuration and has to select accordingly an action among a limited set. The optimisation problem then consists in finding the policy leading to the most efficient displacement in a given direction. It is found that usual methods do not converge and this pitfall is interpreted as a combined consequence of the non-Markovianity of the decision process, together with the highly chaotic nature of the dynamics, which is responsible for high variability in learning efficiencies. Still, we provide an alternative method to construct efficient policies, which is based on running several independent realisations of \(Q\)-learning. This allows the construction of a set of admissible policies whose properties can be studied in detail and compared to assess their efficiency and robustness. ## I Introduction A number of microorganisms, including bacteria and plankton, are natural examples of active, self-propelled particles. They often inspire the design of artificial devices used for industrial micro-manufacturing, toxic waste disposal, targeted drug delivery and localised medical diagnostics [1]. Recent technological developments in the use of micro-swimmers in medicine open new frontiers, such as microscopic-scale surgery directly inside the human body and medicine and drugs delivery in very precise places where their efficiency will be optimal. Much work has been devoted to designing adequate nano-robots and studying the way they can be propelled and controlled using an external magnetic field [2], in particular for _in-vivo_ conditions. Still, many questions remain open on how to optimise the displacement of these micro-swimmers, and in particular whether their behaviour is altered when they are embedded in complex flows comprising obstacles, walls, or having non-Newtonian properties. This is particularly important to design new strategies that will allow artificial swimmers to reach today inaccessible regions of the human body. Studying and optimising the movement of swimmers and micro-swimmers is generally addressed in two successive steps. The first is to find an appropriate _swimming strategy_ by choosing the composition, shape, or deformation that will lead to an efficient locomotion. The second step is to define a _navigation strategy_ that takes into account obstacles, fluctuations in the surrounding flow, and its geometry, with the aim to minimise the time needed or the energy used to reach a specific target. Studying swimming strategies at the microscopic level requires advanced tools to describe fluid-structure interactions [3; 4], to take a non-Newtonian rheology of the surrounding fluid into account [5], to model the hydrodynamics stresses due to the vicinity of walls [6]. Finding an effective strategy then relies on biomimetics [7; 8] or on solving costly problems of optimal control [9]. 
As a matter of fact, such swimming issues are most of the time addressed in situations where the surrounding flow is at rest. This is justified by the complexity and the computational costs that would be required to accurately model the intricate fluid-structure interactions occurring in a chaotic or turbulent medium. Regarding navigation problems, there is an increasing interest in considering complicated carrier flows (see [10] for a recent review). The swimming mechanisms are often oversimplified and one rather focuses on how to adjust macroscopic active features of the swimmers in order to optimise their long-term displacement. Under such conditions, the use of machine learning techniques has proved efficiency [11]. Reinforcement learning has for instance been used to address navigation in a turbulent flow and to construct strategies that allow swimmers to find optimal paths to their targets in such a chaotic and fluctuating environment [12; 13; 14; 15; 16; 17; 18]. Navigation problems have also been studied from different perspectives such as finding new paths in the presence of obstacles that can be modelled as barriers of potential [15]. As to approaches that use deep reinforcement learning, they demonstrated successes in various applications, such as terrain-adaptive dynamic locomotion [19] or real-world manipulation tasks [20]. Here we want to address the situation where locomotion and navigation are tightly dependent on each other. Our goal is to show the feasibility of using machine learning approaches for a mesoscopic model of swimmer, and in particular to understand if such approaches are able, not only to make the swimmer move, but also to have it at the same time navigate a complex environment. The swimmers are assumed to be simple, deformable, inextensible thin filaments whose interactions with the fluid are explicitly described by the slender-body theory. Among the different types of swimming, we have chosen wave locomotion which is a self-propulsion strategy that re lies on the generation and propagation of an undulation along the swimmer [21]. This is a relatively simple, but remarkably robust technique that builds on the interactions between the swimmer and the fluid and appears in a variety of swimming strategies observed in nature. We consider the problem where such swimmers are aiming at moving as fast as possible in a given direction, being at the same time embedded in a space-periodic, time-stationary, incompressible fluid flow that produces headwinds and deformations hindering their mobility. We find that in such settings, the undulatory swimmers progress efficiently only if they follow a policy that prescribes different actions to be performed depending on their configuration. We focus on a simple, paradigmatic case: The actions and observations of the environment by the swimmer are both chosen from discrete sets that consist, respectively, of swimming either horizontally or vertically with different amplitudes and having sparse information on its orientation and the local direction of the flow. We look for optimal policies for this partially-observable Markov decision process, by applying and comparing various algorithms of reinforcement learning, ranging from the popular \(Q\)-learning technique to approximation methods (differential SARSA and Actor-Critic). We find that these approaches do not provide satisfactory results: Either they do not converge, or if they do so, they require prohibiting long times. 
We propose an alternative method that can be seen as belonging to the class of _competitive_\(Q\)-learning approaches. It builds on the observation that, because of the highly chaotic character of the dynamics, individual realisations of simple, deterministic \(Q\)-learning are able to identify, quite quickly, a diversity of policies that lead to a reasonable displacement of the swimmer. The analysis of these admissible strategies can then be easily refined and systematised in order to rank them and select the most efficient ones. The advantage of this method is that it provides a short list of policies whose robustness can be tested and compared by varying the problem setting, for instance, the physical attributes of the swimmer (length, elasticity) or the properties of the surrounding flow. The paper is organised as follows. Section II introduces the swimmer model and reports results on how the efficiency of its locomotion depends on its physical properties. In Section III, we describe the outer flow and formulate the navigation problem in terms of discrete observations and actions. We also show that a policy is needed for the swimmer's displacement and introduce a naive strategy that allows it. Section IV is dedicated to a detailed comparison of various reinforcement learning techniques, leading to introduce the competitive \(Q\)-learning approach described above. Results on the performances and robustness of the short-listed policies are reported in Section V, including trials performed in unsteady flows that are solving the Navier-Stokes equation. Finally, Section VI gathers concluding remarks and perspectives. ## II A model of undulatory threadlike swimmer ### Dynamics of deformable slender bodies We consider elongated, flexible, inextensible swimmers. We moreover assume that they are very thin, meaning that their cross-section diameter \(d\) is much smaller than their length \(\ell\). This leads to describe their interactions with the surrounding viscous fluid in terms of the slender-body theory [22]. The swimmers are embedded in an incompressible flow whose velocity field is denoted by \(\mathbf{u}(\mathbf{x},t)\). We neglect the swimmers feedback onto this prescribed flow, which is justified in the limit when swimmers are very thin and dilute. The conformation of an individual swimmer at time \(t\) is given by a curve \(\mathbf{X}(s,t)\) parametrised by its arc-length \(s\in[0,\ell]\). We neglect the swimmer's inertia, so that its dynamics is given by equating to \(0\) the sum of the forces that act on it, namely \[-\zeta\,\mathbb{R}\left[\partial_{t}\mathbf{X}-\mathbf{u}(\mathbf{X},t)\right] +\;\partial_{s}(T\partial_{s}\mathbf{X})\] \[-K\,\partial_{s}^{4}\mathbf{X}+\mathbf{f}(s,t)=0. \tag{1}\] This equation of motion, which corresponds to the overdamped Cosserat equation, is the same as that obtained by resistive force theory to describe bio-filaments [23]. The first term on the left-hand side involves the drag coefficient \(\zeta=8\pi\mu/[2\log(\ell/d)-1]\) (with \(\mu\) the fluid dynamic viscosity) and the local Oseen resistance tensor \(\mathbb{R}=\mathbb{1}-(1/2)\,\partial_{s}\mathbf{X}\,\partial_{s}\mathbf{X}^{\top}\). This expression of the force exerted by the fluid assumes that, despite an arbitrary length, the fibre's thickness is so small that its perturbation on the flow has a vanishingly small Reynolds number, whence a linear but anisotropic drag. The second force appearing in Eq. (II) is the tension. 
Its amplitude \(T\) results from the inextensibility constraint \(|\partial_{s}\mathbf{X}(s,t)|=1\), valid at all time \(t\) and all position \(s\) along the swimmer. The third term is the bending elasticity force and depends on the swimmer's flexural rigidity \(K\) (product of Young's modulus and inertia). The last term, denoted by \(\mathbf{f}\), is a prescribed internal force that accounts for the _active behaviour_ of the swimmer responsible for its locomotion. Equation (II) is associated with the free-end boundary conditions \(\partial_{s}^{2}\mathbf{X}(s,t)=0\) and \(\partial_{s}^{3}\mathbf{X}(s,t)=0\) at the swimmer's extremities \(s=0\) and \(\ell\). The tension itself satisfies a second-order differential equation obtained by imposing \(\partial_{t}|\partial_{s}\mathbf{X}|^{2}=0\) with the boundary conditions \(T(s,t)=0\) at \(s=0\) and \(\ell\). In the absence of active force (\(\mathbf{f}=0\)), the swimmer is just a passive, flexible but inextensible fibre, whose dynamics depends on two non-dimensional parameters. One is given by the ratio \(\ell/L\) between the fibre's length \(\ell\) and the characteristic spatial scale \(L\) of the fluid flow. It characterises to which extent the fibre samples the fluid flow length scales and monitors geometrical interactions with surrounding structures and eddies [24; 25]. The other parameter is \((U\zeta/KL)^{1/4}\ell\), where \(U\) is a typical magnitude of the fluid velocity. It measures the fibre's flexibility and in particular its likeliness to be bent or buckled by the flow [26; 27; 28]. The larger it is, the more deformable is the fibre when it is subject to shear or compression. ### The undulatory swimming procedure We focus on swimmers that move by propagating a sinusoidal plane wave along their body. This undulation is assumed to be applied through the active body force \(\mathbf{f}\) appearing in the dynamical equation (1). The swimmers are thus assumed to have the ability to adapt their curvature along their body, as in the case of nematodes [29; 30]. Such settings are somewhat different from the beating of cilia or flagella, for which it is rather a time-periodic boundary data that is imposed to a flexible beating appendage, as in the case of sperm cells [31; 32]. We choose here to write the active force as \[\mathbf{f}(s,t)=A\,\zeta\,\nu\,\ell\,\cos(2\pi\,k\,s/\ell-\nu\,t)\,\mathbf{p} \tag{2}\] where \(\mathbf{p}\) is a unit vector in a direction orthogonal to that in which the swimmer is expected to move. The wave has frequency \(\nu\) and wavenumber \(2\pi k/\ell\) where \(k\) is an integer. To ensure self-propulsion, we impose that the force \(\mathbf{f}\) is not a global source of momentum for the swimmer, namely that \(\int\mathbf{f}\mathrm{d}s=0\), justifying why the wavenumber has to be chosen as a multiple of \((2\pi/\ell)\). The strength of the active force is controlled by the dimensionless amplitude \(A\). The resulting swimming speed in the \(\mathbf{p}^{\perp}\) direction, which is hereafter denoted by \(V_{\mathrm{swim}}\), non-trivially depends on the forcing parameters and the physical properties of the swimmer. To our knowledge, there is at present no analytic expression for \(V_{\mathrm{swim}}\), even in the absence of external fluid flow (\(\mathbf{u}=0\)). This can be explained by the intricate role played by inextensibility and tension and the imposed free-end boundary conditions that prevent from obtaining an explicit solution for the fibre conformation \(\mathbf{X}\) for this force. 
Still, when rescaling spatial scales by the swimmer's length \(\ell\) and time scales by the wave frequency \(\nu^{-1}\), one finds that \(V_{\mathrm{swim}}=\ell\nu\,\Psi_{k}(A,\mathcal{F})\), where \(\mathcal{F}=(\zeta\nu/K)^{1/4}\ell\) is a non-dimensional measure of the swimmer's flexibility under the action of the active force and the \(\Psi_{k}\)'s are non-dimensional functions indexed by the wavenumber \(k\). To obtain their behaviour, we resort to numerics.

To set our physical parameters and better understand how the swimmers respond to activation, we have performed numerical simulations of the overdamped Cosserat equation (1) for isolated fibres in a fluid flow at rest. We use the second-order, centred finite-difference scheme of [33] with \(N=201\) to \(801\) grid-points along the fibre's arc-length. The inextensibility constraint is enforced by a penalisation method. Time marching uses a second-order semi-implicit Adams-Bashforth method with time step ranging from \(\delta t=10^{-3}\) to \(10^{-4}\). We have performed several simulations varying the forcing amplitude, its wavenumber, and the swimmer's bending elasticity. After transients, the swimmer, which is initialised in a straight configuration, develops an undulating motion corresponding to a travelling wave propagating from its head (\(s=0\)) to its tail (\(s=\ell\)). Once this periodic regime is attained, we measure the time required for its displacement over several lengths \(\ell\) in order to evaluate the asymptotic swimming speed \(V_{\mathrm{swim}}\).

The dependence of the swimming speed upon the amplitude parameter \(A\) is shown in Fig. 1(a), for different wavenumbers \(k\) and a fixed dimensionless flexibility \(\mathcal{F}\). Several representative configurations of the swimmer are also shown, with a dot indicating its head (\(s=0\)).

Figure 1: Swimming speed in the absence of a fluid velocity field, **(a)** as a function of the forcing amplitude, for flexibility \(\mathcal{F}=15\) and three different values of the wavenumber (\(k=2\), \(3\), and \(4\), as labelled), and **(b)** as a function of the swimmer's flexibility \(\mathcal{F}\), for \(k=2\) and three values of the forcing dimensionless amplitude \(A\).

At small forcing amplitudes, the undulation of the swimmer is very close to the imposed wave and the swimming speed increases quadratically. This behaviour can be obtained from a linear expansion of Eq. (1) at \(A\ll 1\). To leading order the swimmer is aligned with \(\mathbf{p}^{\perp}\), the unit vector orthogonal to the force, and it moves along this direction. The projection of its position can thus be expanded as \(\mathbf{p}^{\perp}\cdot\mathbf{X}=-s+X_{1}^{\prime}\) with \(X_{1}^{\prime}\ll 1\). In the transverse direction, one gets from Eq. (1) that \(X_{2}^{\prime}=\mathbf{p}\cdot\mathbf{X}=\mathcal{O}(A)\). The inextensibility constraint reads \(|\partial_{s}\mathbf{X}|^{2}=(1-\partial_{s}X_{1}^{\prime})^{2}+(\partial_{s}X_{2}^{\prime})^{2}=1\), implying that the longitudinal perturbation \(X_{1}^{\prime}\) is of the order of \((X_{2}^{\prime})^{2}\). This indeed implies that \(V_{\mathrm{swim}}\sim\partial_{t}X_{1}^{\prime}=\mathcal{O}(A^{2})\). This quadratic growth saturates for \(A\approx 0.1\) to \(0.2\), where the swimming speed attains a maximum. This optimal speed slowly decreases and shifts toward larger values of \(A\) when \(k\) increases.
One consequently observes that achieving a given swimming speed requires more energy, or even becomes impossible, when the wavenumber of the undulation is taken larger. Beyond this maximum, swimming becomes less and less efficient at larger forcing amplitudes. At such values the swimmer's distortion is highly non-linear, and bending elasticity becomes important and induces significant dissipation. Figure 1(b) represents again \(V_{\mathrm{swim}}\), but this time as a function of the non-dimensional flexibility \(\mathcal{F}\), for \(k=2\) and three different forcing amplitudes, chosen before, at, and after the maximum. The swimming speed attains a maximum at intermediate values of \(\mathcal{F}\). When too stiff, the swimmer is not able to develop any significant undulation, as the power input from the active force is dissipated by bending elasticity. At very large values of the flexibility, the swimmer is conversely too limp and energy is dissipated through viscous drag. Optimal locomotion is attained when the two dissipative mechanisms balance.

This preliminary study of undulatory swimming in the absence of an external flow allows us to properly choose the physical parameters that will be considered. Hereafter we focus on the forcing wavenumber \(k=2\), the flexibility is chosen to be \(\mathcal{F}=15\), and the forcing amplitudes are picked below the saturation of swimming efficiency, _i.e._ \(A\lesssim 0.15\).

## III Statement of the navigation problem

We consider the two-dimensional navigation problem, which consists in having the swimmer move as fast as possible in the \(x_{1}>0\) direction in the presence of a prescribed external fluid flow. In Sec. III.1, after introducing the model flow, we demonstrate that displacement can only occur if the swimmer follows a strategy. We then present in Sec. III.2 the observations and actions that can be used by the swimmer to control its displacement, and we formulate the optimisation problem. We finally introduce in Sec. III.3 a "naive" strategy and evaluate its performance, with the aspiration that the reinforcement-learning methods applied in Sec. IV can outperform it.

### Swimming in a cellular flow

To design a navigation strategy, we consider an undulatory swimmer that is embedded in a two-dimensional cellular flow. More specifically, we prescribe the outer fluid velocity to be \(\mathbf{u}=\nabla^{\perp}\Psi=(-\partial_{2}\Psi,\partial_{1}\Psi)\) with the stream function taking the simple periodic form \(\Psi(\mathbf{x},t)=(L\,U/\pi)\,\cos(\pi\,x_{1}/L)\,\cos(\pi\,x_{2}/L)\). The spatial domain is hence covered by a tiling of cells mimicking eddies. Their size \(L\) is chosen to be of the same order of magnitude as the fibre length \(\ell\). The velocity field has an amplitude \(U\) to be compared to the swimming velocity \(V_{\mathrm{swim}}\) introduced in the previous section. Such a two-dimensional flow is a stationary solution of the incompressible Euler equations and is used, for instance, to model the convection cells present in steady Rayleigh-Bénard convection. It is often employed to study the effects of fluid shear and rotation on transport and mixing. It moreover has the convenience of being easily reproducible in experiments [34]. As seen later, even if the motion of tracers in such a flow is not chaotic, the dynamics of swimmers can be. Our aim is to maximise the swimmer's displacement toward the \(x_{1}>0\) direction.
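For reference, the cellular flow entering the navigation problem can be evaluated as in the short sketch below (an illustrative Python implementation with placeholder parameter values, not the authors' code); it simply differentiates the stream function given above.

```
import numpy as np

L, U = 1.0, 0.025   # cell size and flow amplitude (placeholder values)

def cellular_velocity(x1, x2):
    """Velocity (u1, u2) = (-dPsi/dx2, dPsi/dx1) of the steady cellular flow."""
    u1 = U * np.cos(np.pi * x1 / L) * np.sin(np.pi * x2 / L)
    u2 = -U * np.sin(np.pi * x1 / L) * np.cos(np.pi * x2 / L)
    return u1, u2
```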
When using the basic swimming procedure, that is to say when the fibre is made to swim with the force (2) constantly applied along the direction \(\mathbf{p}=\mathbf{e}_{2}\), one does not observe any long-term net displacement. We have indeed performed a set of numerical simulations in which the swimmer is initialised in a straight configuration, with its head always oriented toward \(x_{1}>0\), varying its initial angle with the horizontal direction. Unless otherwise stated, we always use a discretisation of the swimmer with \(N=201\) grid-points and a time step \(\delta t=10^{-3}\). Performance is then monitored by \[\bar{x}_{1}(t)=\mathbf{e}_{1}\cdot\bar{\mathbf{X}}(t)=\frac{1}{\ell}\int_{0}^{\ell}\mathbf{e}_{1}\cdot\mathbf{X}(s,t)\,\mathrm{d}s, \tag{3}\] _i.e._ by the horizontal displacement of the swimmer's centre of mass \(\bar{\mathbf{X}}\).

Figure 2a reports the evolution of the displacement of swimmers initialised with various initial orientations. After crossing a few cells, they systematically get trapped on rather stable cyclic orbits, preventing any further displacement. We identify two types of cyclic trap, which are illustrated in Fig. 2b. In the case shown in blue, the swimmer is oriented in the wrong direction (towards \(x_{1}<0\)) and swims in a counterflow that pushes it to the right and exactly compensates its locomotion. The position of its centre of mass barely changes during an undulation period. In the second case, shown in red, the swimmer alternately swims to the left, is rotated by the flow, swims to the right, then changes direction again, and so on. The mean abscissa \(\bar{x}_{1}(t)\) performs in that case a cyclic motion with an amplitude \(\simeq 1.6\,L\) and a period corresponding to approximately \(300\) forcing periods. The black line shows the position \(\bar{\mathbf{X}}(t)\) of the swimmer's centre of mass sampled over more than \(30\) cycles. Actually, it does not exactly form a closed loop and tiny deviations can be observed from one cycle to the other. Despite this, such a cyclic motion remains stable and persists for hundreds of cycles.

Note that these simulations indicate a very sensitive dependence upon the swimmer's initial orientation, as a tiny variation of the initial angle can lead the swimmer to end up in distant cells of the flow and in different configurations. This sensitivity is a hallmark of chaotic behaviour. However, it also indicates that the swimmers' dynamics is not ergodic when they continuously undulate in such a flow. Hence the swimmers do not show any net displacement if they just follow their basic swimming procedure without any further strategy. Moreover, an adequate navigation policy should be able to prevent, or at least destabilise, the two kinds of trap that were identified. Such an observation can be used to guess adequate minimal observations and actions that should be accounted for in the swimmer's decision process.

### The optimisation problem

Our problem is to optimise navigation for a swimmer by controlling the parameters of the actuating force based on the current state of the swimmer. This problem is typically studied using the formalism of _Markov decision processes_ (MDPs), which assumes that the _state_ of the system is fully observable.
This requires grabbing information that lives, in principle, in the infinite-dimensional set of all two-dimensional curves \(s\mapsto\mathbf{X}(s,t)\) of length \(\ell\) and, in numerics, in the \((N+1)\)-dimensional manifold of \(\mathbb{R}^{2N}\) formed by the attainable discretised configurations (\(N\) being the number of points used to discretise the swimmer's arc-length). We hereafter denote by \(\mathcal{S}\) this set of states. Because of the high dimensionality of \(\mathcal{S}\), a full description of the swimmer's state is clearly not possible, either in numerics or in practical applications. Instead of assuming full information on the state \(\sigma\in\mathcal{S}\), we consider that only minimalistic information is available. This problem falls under the category of _partially-observable Markov decision processes_ (POMDPs), where the observations of the _agent_ -- the swimmer -- are not sufficient to infer the true state of the system. As a result, optimal decision strategies must rely on a limited amount of data, making the problem even more challenging. We denote by \(\mathcal{O}\) the set of all possible _observations_ \(\omega\).

We infer from the previous section that the swimmer requires information on two features of its state: whether or not it is rightly oriented, and whether the fluid velocity helps or hinders its displacement towards \(x_{1}>0\). More specifically, the first property is deduced from the sign of \(X_{1}(0,t)-\bar{x}_{1}(t)\), namely whether the swimmer's head is located on the right (\(\omega=0,1,2\)) or on the left (\(\omega=3,4,5\)) of its centre of mass. The second property is obtained from the horizontal component \(u_{\rm h}=\mathbf{e}_{1}\cdot\mathbf{u}(\mathbf{X}(0,t),t)\) of the fluid velocity at the swimmer's head. Three cases are distinguished: either \(u_{\rm h}<-u_{0}\) and the swimmer feels a headwind (\(\omega=0,3\)), or \(u_{\rm h}>u_{0}\) and it feels a tailwind (\(\omega=2,5\)), or \(|u_{\rm h}|<u_{0}\) and it feels no significant wind (\(\omega=1,4\)). Here \(u_{0}\) is a threshold parameter that we fix to \(u_{0}/U=1/5\). This makes a total of six possible observations, which are illustrated and numbered in Fig. 3(a), so that \(\mathcal{O}=\{0,1,2,3,4,5\}\).

Figure 3: **(a)** The discretisation of observations depends both on the swimmer's orientation, which can be towards positive or negative abscissae, and on the strength of the horizontal fluid velocity at its head \(u_{\rm h}\), which divides the flow into regions of three different kinds. **(b)** The discretisation of actions sets whether the swimmer should propagate an undulation in the horizontal or vertical direction, and with which amplitude \(A\).

Figure 2: Swimmers continuously undulating in the vertical direction without any specific strategy. The parameters are here \(\mathcal{F}=15\), \(U=0.025\ell\nu\) and \(\ell/L=1\). **(a)** Displacement along the horizontal direction \(x_{1}\) as a function of time for initially-straight swimmers released with various angles with the \(x_{1}\) axis. **(b)** Two instances of trapped swimmers: the blue one is oriented toward \(x_{1}<0\) and is stuck between two cells where it swims against the flow; the red one performs a cycle across several cells, during which it is tumbled back and forth by the flow. The trajectory of the swimmer's centre of mass is shown as a black line. The fluid vorticity \(\omega=\partial_{1}u_{2}-\partial_{2}u_{1}\) is represented as coloured contour lines.
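As a concrete illustration of this discretisation, the observation map can be sketched as follows (an illustrative Python helper, not the authors' code; its arguments stand for quantities extracted from the simulated swimmer).

```
def observe(head_x1, com_x1, u_head, u0):
    """Map the swimmer's configuration onto one of the six discrete observations.

    head_x1 -- abscissa X_1(0, t) of the swimmer's head
    com_x1  -- abscissa of its centre of mass
    u_head  -- horizontal fluid velocity u_h at the head
    u0      -- wind threshold (fixed to U/5 in the text)
    """
    # wind index: 0 = headwind, 1 = no significant wind, 2 = tailwind
    if u_head < -u0:
        wind = 0
    elif u_head > u0:
        wind = 2
    else:
        wind = 1
    # observations 0-2: head to the right of the centre of mass; 3-5: head to the left
    return wind if head_x1 > com_x1 else 3 + wind
```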
The various _actions_ that the swimmer can take are illustrated in Fig. 3(b). Seven choices are possible, consisting in doing nothing (in black, \(\alpha=3\)) or applying the active force either in the horizontal (\(\mathbf{p}=\mathbf{e}_{1}\), in red, \(\alpha=0,1,2\)) or in the vertical (\(\mathbf{p}=\mathbf{e}_{2}\), in blue, \(\alpha=4,5,6\)) direction, choosing among three possible amplitudes: \(A=\frac{1}{3}A_{0}\) (\(\alpha=2,4\)), \(A=\frac{2}{3}A_{0}\) (\(\alpha=1,5\)), or \(A=A_{0}\) (\(\alpha=0,6\)), where the base non-dimensional amplitude is fixed to \(A_{0}=0.08\). The set of all possible actions is again discrete and denoted \(\mathcal{A}=\{0,1,2,3,4,5,6\}\).

We assume that the swimmer observes its environment at discrete times \(t_{n}=n\Delta t\) with \(n\in\mathbb{N}\). We choose the time step \(\Delta t\) smaller than all physical timescales (in practice, we fix \(\Delta t=0.2\,\nu^{-1}\)). A navigation strategy consists in following a _policy_ \(\pi\), which associates to each couple \((\alpha_{n},\omega_{n})\in\mathcal{A}\times\mathcal{O}\) a probability \(\pi(\alpha_{n}|\omega_{n})\) of choosing the action \(\alpha_{n}\) having observed \(\omega_{n}\) at time \(t_{n}\). A deterministic policy corresponds to having \(\pi(\alpha|\omega)=1\) for \(\alpha=\alpha_{\pi}(\omega)\) and \(\pi(\alpha|\omega)=0\) otherwise. Finding an optimal strategy consists in finding the policy \(\pi_{\star}\) that maximises a given reward over time.

To formally define our POMDP we use the tuple \((\mathcal{S},\mathcal{A},\mathcal{O},R,T,\Omega)\), where \(\mathcal{S}\), \(\mathcal{A}\), and \(\mathcal{O}\) are the state, action, and observation sets introduced above. The decision process also depends on the reward function \(R\), the transition function \(T\), and the observation function \(\Omega\). The reward function \(R\) maps the current state \(\sigma_{n}\in\mathcal{S}\) and action \(\alpha_{n}\in\mathcal{A}\) to a real number measuring the benefit of having chosen this action. As we are interested in maximising the motion of the swimmer to the right, the chosen reward is the horizontal displacement of its centre of mass, \(R(\sigma_{n},\alpha_{n})=\bar{x}_{1}(t_{n+1})-\bar{x}_{1}(t_{n})\). The transition function \(T\) maps the current state and the action taken by the swimmer to the next state: \(\sigma_{n+1}=T(\sigma_{n},\alpha_{n})\). Such a function clearly exists because the full dynamics is deterministic and Markovian. Finally, the observation function \(\Omega\) maps the state to the observation sensed by the swimmer: \(\omega_{n}=\Omega(\sigma_{n})\). A given policy \(\pi\) defines a (possibly stochastic) flow on \(\mathcal{S}\): \(\sigma_{n}\mapsto\sigma_{n+1}=T(\sigma_{n},\alpha_{n})\) with \(\alpha_{n}\) chosen with probability law \(\pi(\cdot|\Omega(\sigma_{n}))\). The policy thus fully determines the sequence \(\{(\sigma_{n},\alpha_{n}),n>0\}\) for a given \(\sigma_{0}\).

We aim at finding a policy \(\pi\) that maximises the long-term displacement of the swimmer towards positive abscissae. Formalising this optimisation problem requires introducing an adequate objective function. One could naively think of maximising the actual asymptotic displacement \(\lim_{N\to\infty}\left[\bar{x}_{1}(t_{N})-\bar{x}_{1}(t_{0})\right]=\sum_{n=0}^{\infty}R(\sigma_{n},\alpha_{n})\). The infinite-horizon sum is however expected to diverge, because we seek policies leading to effective displacement.
Such a pitfall is usually circumvented in MDPs by introducing a discount factor \(\gamma\) to ensure convergence. One then maximises the _discounted return_ \[\mathcal{R}^{\rm disc}[\pi]=\sum_{n=0}^{\infty}\mathrm{e}^{-\gamma t_{n}}R(\sigma_{n},\alpha_{n}). \tag{4}\] The discount factor \(\gamma\) attributes more importance to immediate rewards than to those obtained in a distant future. The choice of this parameter is largely problem-dependent and can have a significant impact on the learned policy. As seen later, we use such a reward in our implementation of \(Q\)-learning (Sec. IV.1). Still, as discussed in [35], using a discounted reward can be problematic for POMDPs. One can alternatively maximise the so-called _differential return_ \[\mathcal{R}^{\rm diff}[\pi]=\sum_{n=0}^{\infty}\bigl(R(\sigma_{n},\alpha_{n})-\bar{R}[\pi]\bigr),\quad\text{where }\bar{R}[\pi]=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N}\langle R(\sigma_{n},\alpha_{n})\rangle. \tag{5}\] This formulation weights all rewards equally. It makes use of the mean reward \(\bar{R}[\pi]\), which is averaged over both time and the realisations of the POMDP.

In the framework of MDPs (for which \(\omega\equiv\sigma\)), one often introduces the _state value function_ \(V_{\pi}(\sigma)=\langle\mathcal{R}[\pi]\mid\sigma_{0}=\sigma\rangle\), which quantifies the quality of the policy \(\pi\) when starting from the state \(\sigma\). A particularly useful function when searching for an optimal policy is the _\(Q\)-value function_ \[\mathcal{Q}_{\pi}(\sigma,\alpha)=\langle\mathcal{R}[\pi]\mid\sigma_{0}=\sigma,\alpha_{0}=\alpha\rangle\,. \tag{6}\] It assesses the value of the policy \(\pi\) when taking the specific action \(\alpha\) in a given state \(\sigma\). Typically, value-based reinforcement-learning algorithms try to learn an estimate \(\mathcal{Q}_{\star}\) of the optimal \(Q\)-value function over all possible policies and use it to extract an optimal deterministic policy \(\pi_{\star}\) as \[\pi_{\star}(\alpha|\sigma)=1\ \text{if}\ \alpha=\alpha_{\pi_{\star}}(\sigma)=\mathrm{argmax}_{\alpha^{\prime}}\,\mathcal{Q}_{\star}(\sigma,\alpha^{\prime}),\ \text{and}\ 0\ \text{otherwise.} \tag{7}\] Such an optimal policy always exists for MDPs, in the sense that it maximises the value function \(V_{\pi}(\sigma)\) for all states \(\sigma\). For our partially-observable settings, the agent does not have full information on \(\sigma\) and the \(Q\)-value function (6) becomes irrelevant to the navigation problem. A policy that is optimal in the same sense as in MDPs is thus no longer guaranteed to exist [35]. Still, as seen above, one can instead use a different optimality criterion and maximise the differential return (5). Following [35], the \(Q\)-value function can then be defined by projecting \(\mathcal{Q}_{\pi}\) on observations, namely \[Q_{\pi}(\omega,\alpha)=\sum_{\sigma\in\mathcal{S}}\mathcal{Q}_{\pi}(\sigma,\alpha)\,P(\sigma|\omega) \tag{8}\] where \(P(\sigma|\omega)\) is the probability of being in state \(\sigma\), given observation \(\omega\).

### A naive strategy

We introduce in this section a policy allowing the swimmer to move reasonably well in the \(x_{1}>0\) direction. We call it the _naive strategy_. It consists in following rather simple rules: If the swimmer has the proper orientation and simultaneously feels no headwind (\(\omega=4,5\)), the sinusoidal force (2) is applied with maximal amplitude \(A_{0}\) in the direction \(\mathbf{p}=\mathbf{e}_{2}\) (\(\alpha=6\)).
If the swimmer is wrongly oriented and faces the \(x_{1}<0\) direction, or if it experiences a headwind (all other observations), then no force is applied and the locomotion is stopped (\(\alpha=3\)). This naive policy is shown in Fig. 4a, using a graphical representation that we will employ later on to describe other policies: the different observations \(\omega\) are represented as six vertically aligned coloured boxes, each colour (from red to blue) standing for the action \(\alpha\) taken when \(\omega\) is observed. This policy breaks the symmetry \(x_{1}\mapsto-x_{1}\) and thus induces a positive drift. It moreover prevents the swimmer from being indefinitely trapped by mechanisms similar to those observed in Sec. III.1 in the absence of any strategy.

We performed numerical simulations of 100 naive swimmers initialised at \(t=0\) at the centre of a cell in a straight configuration, but with different initial orientations spanning \([-\pi/2,\pi/2]\). As we can see from Fig. 4b, the naive strategy leads to a positive average displacement, with a distribution of swimmers that perceptibly spreads with time. Sometimes the swimmers temporarily fall into a trap and their displacement stays approximately constant for rather long times. As seen from the sample of trajectories of Fig. 4c, these trapping events correspond to the swimmer turning several times around a given cell before escaping and pursuing its motion towards \(x_{1}>0\). The quasi-periodic cycles of Fig. 2b are no longer stable and the naive strategy makes endless trapping impossible. Thanks to that, all trajectories asymptotically move toward \(x_{1}>0\) and the dynamics of swimmers that follow this policy becomes statistically stationary and ergodic.

Figure 5 shows more detailed statistics on the displacement of swimmers that follow the naive policy. As can be seen in Fig. 5a, the displacement \(\delta_{\tau}\bar{x}_{1}=\bar{x}_{1}(t+\tau)-\bar{x}_{1}(t)\) approaches a self-averaged linear behaviour \(\delta_{\tau}\bar{x}_{1}\approx V\,\tau\) at large times \(\tau\). The average horizontal speed \(V\) is approximately 0.75 times the speed \(V_{\rm swim}\) that the swimmer has in the absence of an external flow. When zooming in on much shorter timescales (Fig. 5b), one actually observes that this average displacement consists of an alternating sequence of inefficient trapping periods and efficient displacements, during which the swimmer swings smoothly between cells with a speed slightly exceeding \(V_{\rm swim}\). As we will see later, the long-term balance between these two kinds of events is precisely what determines the effectiveness of a given policy. The variance of \(\delta_{\tau}\bar{x}_{1}\) is shown in Fig. 5c. Its dependence on \(\tau\) follows three successive regimes. At short times \(\tau\lesssim 10/\nu\), one has \({\rm Var}\,[\delta_{\tau}\bar{x}_{1}]\propto\tau^{2}\), resulting from swimmers moving with an instantaneous velocity different from \(V\), and thus from deviations \(\propto\tau\) from the average displacement. The corresponding higher-order moments of \(\delta_{\tau}\bar{x}_{1}\) (skewness \(S\) and flatness \(F\)) are shown in Fig. 5d. One observes at small time lags \(S<0\) with \(|S|\ll 1\), and thus an almost-symmetric distribution of \(\delta_{\tau}\bar{x}_{1}\), so that trapping is certainly not contributing much to this regime. Fluctuations are sub-Gaussian, _i.e._ \(F<3\).
At larger times, naive swimmers follow an intermediate regime where the variance of \(\delta_{\tau}\bar{x}_{1}\) grows super-diffusively, approximately as \(\tau^{1.67}\). This regime displays a negative skewness, meaning that trapping is involved. The flatness reaches values above 3, indicating a significant contribution from extreme events. As seen later (Sec. IV.3), this intermediate regime falls in a range during which swimmers have a significant probability of being trapped. It extends to significantly long times, of the order of \(\tau\approx 500/\nu\), above which the displacement becomes a sequence of independent events. The resulting ultimate regime is diffusive, _i.e._ \(\mathrm{Var}\left[\delta_{\tau}\bar{x}_{1}\right]\propto\tau\). The skewness tends asymptotically to \(S=0\) and the flatness decreases to possibly approach \(F=3\).

Figure 4: **(a)** Naive strategy, which consists in swimming horizontally with maximum amplitude (\(\alpha=6\)) whenever the swimmer is rightly oriented and feels a calm or tailwind fluid flow (\(\omega=4\) or \(5\)), and doing nothing (\(\alpha=3\)) otherwise. **(b)** Horizontal displacement of swimmers initialised with various orientations and following the naive strategy; the average is shown as a bold solid line, and the interval defined by the standard deviation as dashed lines. **(c)** Sample of different trajectories in the \((x_{1},x_{2})\) plane.

We aim at finding policies that outperform this naive strategy. For that, we test in the next section various methods of reinforcement learning. It will be important to keep in mind that, even if the swimmer follows a strategy leading to a significant displacement, trapping can be present and result in a significant dependence on history, over times exceeding thousands of undulatory beats.

## IV Reinforcement learning

### \(Q\)-learning

Here, we first test the performance of classical \(Q\)-learning. This method, which has been borrowed from MDPs, has been extensively and successfully applied in the past to optimise the navigation of active swimmers [12; 13; 14; 15; 16; 17].

#### Method

\(Q\)-learning is based on the value-iteration update of the Bellman equation. At each step \(t_{n}=n\Delta t\), the swimmer has at its disposal an estimate \(Q_{t_{n}}\) of the \(Q\)-table. It makes an observation \(\omega_{n}\) of its environment and takes an action according to the running policy, which, in the \(\varepsilon\)-greedy case, is such that \(\alpha_{n}=\mathrm{argmax}_{\alpha}Q_{t_{n}}(\omega_{n},\alpha)\) with probability \(1-\varepsilon\), the other actions being chosen uniformly with probability \(\varepsilon/6\). The swimmer then receives a reward \(R_{n}=\bar{x}_{1}(t_{n+1})-\bar{x}_{1}(t_{n})\) and the \(Q\)-table is updated accordingly. The whole procedure is summarised in Algorithm 1.
```
Parameters: rates \(\lambda\) and \(\gamma\); exploration parameter \(\varepsilon\)
Initialise \(Q\) and \(\omega\)
for \(n=1,2,\ldots\) do
    Take action \(\alpha\) with the \(\varepsilon\)-greedy law given by \(Q(\omega,\cdot)\)
    Evolve the swimmer to the new state \(\sigma^{\prime}\)
    Measure reward \(R\) and observation \(\omega^{\prime}=\Omega(\sigma^{\prime})\)
    \(Q(\omega,\alpha)\leftarrow(1-\lambda\Delta t)\,Q(\omega,\alpha)+\lambda\Delta t\left[R+\mathrm{e}^{-\gamma\Delta t}\max_{\alpha^{\prime}}Q(\omega^{\prime},\alpha^{\prime})\right]\)
    \(\omega\leftarrow\omega^{\prime}\)
end for
```

**Algorithm 1** \(Q\)-learning

In addition to \(\varepsilon\in[0,1]\), which controls how much randomness is put in the learning process, the method depends upon two parameters, which are here appropriately expressed as inverse time scales. The first is the learning rate \(\lambda\), which we chose as the inverse of the time needed by a swimmer to cross one cell with velocity \(V_{\mathrm{swim}}\) in the absence of outer flow, namely here \(\lambda=\nu/40\) for \(A_{0}=0.08\). This rate sets the timescale at which the \(Q\)-table is updated. A smaller \(\lambda\) would have led to adapting the policy with too long a delay compared to the dynamical timescales of the swimmer, and thus to inefficient adjustments. A larger \(\lambda\) would imply that the \(Q\)-table is updated too fast compared to the time actually needed to discern the outcome of a given action. The second parameter is the discount rate \(\gamma\), which sets the horizon of future rewards. It was chosen as the inverse of the time needed by the swimmer to travel across ten cells with \(V_{\mathrm{swim}}\), namely \(\gamma=\nu/400\). The corresponding timescale is at the edge of the long-correlated regime observed in the previous section for the naive policy. Initial entries of the \(Q\)-table are all set to an arbitrary positive number, equal in our case to \(0.25\,L\).

For MDPs, successive iterations of Algorithm 1 lead to convergence of the entries of the \(Q\)-table to the optimal \(Q\)-value function (6) in the limit where \(n\to\infty\) and \(\varepsilon\to 0\) simultaneously. Convergence results rely on the usual stochastic-approximation assumptions on the learning rate and are valid as long as all the state-action pairs are indefinitely visited with a positive probability. The associated empirical greedy policy then converges to the optimal deterministic policy \(\pi_{\star}\) given by Eq. (7). However, such convergence results only hold in the Markovian case. There is no guarantee that they extend to our settings and, actually, counter-examples have been constructed in [35] showing that \(Q\)-learning techniques do not generally apply to POMDPs. We nevertheless test this procedure below.

#### Non-convergence of \(\varepsilon\)-greedy \(Q\)-learning

Figure 6(a) shows the displacement of swimmers during the evolution of \(Q\)-learning for decreasing values of the exploration parameter \(\varepsilon\). All instances lead to a net displacement of the swimmer. It consists of long periods of forward motion interrupted by phases during which the swimmer barely progresses. These alternations become less and less frequent when \(\varepsilon\) decreases. Figure 6(b) shows the time evolution of the policy followed by the swimmer for \(\varepsilon=0.025\). Each extended period of forward motion corresponds to a stabilisation of the running policy.
For instance, between times \(t=0.7\times 10^{6}\,\nu^{-1}\) and \(1.4\times 10^{6}\,\nu^{-1}\), the swimmer maintains an average horizontal velocity \(\approx 0.45\,V_{\mathrm{swim}}\) that is smaller than, but comparable to, the performance of the naive strategy. During this time interval, the swimmer follows a policy that differs from the naive one only by favouring a vigorous horizontal undulation (\(\alpha=0\), bright red) when a headwind is observed (\(\omega=0\) and \(3\)). This temporarily learned policy is however forgotten at times \(t>1.5\times 10^{6}\nu^{-1}\). Other sustainable strategies are selected later on, giving rise to subsequent periods of forward motion with different, but comparable, horizontal velocities. These numerical experiments obtained at varying \(\varepsilon\) allow us to extrapolate what would be obtained if the level of randomness were decreased progressively: as the duration of forward-motion periods expands when \(\varepsilon\) decreases, the learning procedure will probably get stuck on a given policy determined by the history of the swimmer's trajectory, and thus very unlikely to be the optimum. This gives evidence that \(Q\)-learning methods do not easily converge for our problem.

We interpret the above-mentioned loss of memory as a consequence of long-term trapping phases that clearly cannot be detected from our reduced set of observations. The underlying mechanism gets clearer when looking at the time evolution of the \(Q\)-table entries in Fig. 7. The periods of forward motion are associated with an increase of all \(Q_{t}(\omega,\alpha)\), with the current running policy weakly singling out given entries. Once the swimmer enters a phase of quasi immobilisation, this growth stops and all entries of the \(Q\)-table decrease simultaneously, without any possibility of keeping in mind the previously learned strategy. Hence, convergence could in principle only be achieved if the learning rate is small enough to filter out such trapping events, and would thus require running the \(Q\)-learning algorithm for extravagantly long times.

#### An iterative Markovian approximation

Motivated by the suspicion that convergence could require very long times, we test here the idea of approximating the dynamical evolution of the swimmer by an MDP. Our hope is that this approximation will capture the most relevant information of our optimisation problem, namely, the transition probabilities between the states of our environment and the distribution of the rewards obtained by our agent. The advantages of this approach are twofold: first, since the MDP only cares about the transitions and the reward process, abstracting away all the other aspects of the dynamics, the associated learning algorithms will run significantly faster, without having to simulate simultaneously the whole swimmer dynamics; second, this approach separates the issue of non-Markovianity from other potential difficulties. Our procedure consists in constructing a sequence of policies \(\pi_{0}\), \(\pi_{1}\), \(\dots\pi_{k}\) that will hopefully converge to the optimal \(\pi_{\star}\). At each step, we simulate a swimmer that follows the policy \(\pi_{k}\), trying out at every time step \(t=t_{n}\) all possible actions to monitor the new observation and reward at time \(t_{n+1}\).
This is used to construct numerical approximations to the transition probability \(p_{\mathrm{T},k}(\omega^{\prime}|\omega,\alpha)\) of observing \(\omega^{\prime}\) at time \(t+\Delta t\), given that \(\omega\) was observed and action \(\alpha\) was performed at time \(t\), together with the corresponding distribution of rewards \(p_{\mathrm{R},k}(R|\omega,\alpha)\). Both distributions depend of course on \(\pi_{k}\). We then use the approximate probabilities \(p_{\mathrm{T},k}\) and \(p_{\mathrm{R},k}\) to run the \(Q\)-learning algorithm which, because of the Markovian formulation now imposed, is ensured to converge. This yields the optimal policy \(\pi_{k+1}\) associated with the approximate system. This procedure is reiterated, changing the base policy to \(\pi_{k+1}\), until a fixed point is attained. The method is summarised in Algorithm 2.

Figure 7: Time evolution of the different components of the \(Q\)-table obtained for \(\varepsilon=0.025\), as in Fig. 6b. The six panels correspond to the various values of the observation \(\omega\), while the different colours stand for the action \(\alpha\), as labelled.

Figure 6: Results of \(\varepsilon\)-greedy \(Q\)-learning for \(\mathcal{F}=15\), \(U/(\ell\nu)=0.025\), \(\ell/L=1\), \(u_{0}/U=1/5\) and \(A_{0}=0.08\). **(a)** Displacement as a function of time for three different values of the exploration parameter \(\varepsilon\). **(b)** Time evolution of the policy, shown here for \(\varepsilon=0.025\).

The motivation behind this procedure is that, if the Markovian approximation is not too far off, then it is natural to think that the optimal policy \(\pi_{k+1}\) of the approximate system should be at least an improvement on the policy \(\pi_{k}\), if not also the optimal policy when we go back to the real system. Hence, if the optimal policy \(\pi_{\star}\) is a fixed point of our procedure, then the sequence \(\{\pi_{k};k\geq 0\}\) would converge to it, thus solving our problem. We have run this procedure, choosing for the initial policy \(\pi_{0}\) the naive strategy of Sec. III.3. After three iterations the algorithm circled back to the policy encountered at the first iteration, \(\pi_{3}=\pi_{1}\). Hence this proposed procedure does not lead to any improvement with respect to the naive policy. This could again be a sign of the highly non-Markovian nature of our setting. We therefore test in the next section various approximation-based methods that could in principle lead to efficient results for POMDPs.

### Approximation-based methods

In the previous section, we made use of traditional \(Q\)-learning with the discounted return (4) to estimate the action-value function. We applied this method blindly by replacing states with observations and obtained only limited success. Here, we will explore two approaches that belong to the broad class of _approximation methods_ for reinforcement learning [36]: the _semi-gradient differential SARSA_ and the _Actor-Critic policy-gradient_ method. Both use a formulation of the optimisation problem in which value functions are estimated in terms of the differential return (5) instead of the discounted return. The main motivation for using such approximation methods is the partially-observable nature of our problem. In such settings, accurate estimations of the action-value function \(Q\) are difficult, hindering the convergence of exact-solution algorithms like \(Q\)-learning [35].
However, by using approximation methods, such as neural networks or other parametric models, we can represent the policy and value function in a way that takes into account only the available observations rather than the full state. Such methods are flexible and effective and, in particular, they provide a way to trade off between the quality of the solution and computational complexity. This makes them a good choice for problems with large or continuous state spaces, where exact solution methods are not applicable. They can also search for optimal stochastic policies, which can help ensure exploration during the learning process, particularly when the optimal policy may not be deterministic, as is often the case in POMDPs [35], though this is unlikely in our particular case. For these reasons, approximation methods allow us to effectively address the partial-observability issue and achieve good performance, at least in theory, without compromising the underlying theory of reinforcement learning.

#### Semi-gradient differential SARSA

The semi-gradient differential SARSA algorithm is a value-based method, like \(Q\)-learning. It similarly builds on the idea of estimating the action-value function \(Q\) to construct an optimal policy, but uses for that the differential return instead of the discounted return. A key difference between this method and traditional SARSA or \(Q\)-learning is that it involves an approximation of the \(Q\)-function in place of its exact value. We use here the linear parametrisation \(\mathcal{Q}(\sigma,\alpha)\approx\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)=\sum_{ij}\eta_{ij}\,\delta_{\Omega(\sigma),i}\,\delta_{\alpha,j}\), where \(\delta\) is the Kronecker delta, \(\sigma\mapsto\Omega(\sigma)=\omega\) is the observation function introduced in Sec. III.2, and \(\boldsymbol{\eta}\in\mathbb{R}^{6}\times\mathbb{R}^{7}\) denotes the approximation parameters. This approach aggregates together all states leading to the same observation (similarly to what we did for \(Q\)-learning). The partial observability of the problem is then devolved to this specific choice of the approximation. Such a method was used successfully in [37] to find the optimal swimming strategy for Najafi's swimmer [38].

The main idea of semi-gradient differential SARSA is to update the approximation of the value function by combining gradient descent and temporal-difference (TD) learning in order to converge to the parameters \(\boldsymbol{\eta}^{\star}\) that best approximate the optimal \(Q\)-function. The action at the \(n\)-th step is chosen, as in \(Q\)-learning, such that \(\alpha_{n}=\operatorname*{argmax}_{\alpha}\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma_{n},\alpha)\), with possibly an \(\varepsilon\)-greedy step. The resulting procedure is summarised in Algorithm 3.
```
Initialise \(\omega\), \(\alpha\), \(\bar{R}\), and the approximation parameters \(\boldsymbol{\eta}\)
for \(n=1,2,\dots\) do
    Take action \(\alpha\) and evolve to the new state \(\sigma^{\prime}\)
    Measure reward \(R\) and observation \(\omega^{\prime}=\Omega(\sigma^{\prime})\)
    Choose action \(\alpha^{\prime}\) with the \(\varepsilon\)-greedy law given by \(\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma^{\prime},\cdot)\)
    Compute the error \(\delta=R-\bar{R}+\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma^{\prime},\alpha^{\prime})-\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)\)
    \(\bar{R}\leftarrow\bar{R}+\lambda_{1}\Delta t\,\delta\)
    \(\boldsymbol{\eta}\leftarrow\boldsymbol{\eta}+\lambda_{2}\Delta t\,\delta\,\nabla\hat{\mathcal{Q}}_{\boldsymbol{\eta}}(\sigma,\alpha)\)
    \(\omega\leftarrow\omega^{\prime}\), \(\alpha\leftarrow\alpha^{\prime}\)
end for
```

**Algorithm 3** Semi-gradient differential SARSA

Figure 8 reports results obtained with this method. We have here chosen the rates \(\lambda_{1}=0.025\nu\) and \(\lambda_{2}=0.025\nu\), which correspond to the inverse of the times needed by the swimmer to cross one and ten cells with \(V_{\rm swim}\), respectively. The displacements obtained for different values of the exploration parameter \(\varepsilon\) (Fig. 8a) are an order of magnitude smaller than those resulting from the \(\varepsilon\)-greedy \(Q\)-learning algorithm. In addition, results indicate that the learning performance decreases when \(\varepsilon\) decreases, at variance with what was observed for \(Q\)-learning. At the largest value of \(\varepsilon\), one finds forward-moving periods to be much shorter and trapping phases much more frequent. Still, when zooming in on an interval of time when the swimmer significantly progresses, one observes that the local velocity is comparable to those obtained with \(Q\)-learning. As can be seen from Fig. 8b, the corresponding running policy fluctuates considerably, and significant displacement only occurs when the policy is able to maintain, for a significant amount of time, the action \(\alpha=6\) (undulate vertically with maximal amplitude) for the two most favourable observations \(\omega=4\) and \(5\) (corresponding to being rightly oriented with no headwind).

We found that, in our setting, the semi-gradient differential SARSA method is not able to learn properly due to a non-ergodicity of the environment. Indeed, the swimmer is often trapped in a situation where it keeps making the same observation \(\omega=3\) (being rightly oriented but facing a headwind), performs the same action \(\alpha=6\) (undulate vertically with maximal amplitude), and remains indefinitely trapped in this situation. (The curve associated to \(\varepsilon=0.025\) in Fig. 8a is an example of such a situation.) This is due to the fact that in a large set of configurations leading to observation \(\omega=3\), the swimmer performing action \(\alpha=6\) remains in the same set of configurations. Furthermore, the swimmer keeps on being stuck even when it performs other actions, because it does not do so for a long enough time, so that its probability of escaping decreases exponentially fast as \(\varepsilon\to 0\).

#### Actor-Critic policy-gradient method

Policy-gradient methods strongly differ from \(Q\)-learning and semi-gradient differential SARSA in that, instead of learning the function \(Q\), they learn the policy \(\pi\) directly by interacting with the environment.
Additionally, instead of using the temporal-difference rule to learn the estimates, policy-gradient methods are gradient-based, meaning that the policy is itself approximated, similarly to the value function for differential SARSA, by an estimate \(\hat{\pi}_{\mathbf{\theta}}\), which involves a set of parameters \(\mathbf{\theta}\) that are learned using gradient descent. We furthermore use here an _Actor-Critic_ version of such a method. The "actor" represents the policy, while the "critic" estimates the value function. This separation can help improve the stability and convergence of the policy-gradient algorithm, as well as reduce the variance of the gradient samples used to update the policy parameters. Together, the actor and the critic form a pair in which the actor selects actions and the critic evaluates the quality of those actions, providing feedback to the actor to improve its policy.

Figure 8: Results of semi-gradient differential SARSA obtained with the same physical parameters as in Fig. 6 of the previous subsection. **(a)** Time evolution of the displacement for three different values of the exploration parameter \(\varepsilon\). **(b)** Time evolution of the optimal policy, shown here for \(\varepsilon=0.1\). Both panels show as insets a zoom on a time interval during which the swimmer significantly progresses.

The general scheme of the Actor-Critic algorithm is sketched in Fig. 9a. After a change in the environment, both the actor and the critic are informed about the new observation \(\omega\) of the system. The critic, which also has access to the reward, updates its approximation \(\hat{V}_{\eta}\) of the value function and communicates to the actor the temporal-difference (TD) error \(\delta\), which measures the difference between the expected return and the actual return. The actor uses the information that \(\delta\) provides on the quality of the approximated policy \(\hat{\pi}_{\mathbf{\theta}}\) in order to update it and decides, according to the observation \(\omega\), the action to be taken throughout the next step. We choose to represent the policy by the soft-max parametrisation \[\pi(\alpha|\sigma)\approx\hat{\pi}_{\mathbf{\theta}}(\alpha|\sigma)=\sum_{ij}\frac{1}{\mathcal{Z}_{i}}\mathrm{e}^{\theta_{ij}}\delta_{\Omega(\sigma),i}\,\delta_{\alpha,j}, \tag{9}\] with normalising factor \(\mathcal{Z}_{i}=\sum_{j}\mathrm{e}^{\theta_{ij}}\). The approximated policy hence depends on the state \(\sigma\) only through the observation \(\omega=\Omega(\sigma)\). This seamlessly takes into account partial observability by considering only the available information rather than the full state of the system. The policy parameters \(\mathbf{\theta}\in\mathbb{R}^{6}\times\mathbb{R}^{7}\) optimise a _performance measure_ given by the average return \(\bar{R}[\pi_{\mathbf{\theta}}]\) defined in Eq. (5). The gradient-ascent procedure used by the actor to update \(\mathbf{\theta}\) requires approximating the gradient \(\nabla_{\mathbf{\theta}}\bar{R}[\pi_{\mathbf{\theta}}]\) of the performance measure.
We rely on the policy-gradient theorem (see, _e.g._, [36]) \[\nabla_{\mathbf{\theta}}\bar{R}[\pi_{\mathbf{\theta}}] =\left\langle\mathcal{Q}_{\hat{\pi}_{\mathbf{\theta}}}(\sigma,\alpha) \frac{\nabla_{\mathbf{\theta}}\hat{\pi}_{\mathbf{\theta}}(\alpha|\sigma)}{\hat{\pi}_{ \mathbf{\theta}}(\alpha|\sigma)}\right\rangle\] \[=\left\langle\mathcal{Q}_{\hat{\pi}_{\mathbf{\theta}}}(\sigma,\alpha )\,\nabla_{\mathbf{\theta}}\log\hat{\pi}_{\mathbf{\theta}}(\alpha|\sigma)\right\rangle, \tag{10}\] which allows us to instantiate the performance-measure gradient as \(\nabla_{\mathbf{\theta}}\bar{R}[\pi_{\mathbf{\theta}}]\approx\hat{\mathcal{Q}}_{\mathbf{ \eta}}(\sigma,\alpha)\,\nabla_{\mathbf{\theta}}\log\hat{\pi}_{\mathbf{\theta}}(\alpha |\sigma)\), where \(\hat{\mathcal{Q}}_{\mathbf{\eta}}\) is an approximation of the value function, at the hands of the critic, \(\mathbf{\eta}\) being the associated parametrisation parameters. We can use the value function as a baseline for a better estimate of the gradient. Since \(\langle V_{\hat{\pi}_{\mathbf{\theta}}}(\sigma)\,\nabla_{\mathbf{\theta}}\log\hat{\pi }_{\mathbf{\theta}}(\alpha|\sigma)\rangle=0\), the gradient can be rewritten as \[\nabla_{\mathbf{\theta}}\bar{R}[\pi_{\mathbf{\theta}}]=\left\langle A_{\hat{\pi}_{\bm {\theta}}}(\sigma,\alpha)\,\nabla_{\mathbf{\theta}}\log\hat{\pi}_{\mathbf{\theta}}( \alpha|\sigma)\right\rangle, \tag{11}\] where \(A_{\hat{\pi}_{\mathbf{\theta}}}(\sigma,\alpha)=Q_{\hat{\pi}_{\mathbf{\theta}}}(\sigma,\alpha)-V_{\hat{\pi}_{\mathbf{\theta}}}(\sigma)\) is the _advantage function_. We furthermore use that the temporal-difference error of the value function is an unbiased estimate of the advantage function, namely \[A_{\hat{\pi}_{\mathbf{\theta}}}(\sigma,\alpha) =\left\langle\delta\right\rangle,\text{ with } \tag{12}\] \[\delta =R(\sigma_{t},\alpha_{t})-\bar{R}[\hat{\pi}_{\mathbf{\theta}}]+V_{ \hat{\pi}_{\mathbf{\theta}}}(\sigma_{t+\Delta t})-V_{\hat{\pi}_{\mathbf{\theta}}}( \sigma_{t}),\] leading to sample the performance-measure gradient as \(\nabla_{\mathbf{\theta}}\bar{R}[\pi_{\mathbf{\theta}}]\approx\delta\,\nabla_{\mathbf{ \theta}}\log\hat{\pi}_{\mathbf{\theta}}(\alpha_{t},|\,\sigma_{t})\) and use this approximation to update the policy parameters. As for the gradient of the policy, we use the soft-max approximation (9) to write \[\partial_{\theta_{ij}}\log\hat{\pi}_{\mathbf{\theta}}(\alpha|\sigma) =\delta_{\Omega(\sigma),i}\delta_{\alpha,j}-\frac{1}{Z_{i}} \mathrm{e}^{\theta_{ij}}\delta_{\Omega(\sigma),i}\] \[=\delta_{\Omega(\sigma),i}\left[\delta_{\alpha,j}-\hat{\pi}_{\bm {\theta}}(j|\sigma)\right]. \tag{13}\] In practice we use an approximation of the value function \(V_{\hat{\pi}_{\mathbf{\theta}}}(\sigma)\approx V_{\mathbf{\eta}}(\sigma)=\sum_{i}\eta _{i}\delta_{\Omega(\sigma),i}\) with parameters \(\mathbf{\eta}\in\mathbb{R}^{6}\), in order to compute \(\delta\). We trivially get \(\partial_{\eta_{i}}V_{\mathbf{\eta}}(\sigma)=\delta_{\Omega(\sigma),i}\). Summing up these expressions finally yields the procedure presented in Algorithm 4. 
```
Algorithm parameters: rates \(\lambda_{1},\lambda_{2},\lambda_{3}\)
Initialise \(\omega\), \(\alpha\), \(\bar{R}\) and the parameters \(\boldsymbol{\theta}\) and \(\boldsymbol{\eta}\)
Initialise the state \(\sigma\) and the action \(\alpha\)
for \(n=1,2,\dots\) do
    Take action \(\alpha\) and evolve to the new state \(\sigma^{\prime}\)
    Measure reward \(R\) and new observation \(\omega^{\prime}=\Omega(\sigma^{\prime})\)
    Select the next action \(\alpha^{\prime}\sim\hat{\pi}_{\mathbf{\theta}}(\cdot\,|\,\sigma^{\prime})\)
    Compute the TD error \(\delta=R-\bar{R}+\hat{V}_{\mathbf{\eta}}(\sigma^{\prime})-\hat{V}_{\mathbf{\eta}}(\sigma)\)
    \(\bar{R}\leftarrow\bar{R}+\lambda_{1}\Delta t\,\delta\)
    \(\mathbf{\eta}(\omega)\leftarrow\mathbf{\eta}(\omega)+\lambda_{2}\Delta t\,\delta\)
    \(\mathbf{\theta}(\omega,\cdot)\leftarrow\mathbf{\theta}(\omega,\cdot)+\lambda_{3}\Delta t\,\delta\left[\delta_{\alpha,\cdot}-\hat{\pi}_{\mathbf{\theta}}(\cdot\,|\,\sigma)\right]\)
    \(\omega\leftarrow\omega^{\prime}\), \(\alpha\leftarrow\alpha^{\prime}\)
end for
```

**Algorithm 4** Policy gradient / Actor-Critic

We evaluate the performance of the Actor-Critic policy-gradient algorithm on our navigation problem with the learning rates \(\lambda_{1}=5\times 10^{-7}\nu\) and \(\lambda_{2}=\lambda_{3}=5\times 10^{-5}\nu\). The results reported in Fig. 9b show that the swimmer reaches a swimming velocity that is similar to that of the \(\varepsilon\)-greedy \(Q\)-learning algorithm during forward-motion periods. However, unlike the \(Q\)-learning algorithm, which suffers from an enormous variability, the results obtained by the Actor-Critic are more consistent and stable, showing minimal variability across different realisations. As seen in Fig. 9b, the learning process of the swimmer shows the desired behaviour, as the swimming velocity systematically, albeit slowly, improves over time. The evolution of the value function for the different observations (Fig. 9c) uncovers a sizeable part of the story: it highlights how the swimmer initially learns to distinguish that a tailwind is better than no significant wind, which in turn is better than a headwind. Much later in the process, it ends up learning the obvious (to us humans) fact that being rightly oriented is better than being wrongly oriented. Eventually, as can be seen at the very end of this run, it starts to make more precise evaluations, learning that orientation is more important than the wind direction and magnitude. This improvement, which is only reached at the end of the run, indicates that there is still significant potential for further improvement in the swimmer's performance when using such an algorithm.

Regarding the policy, as shown in Fig. 10, the swimmer evolves and adapts its strategy over time in the course of the learning process. The policy starts from a random state where the swimmer is equally likely to choose any of the seven possible actions and is thus basically carried along by the flow. Over time, it learns to select with higher probabilities the actions that are more likely to lead to an improvement in its horizontal displacement. The swimmer, for instance, eventually discovers that action 6 is the most effective when it is oriented in the right direction and the wind is blowing in the right direction or is not significant. This may seem obvious to us, but it took the swimmer a long time to figure it out.
It is worth mentioning that this run of the Actor-Critic algorithm is longer by a factor of 10 compared to the previous runs, but the performance of the swimmer still improves consistently, although at a slower pace than in the early stages of the process. All in all, the Actor-Critic algorithm presents a learning process that is more stable and consistent across runs compared to \(Q\)-learning. This stability leads to a policy that is incrementally improved during learning, resulting in the desired feature of improved performance over time. However, despite its consistent learning process, the swimmer's performance achieved through the Actor-Critic algorithm falls short of the results obtained with the naive strategy and requires substantial computational resources if it is to surpass it.

Figure 9: **(a)** Sketch of the actor-critic algorithm. **(b)** Time evolution of the displacement obtained for the actor-critic algorithm for fixed hyper-parameters. **(c)** (inset) Time evolution of the value function for the six different values of the observation \(\omega\), as labelled.

### Competitive \(Q\)-learning

We have seen in the previous subsections that various methods of reinforcement learning fail to provide a satisfactory answer to our optimisation problem. On the one hand, the idea of bluntly applying methods designed for Markovian systems, such as \(Q\)-learning, suffers from non-convergence. On the other hand, approximation approaches, which were in principle developed to tackle partial observability, face issues related to an extremely slow convergence, if any, making their use ineffective or even prohibitive. Moreover, all the policies that emerged as intermediate or asymptotic outcomes of these trials were found to be significantly less performant than the naive strategy introduced in Sec. III.3. We interpret these difficulties as a consequence of the rather abrupt manner in which we have projected the high-dimensional set of swimmer configurations onto a very small number of discrete observables. Such a reduction of information triggers the chaoticity of the system, including that of the learning process, and this explains the sensitivity of our results to both the method and the particular trajectory followed during iterative optimisation procedures.

In light of these observations, we present here a new perspective. In place of searching for a single efficient policy that would possibly outperform the naive strategy, we propose to construct a set of admissible policies. To make such a construction as easy as possible, we consider 200 different realisations of deterministic \(Q\)-learning (with a vanishing exploration parameter, _i.e._ \(\varepsilon=0\)) that are obtained by choosing randomly the initial orientation of the swimmer. Each realisation of the learning algorithm is run in parallel to the others for a time \(t=2\times 10^{5}\nu^{-1}\). After this duration, the deterministic \(Q\)-learning algorithm has in all cases stabilised to a given policy, even if the entries of the \(Q\)-table have not converged. This evolution to a fixed policy is a result of our decision to eliminate exploration by setting \(\varepsilon=0\). The 200 policies obtained this way are then used to construct our admissible set. Figure 11a shows the asymptotic velocity \(\delta_{\tau}\bar{x}_{1}/\tau\) attained by these 200 instances of \(Q\)-learning, ordered by decreasing efficiency. One observes roughly three classes of policies, which are separated in the plot by vertical dashed lines.
The top 15% perform better than, or comparably to, the naive strategy. The realisations ranked between 15% and 37% outperform the strategy obtained by the Actor-Critic algorithm and give a reasonable displacement of the swimmer. Finally, the worst 63% do not yield any significant displacement. These three regimes are separated by abrupt jumps in the average velocity. As evidenced from Fig. 11b, they correspond to significant changes in the corresponding policies.

Figure 11: Results of 200 realisations of deterministic \(Q\)-learning. **(a)** Average swimming speeds, ordered by decreasing performance. The two dotted lines show the velocities of the naive policy and that obtained with the Actor-Critic algorithm; the two dashed vertical lines mark quasi-discontinuities in the swimmers' performance. **(b)** Strategies that lead to a significant displacement of the swimmer, again ordered from the most performant to the least. The two dashed vertical lines mark the same changes of behaviour as on the left panel.

Figure 10: Time evolution of the approximated policies, shown for each value of the observation \(\omega\in\{0..5\}\). The probability of choosing a given action \(\alpha\) is shown as a coloured area.

The top 15% policies clearly belong to the same category as the naive strategy. They all prescribe a vigorous vertical undulation (\(\alpha=6\)) when the swimmer is favourably oriented and feels no headwind (\(\omega=4\) and \(5\)). They essentially recommend to stop swimming (\(\alpha=3\)) for a right orientation and a headwind (\(\omega=3\)), or when the swimmer is directed the wrong way and experiences a headwind (\(\omega=1\) and \(2\)). They favour horizontal undulations (\(\alpha=0\) and \(1\)) or stopping to swim (\(\alpha=3\)) when the swimmer is wrongly oriented with the flow blowing to the left. These top strategies mostly differ by the actions chosen for \(\omega=0\), \(1\), and \(2\). The separation from the second family of policies is clear in Fig. 11b: it corresponds to a change in the action performed for \(\omega=3\), from stopping to swim to undulating in the horizontal direction. The change separating the second and third categories is just as clear: the policies there stop prescribing a vertical undulation (\(\alpha=6\)) in the most favourable configurations.

When looking in more detail at the 15% top-ranked outcomes of \(Q\)-learning, one notices that the corresponding policies are rather few. They indeed form a set of five admissible policies whose performances are very similar and overtake each other depending on the realisation of the algorithm. In addition to the naive strategy, which can be written as \(\alpha_{\pi}=[3,3,3,3,6,6]\), where the \(i\)-th element of the array corresponds to the action \(\alpha_{\pi}(\omega)\) followed when \(\omega=i\), the other four policies are \(\alpha_{\pi}=[0,3,3,3,6,6]\), \([1,3,3,3,6,6]\), \([3,3,0,3,6,6]\), and \([0,1,3,3,6,6]\). Notice that none of these five policies emerged, either at an intermediate stage or asymptotically, in the various trials of reinforcement learning of the previous subsections. We select these five strategies to define a set of admissible deterministic policies that are potential solutions to our optimal navigation problem. In the next section, we address in more detail their actual performance and robustness when varying the physical settings of our system.
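Before turning to these tests, note that the selection step underlying the competitive approach is conceptually simple and can be summarised by the sketch below (illustrative Python, not the authors' code; `run_deterministic_q_learning` and `measure_mean_velocity` are hypothetical helpers standing for the greedy \(Q\)-learning runs and the long-trajectory velocity measurements described above).

```
# Hypothetical helpers (see text): run_deterministic_q_learning(seed) returns the policy
# stabilised by one greedy (epsilon = 0) realisation, e.g. [3, 3, 3, 3, 6, 6];
# measure_mean_velocity(policy) evaluates its long-term horizontal velocity.
def competitive_q_learning(n_realisations=200, shortlist_size=5):
    """Collect the policies stabilised by independent deterministic Q-learning runs
    and rank the distinct ones by their measured long-term horizontal velocity."""
    policies = []
    for seed in range(n_realisations):
        policies.append(tuple(run_deterministic_q_learning(seed)))

    speeds = {p: measure_mean_velocity(list(p)) for p in set(policies)}
    ranked = sorted(speeds.items(), key=lambda item: item[1], reverse=True)
    return ranked[:shortlist_size]   # the admissible set of top-performing policies
```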
## V Performance and robustness of the admissible policies

### Long-term statistics

We provide here details on the performance of the five admissible strategies obtained from competitive realisations of deterministic \(Q\)-learning in Sec. IV.3. Figure 12a shows the time evolution of the velocity \(\delta_{\tau}\bar{x}_{1}/\tau\) along trajectories that each follow one of the selected policies (velocities are expressed there in units of the swimming speed \(V_{\rm swim}\) in the absence of flow). Unambiguous differences in the performance of the different policies are only visible for \(\tau\gtrsim 10^{6}\nu^{-1}\), the shorter time lags being essentially dominated by violent fluctuations of the displacement. This very slow convergence of the time averages along the swimmer dynamics can clearly be an important source of difficulties when using iterative optimisation algorithms. We hereafter label these trajectories from 1 to 5 using their efficiency ranking. The naive policy is 4 and the diagrams of the other admissible policies are shown in the inset of Fig. 12c. The variances of the displacement over a time \(\tau\) evaluated for the five policies are shown in Fig. 12b. We have here divided them by the time lag \(\tau\) in order to measure effective coefficients of diffusion. One observes almost the same ordering of trajectories (except for 5), suggesting that good performance goes together with weaker fluctuations. All curves saturate to a plateau at large times, indicating a long-term diffusive regime of the horizontal displacement about its average, as already observed for the naive strategy in Sec. III.3. The asymptotic value gives an estimate of the associated coefficient of diffusion. For all policies, it is of the order of the fluid-flow units, namely \(UL\), which is itself of the order of the displacement units \(\simeq V_{\rm swim}\ell\). This means that, on a time \(L/V_{\rm swim}\) needed by the swimmer to travel across a cell, it typically diffuses over a distance equal to the cell size \(L\) itself. This strong diffusion accounts for the observed slow convergence of the average velocity. The order of magnitude of the diffusion exactly corresponds to a finite contribution from trapping. It indicates that on a time \(L/V_{\rm swim}\), the swimmers can remain with a finite probability in the same cell rather than moving to the next one. These considerations become much clearer when measuring the probability distribution of the time \(T_{+1}\) that the swimmer needs to travel from one cell to the next adjacent one. The complementary cumulative distribution functions obtained for the five policies are shown in Fig. 12c. All curves almost collapse on top of each other, up to times corresponding to hundreds of undulatory beats. Admissible policies therefore differ little in their ability to move the swimmer when its conditions are standard. Nonetheless, marked differences are found in the tails of the distributions, which sample trapping events. The two most performant policies (1 and 2) are associated with lower probabilities of getting a long transition time \(T_{+1}\). This can be interpreted as a consequence of the horizontal undulation that both policies recommend when the swimmer is wrongly oriented with a negative fluid velocity (\(\omega=0\)). Such a choice apparently makes a difference with respect to the next two policies (3 and 4), which both display a fatter tail in the distribution of \(T_{+1}\). For these two policies, the swimmer stops undulating when in such a configuration.
Finally, policy 5, which is beaten by the four others, shows a higher probability at smaller values of \(T_{+1}\), possibly indicating that it is more likely to bring about trapping, even if swimmers can then escape faster.

### Robustness with respect to the physical parameters

Here we address the performance of the admissible policies when varying the physical properties of the swimmer. We have performed a set of numerical simulations where we alternatively vary the size ratio \(\ell/L\), the swimmer flexibility \(\mathcal{F}=(\zeta\nu/K)^{1/4}\ell\), or the velocity ratio \(U/V_{\rm swim}\), while keeping the two other parameters constant. We estimated from these simulations average swimming speeds by monitoring again the asymptotic displacements of the swimmers. Figure 13a shows the performance of the five policies obtained when varying the length \(\ell\) of the swimmer. We find that the dependence upon policy is only visible for swimmers that are sufficiently small compared to the cell size, whereas at larger sizes, the five policies perform comparably well. One indeed observes for \(\ell\lesssim 0.8\,L\) that the performance ranking of the policies is completely shuffled. The swimmers following the otherwise efficient policies 1 and 2 barely move towards \(x_{1}>0\), while the best performance is achieved by 3. This can be understood by the fact that tumbling and trapping completely change their natures for short swimmers (or, equivalently, large-scale fluid inhomogeneities). The action of trying to escape by a vigorous vertical swim hence seems less efficient than just stopping to swim and waiting to be conveyed by the flow to a more favourable region. At larger swimmer sizes (\(\ell\gtrsim 0.8\,L\)), the ranking between the various policies is almost independent of \(\ell/L\), even if the various policies seem to asymptotically perform similarly. The swimming speed seems to saturate for \(\ell\gtrsim 1.8\,L\). This is due to the fact that long swimmers are very unlikely to get tumbled by the flow, so that only the actions performed for the observations \(\omega=3\), \(4\), and \(5\) matter, and these are identical for the five admissible policies. Figure 13b shows the dependence upon flexibility. The various policies perform equally well for rigid swimmers (small \(\mathcal{F}\)). In that case, they are almost never bent, nor buckled by the flow. This prevents trapping, and thus does not allow the various policies to display any differences in performance. At the same time, and as seen in Sec. II.2, much energy is dissipated by the elastic forces, hindering efficient swimming motions. The differences between the various strategies are however much more visible for flexible swimmers (large \(\mathcal{F}\)). Policies that efficiently prevent long-term traps (1, 2 and 5) stand clearly out from the two others. This divergence is promoted by flexibility, because the swimmers are more and more likely to get trapped when \(\mathcal{F}\) increases. Finally, Figure 13c shows the results obtained when varying the amplitude \(U\) of the outer fluid flow. For all policies, the average horizontal velocity decreases from the swimming speed in the absence of flow (\(U=0\)) to very small values for strong fluid flows. None of the admissible policies leads to any significant displacement of the swimmers for fluid velocities exceeding \(\simeq 0.045\,\ell\nu\simeq 2.5\,V_{\rm swim}\). It seems from our measurements that the performance ranking between the five policies does not depend on \(U\).
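Before moving to unsteady flows, the following is a minimal sketch of how the long-time diagnostics of Sec. V.1 (mean velocity, effective diffusion, and the distribution of cell-crossing times \(T_{+1}\)) can be extracted from a recorded trajectory; the sampling interval, time lag, cell size, and the synthetic trajectory used in the example are illustrative assumptions, not data from the study.

```python
import numpy as np

def long_time_statistics(x, dt, tau, cell_size):
    """Mean velocity, effective diffusion, and cell-crossing times from a trajectory.

    `x` is the horizontal position x_1(t) sampled every `dt`.  The effective
    diffusion follows the convention of the text: variance of the displacement
    over a lag `tau`, divided by `tau`.
    """
    x = np.asarray(x, dtype=float)
    lag = int(round(tau / dt))
    dx = x[lag:] - x[:-lag]                  # displacements over the time lag tau
    mean_velocity = dx.mean() / tau
    effective_diffusion = dx.var() / tau

    # first-passage times T_{+1} to advance by one cell, restarted after each crossing
    crossing_times, start, target = [], 0, x[0] + cell_size
    for i, xi in enumerate(x):
        if xi >= target:
            crossing_times.append((i - start) * dt)
            start, target = i, xi + cell_size
    return mean_velocity, effective_diffusion, np.asarray(crossing_times)

def ccdf(samples):
    """Complementary cumulative distribution P(T_{+1} > t) from sampled times."""
    t = np.sort(np.asarray(samples, dtype=float))
    return t, 1.0 - np.arange(1, t.size + 1) / t.size

# Example on a synthetic drifting random walk standing in for a swimmer trajectory.
rng = np.random.default_rng(1)
dt = 1e-2
x = 0.05 * np.arange(200_000) * dt + np.cumsum(rng.normal(scale=0.02, size=200_000))
v, D, T = long_time_statistics(x, dt=dt, tau=50.0, cell_size=1.0)
t_vals, p_tail = ccdf(T)
```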
### Tests in two-dimensional unsteady flows

Figure 13: Robustness of the admissible strategies 1 to 5 when varying the swimmers' physical parameters. All results were obtained for \(u_{0}/U=0.2\) and \(A_{0}=0.08\). **(a)** Average swimming speed as a function of the ratio between the swimmer length \(\ell\) and the flow length scale \(L\) (for \(\mathcal{F}=15\) and \(U=0.025\,\ell\nu\) both fixed). **(b)** Same, as a function of the swimmer flexibility \(\mathcal{F}=(\zeta\nu/K)^{1/4}\ell\) and the flow length scale \(L\) (for \(\ell/L=1\) and \(U=0.025\,\ell\nu\)). **(c)** Same as before, varying this time the fluid flow velocity \(U\) (for \(\ell/L=1\) and \(\mathcal{F}=15\)). On each panel, the vertical dashed line shows the parameter value used in earlier sections.

To assess further the robustness of the proposed policies, we now consider the case where the swimmers are moving in a more realistic flow that solves the incompressible Navier-Stokes equations. The fluid velocity field, in place of being a steady cellular flow, is now a solution of

\[\rho_{\mathrm{f}}\left[\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla\mathbf{u}\right]=-\nabla p+\mu\nabla^{2}\mathbf{u}-\alpha\mathbf{u}+\nabla^{\perp}F,\]
\[\nabla\cdot\mathbf{u}=0, \tag{14}\]

where \(\rho_{\mathrm{f}}\) is the fluid mass density, assumed constant, \(\mu\) is its dynamic viscosity, \(\alpha\) is a friction coefficient accounting for the two-dimensional confinement of the flow, and \(\nabla^{\perp}F\) is an external incompressible force that maintains the flow in motion. We choose the stationary cellular force \(F=(\alpha UL/\pi)\cos(\pi x_{1}/L)\,\cos(\pi x_{2}/L)\), with a forcing amplitude \(U\) and a spatial period \(L\) that set the large velocity and length scales of the flow. The dynamics then depends upon two non-dimensional parameters: the viscous Reynolds number \(Re_{\mu}=\rho_{\mathrm{f}}U\,L/\mu\), which balances inertia and viscous dissipation, and the friction Reynolds number \(Re_{\alpha}=\rho_{\mathrm{f}}U/(L\alpha)\), which balances inertia and friction. Depending on them, the flow might bifurcate between different regimes [39; 40]. We assume \(Re_{\mu}\gg 1\), so that viscous dissipation acts only at small scales, possibly yielding a direct turbulent cascade of enstrophy. By contrast, \(Re_{\alpha}\) is used as a control parameter. With this choice one recovers, when \(Re_{\alpha}\ll 1\), the stationary cellular flow that has been previously considered. When \(Re_{\alpha}\) increases, the flow transitions to a turbulent regime where it is unsteady and chaotic. Illustrations of the associated vorticity fields are given in Figs. 14a and b. We have performed several numerical simulations of the fluid velocity, varying the friction Reynolds number \(Re_{\alpha}\). We used a pseudo-spectral solver with \(256^{2}\) collocation points, second-order Runge-Kutta time marching, and implicit integration of the linear viscous and friction terms. The velocity is assumed \(2\pi\)-periodic and we set \(L=\pi/2\). Various swimmers are embedded in the flow (20 for each policy from 1 to 5) and we monitor their progression toward \(x_{1}>0\). In all simulations, both the physical and the navigation parameters are kept the same as in Sec. V.1, namely \(\mathcal{F}=15\), \(U=0.025\,\ell\nu\), \(\ell/L=1\), \(u_{0}/U=0.2\), and \(A_{0}=0.08\). The average horizontal speed of such swimmers is reported in Fig. 14c as a function of the friction Reynolds number.
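A minimal sketch of such a solver is given below: it integrates Eq. (14) in vorticity form with the cellular forcing, treats the linear viscous and friction terms with an exact integrating factor, and uses a Heun (second-order Runge-Kutta) step for the advection term. The grid size matches the text, \(\rho_{\mathrm{f}}\) is set to 1, and the parameter values, time step, and initial condition are illustrative assumptions rather than those used for the results reported here.

```python
import numpy as np

N = 256                                   # collocation points per direction (as in the text)
Lbox = 2 * np.pi                          # periodicity of the velocity field
Lcell = np.pi / 2                         # spatial period L of the cellular forcing
U, alpha, nu = 1.0, 0.5, 1e-3             # forcing amplitude, friction, viscosity (illustrative)

x = np.linspace(0.0, Lbox, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_safe = np.where(K2 == 0.0, 1.0, K2)    # avoid dividing by zero at the mean mode
dealias = (np.abs(KX) < 2.0 / 3.0 * np.abs(k).max()) & (np.abs(KY) < 2.0 / 3.0 * np.abs(k).max())

# Cellular forcing F; its contribution to the vorticity equation is curl(grad_perp F) = Laplacian F
F = (alpha * U * Lcell / np.pi) * np.cos(np.pi * X / Lcell) * np.cos(np.pi * Y / Lcell)
forcing_hat = -K2 * np.fft.fft2(F)

def nonlinear(w_hat):
    """Advection term -u.grad(omega) plus forcing, evaluated pseudo-spectrally."""
    psi_hat = -w_hat / K2_safe            # stream function: Laplacian(psi) = omega
    u = np.real(np.fft.ifft2(-1j * KY * psi_hat))
    v = np.real(np.fft.ifft2(1j * KX * psi_hat))
    wx = np.real(np.fft.ifft2(1j * KX * w_hat))
    wy = np.real(np.fft.ifft2(1j * KY * w_hat))
    return -np.fft.fft2(u * wx + v * wy) * dealias + forcing_hat

dt, n_steps = 1e-3, 1000
decay = np.exp(-(nu * K2 + alpha) * dt)   # exact treatment of the linear viscous and friction terms
w_hat = np.fft.fft2(1e-3 * np.random.default_rng(0).standard_normal((N, N)))

for _ in range(n_steps):                  # integrating-factor Heun scheme (second-order Runge-Kutta)
    k1 = nonlinear(w_hat)
    k2 = nonlinear((w_hat + dt * k1) * decay)
    w_hat = w_hat * decay + 0.5 * dt * (k1 * decay + k2)
```

With \(\rho_{\mathrm{f}}=1\), the friction Reynolds number of such a run is \(Re_{\alpha}=U/(L\alpha)\), so sweeping the friction coefficient at fixed forcing amplitude and period reproduces the control parameter used above.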
Figure 14: Swimmers immersed in a non-steady flow that follow the five admissible policies. Left panels: snapshot of the fluid vorticity \(\omega=\partial_{1}u_{2}-\partial_{2}u_{1}\) (contour lines in background), together with the instantaneous position of swimmers coloured according to the policy they follow, for **(a)** \(Re_{\alpha}\simeq 2\) and **(b)** \(Re_{\alpha}\simeq 9.5\). Right panel **(c)**: Average swimming speed as a function of the friction Reynolds number \(Re_{\alpha}\) for the five admissible policies, as labeled.

At low \(Re_{\alpha}\), one recovers the same performance ranking as previously observed in stationary cellular flows. However, for \(Re_{\alpha}>4\), the flow develops a chaotic regime characterised by open streamlines with rather strong jets where the swimmer might be entrained in an inappropriate direction. The performance of policies 3 and 4 drops down significantly, while the other policies continue to operate relatively well. This dissimilarity can be explained by the contrasting responses to trapping observed in Sec. V.1. Policies 1, 2 and 5 have in common that they promote a horizontal undulation when the swimmer is wrongly oriented with a headwind. This action allows the swimmer to move transversally and escape strong persistent jets that would otherwise sweep it toward \(x_{1}<0\). This apparently makes a noticeable difference at intermediate values of \(Re_{\alpha}\).

## VI Concluding remarks

We have studied in this paper the question of optimising the displacement of undulatory, deformable micro-swimmers evolving in a prescribed, non-homogeneous outer flow. Our physical model imposes close links between the macroscopic displacement of the swimmer and its microscopic active deformation that induces motility. This clearly differs from standard optimal-navigation problems, which generally assume a scale separation between these two mechanisms that is sufficiently large to consider them independent from each other. We used reinforcement-learning methods to address this problem, trying constantly to interpret the outcomes of our approaches in terms of the underlying physics. An important message that we want to convey is the necessity of determining the relevant physical timescales of the problem. This leads not only to choosing appropriate hyperparameters of the learning schemes, but also to estimating and understanding their convergence rate. In our settings, the swimmer's configurations form a high-dimensional set from which we arbitrarily decided to exploit only very partial information. However, these settings happened to constitute a clear instance where the prescription of only a limited knowledge of the state of the agent has drastic impacts on the optimisation of the decision process. We have tested several methods, ranging from simple \(Q\)-learning to more sophisticated approximation methods. All these trials lead to prohibitively long convergence times, if not infinite ones. In our opinion, this is due to the fact that the information on the swimmer's configuration is so coarse that our problem deviates in a significant manner from the usual Markovian framework. This, combined with chaotic dynamics, leads to tremendous fluctuations with respect to initial data that jeopardise the outcomes of reinforcement-learning procedures. The combination of a very partially observable character with a highly chaotic nature of the system is certainly a feature shared by many other practical decision problems.
It would be of significant interest to formalise such connections better, for instance by evaluating the stability and ergodicity of the global dynamical system defined as the union of the iterative learning procedure and the underlying dynamics. Despite these difficulties, we have proposed an alternative approach based on concurrent realisations of reinforcement learning. Instead of limiting the optimisation procedure to searching for a unique satisfactory approximation of the optimal policy, we shifted our objective to constructing an almost-comprehensive set of admissible strategies whose performance and robustness might be assessed subsequently. The case we have considered is particularly rich, while remaining tractable. The set of admissible strategies was obtained in a quite simple manner by running different instances of deterministic \(Q\)-learning, whose results proved to be particularly sensitive to the specific initialisation of the algorithm. Moreover, the set constructed this way reduces to only five different admissible policies, making any systematic assessment of their efficiencies rather easy. Still, as demonstrated in Sec. V, the performance of each of these policies can vary appreciably when changing the physical parameters of the swimmer or the type of fluid flow in which it is immersed. Such a systematic investigation would have been impossible if one had to solve, for each setting, an expensive optimisation problem. Finally, let us stress that most of the difficulties we faced could stem from the arbitrary choice of the limited observables and actions that we considered in the decision process. The motivation for such a prescription was mainly coming from practical applications. In general, the amount of accessible information and of possible manoeuvres is strongly constrained by the design, cost, and efficiency of the sensors and engines that equip a micro-robot, or by the primitive nature of the micro-organisms under consideration. However, it could well be that the observables and actions that we have chosen are not enough for this physical model and the associated navigation problem. It would thus be interesting to repeat this analysis and the reinforcement-learning trials by adding, at both ends of the decision process, encoder-decoder neural networks that would automatically extract and redistribute the relevant information. Interpreting the encoded information could be highly pertinent to the design of optimal sensors and actuators and their implementation in practical applications.

###### Acknowledgements.

The authors are grateful to the OPAL infrastructure from Universite Cote d'Azur for providing computational resources. This work received support from the UCA-JEDI Future Investments funded by the French government (grant no. ANR-15-IDEX-01) and from the Agence Nationale de la Recherche (grant no. ANR-21-CE30-0040-01).
2310.03839
Davydov-Yetter cohomology for Tensor Triangulated Categories
One way to understand the deformation theory of a tensor category $M$ is through its Davydov-Yetter cohomology $H_{DY}^{\ast}(M)$ which in degree 3 and 4 is known to control respectively first order deformations of the associativity coherence of $M$ and their obstructions. In this work we take the task of developing an analogous theory for the deformation theory of tensor triangulated categories with a focus on derived categories coming from algebraic geometry. We introduce the concept of perfect pseudo dg-tensor structure $\Gamma$ on an appropriate dg-category $\mathscr{T}$ as a truncated dg-lift of a tensor triangulated category structure on $H^{0}(\mathscr{T})$ and we define a double complex $DY^{\ast,\ast}(\Gamma)$ and we see that the 4th cohomology group $HDY^{4}(\Gamma)$ of the total complex of $DY^{\ast,\ast}(\Gamma)$ contains information about infinitesimal first order deformations of the tensor structure.
Angel Israel Toledo Castro
2023-10-05T18:49:52Z
http://arxiv.org/abs/2310.03839v1
# Davydov-Yetter cohomology for tensor triangulated categories

###### Abstract.

One way to understand the deformation theory of a tensor category \(M\) is through its Davydov-Yetter cohomology \(H^{*}_{DY}(M)\), which in degree 3 and 4 is known to control, respectively, first order deformations of the associativity coherence of \(M\) and their obstructions. In this work we take up the task of developing an analogous theory for the deformation theory of tensor triangulated categories, with a focus on derived categories coming from algebraic geometry. We introduce the concept of a perfect pseudo dg-tensor structure \(\Gamma\) on an appropriate dg-category \(\mathscr{T}\) as a truncated dg-lift of a tensor triangulated category structure on \(H^{0}(\mathscr{T})\), we define a double complex \(DY^{*,*}(\Gamma)\), and we see that the 4th cohomology group \(HDY^{4}(\Gamma)\) of the total complex of \(DY^{*,*}(\Gamma)\) contains information about infinitesimal first order deformations of the tensor structure.

###### Contents

* 1 Introduction
* 2 Lifting tensor triangulated categories
* 2.1 Preliminaries of dg-categories
* 2.2 Pseudo dg-tensor structures
* 3 Davydov-Yetter cohomology
* 3.1 The abelian case
* 3.2 Deforming tensor triangulated structures

## 1. Introduction

The study of tensor categories, nicely behaved k-linear abelian categories equipped with a rigid symmetric monoidal category structure, has been an active area of research in the past decades and has been seen as a successful tool in representation theory ever since the work of Saavedra Rivano in Tannakian categories [10]. The deformation theory of these categories was first introduced and studied by Davydov, Yetter and Crane-Yetter in [1, 2, 3, 4, 5]. In concrete terms, this deformation theory is a deformation of the coherence data of the monoidal category or monoidal functor structures. Given a tensor category \((\mathscr{A},\otimes)\), one defines a complex \(DY^{*}(\mathscr{A},\otimes)\) given by natural morphisms

\[A_{1}\otimes(A_{2}\otimes(\cdots\otimes(A_{n-1}\otimes A_{n})\cdots))\to((\cdots(A_{1}\otimes A_{2})\otimes\cdots)\otimes A_{n-1})\otimes A_{n}\]

where \(A_{i}\) are possibly different objects of \(\mathscr{A}\). The cohomology of this complex is known to give information on the infinitesimal deformations of coherence conditions. In particular, the 3rd cohomology group \(HDY^{3}(\mathscr{A},\otimes)\) controls the first order infinitesimal deformations of the associativity condition of the monoidal structure \(\otimes\), and the group \(HDY^{4}(\mathscr{A},\otimes)\) controls the obstructions to these deformations ([1, Exercise 7.22.2]). If we denote by \(\alpha_{A,B,C}:A\otimes(B\otimes C)\to(A\otimes B)\otimes C\) the natural associativity morphisms of the monoidal structure \(\otimes\), by a first order deformation of \(\alpha\) we mean a natural family of morphisms

\[\alpha^{\prime}_{A,B,C}:A\otimes^{\prime}(B\otimes^{\prime}C)\to(A\otimes^{\prime}B)\otimes^{\prime}C\]

such that \(\alpha^{\prime}\) is the associativity data of a monoidal structure on a \(k[\epsilon]/\epsilon^{2}\)-linear tensor category \(\mathscr{A}^{\prime}\) whose objects are the same as those of \(\mathscr{A}\) and whose tensor bifunctor \(\mathscr{A}^{\prime}\times\mathscr{A}^{\prime}\to\mathscr{A}^{\prime}\) is \(\otimes^{\prime}\), and such that \(\alpha=\alpha^{\prime}\) when restricted to \(k\). In this work, we present a version of these constructions for tensor triangulated categories.
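Before turning to the triangulated setting, it may help to spell out the abelian statement in a hedged form (we work in the strict case, and sign conventions vary between references): writing \(\alpha^{\prime}=\alpha+\epsilon\alpha^{(1)}\) over \(k[\epsilon]/\epsilon^{2}\), the pentagon axiom for \(\alpha^{\prime}\) holds at first order in \(\epsilon\) precisely when \(\alpha^{(1)}\) satisfies the Davydov-Yetter 3-cocycle condition

\[\mathrm{id}_{A}\otimes\alpha^{(1)}_{B,C,D}\;-\;\alpha^{(1)}_{A\otimes B,C,D}\;+\;\alpha^{(1)}_{A,B\otimes C,D}\;-\;\alpha^{(1)}_{A,B,C\otimes D}\;+\;\alpha^{(1)}_{A,B,C}\otimes\mathrm{id}_{D}\;=\;0,\]

while cocycles of the form \(d(f)\), for \(f\) a natural endomorphism of \(\otimes\), correspond (up to conventions) to deformations that can be removed by an \(\epsilon\)-linear change of the tensor structure; this is why it is the cohomology group \(HDY^{3}\), rather than the group of cocycles, that classifies first order deformations.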
To us, a tensor triangulated category is a \(k\)-linear triangulated category \(\mathscr{T}\) equipped with a rigid symmetric monoidal category structure \(\boxtimes\). In particular we shall focus on tensor triangulated categories coming from algebraic geometry, meaning that the underlying triangulated category is the derived category of perfect complexes \(Perf(X)\) on a nicely behaved scheme \(X\). Our motivation comes from an interest in tensor triangulated geometry as initiated by Balmer in [1, 1], where a general construction is described to produce a locally ringed space \(Spc(\mathscr{T},\boxtimes)\) from a general tensor triangulated category \((\mathscr{T},\boxtimes)\). While this space only depends on the unit object and the bifunctor in the monoidal structure on \(\mathscr{T}\), we believe that there might be useful information on the categorical side of things, reflecting how the geometry of \(X\) relates to its derived category of perfect complexes \(Perf(X)\). In order to proceed with our constructions it is necessary to work with dg-enhancements of our triangulated categories. A dg-category is a category enriched over \(\mathscr{C}(k)\). It is known that our derived categories \(Perf(X)\) have unique dg-enhancements, meaning a dg-category \(\mathscr{T}\) such that its homotopy category \(H^{0}(\mathscr{T})\) is equivalent as a triangulated category to \(Perf(X)\), where \(H^{0}(\mathscr{T})\) is the category with the same objects as \(\mathscr{T}\) and, for any two objects \(X,Y\in H^{0}(\mathscr{T})\), \(Hom_{H^{0}(\mathscr{T})}(X,Y):=H^{0}(Hom_{\mathscr{T}}(X,Y))\). The reason we need to work at the level of dg-categories is that we will be exploiting Toen's Morita theorem from [10]. Recall that the classical Morita theorem completely characterizes additive cocontinuous functors between module categories \(R-Mod\) and \(S-Mod\): the category of such functors is equivalent to the category of \(R-S\)-bimodules. We want then to characterize the exact bifunctors \(\boxtimes:\mathscr{T}\times\mathscr{T}\to\mathscr{T}\) that form part of a tensor triangulated category structure \((\mathscr{T},\boxtimes)\) in terms of certain bimodules. However, as is well known, triangulated categories don't always behave sufficiently well, and no reasonable version of this theorem holds for triangulated functors. At the level of dg-categories, however, one such theorem is true when one takes into account equivalences of dg-categories up to weak equivalences in a certain model category structure on the category of dg-categories. Inspired by [11], where classical Morita theory was used to characterize additive closed symmetric monoidal category structures on a category of \(R\)-modules, we use Toen's Morita theorem to do the same for tensor triangulated category structures, in terms of structures we will call perfect pseudo dg-tensor structures on a given dg-category \(\mathscr{T}\). These structures can be thought of as a lift of tensor triangulated category structures to the dg-categorical level up to a given homotopy.
We will proceed as follows: In Section 2 we will go over the general theory of dg-categories in order to properly state the Morita theorem for dg-categories; then we will give a definition of pseudo dg-tensor structures (2.33) and perfect pseudo dg-tensor structures (2.35) by using n-fold dg-bimodules (2.30); we will then show that these structures induce tensor triangulated categories at the homotopy level (2.36) and give conditions for a bifunctor \(\boxtimes\) that is part of a tensor triangulated category structure on a triangulated category \(\mathscr{T}\) to lift to a perfect pseudo dg-tensor structure on a dg-enhancement of \(\mathscr{T}\). Finally, in Section 3 we will define the Davydov-Yetter double complex associated to a perfect pseudo dg-tensor structure \(\Gamma\) (3.12) and the Davydov-Yetter cohomology of \(\Gamma\), \(HDY^{*}(\Gamma)\), and will prove our main theorem:

**Theorem 1.1**.: _(Theorem 3.15) Let \(\mathscr{T}\) be a dg-category and let \(\Gamma\) be a pseudo dg-tensor structure on \(\mathscr{T}\). Then to any element of \(HDY^{4}_{\text{dg}}(\mathscr{T})\) we can associate an equivalence class of infinitesimal deformations of order 1 of the associativity condition of \(\Gamma\)._

Throughout this work, unless stated otherwise, \(k\) will denote a commutative ring, and we denote by \(\mathscr{C}(k)\) the category of cochain complexes over \(k\). By a scheme or variety we mean a scheme or variety over \(k\). The results in this work were obtained as part of the author's PhD thesis at the Laboratoire J.A. Dieudonne at the Universite Cote d'Azur. The author would like to thank his advisor Carlos Simpson for many discussions, and Ivo Dell'Ambrogio and Bertrand Toen for their careful and valuable comments on the thesis manuscript. The PhD thesis was partially financed by the CONACyT-Gobierno Frances 2018 doctoral scholarship.

## 2. Lifting tensor triangulated categories

We will start this section by giving a quick review of the homotopy theory of dg-categories as developed by Toen in [10]. Our main goal is to be able to present how the Morita theorem applies to derived categories of quasi-compact and separated schemes. Afterwards we will take inspiration from the work of Hovey in [11] and present a characterization of a lift of a tensor triangulated category in terms of bimodules at the dg-level by using Toen's Morita theorem. Let us very briefly give the following series of definitions for a category of noncommutative derived schemes. This theory plays no substantial role in what follows, but it provides an appropriate general setting in which we can work. The reader can restrict themselves to working with dg-categories of perfect complexes coming from smooth projective varieties.

**Definition 2.1**.: _A (derived) noncommutative scheme \(X\) over \(k\) is a k-linear dg-category equivalent to \(Perf_{dg}(R)\) where \(R\) is a cohomologically bounded dg-algebra over \(k\)._
Furthermore, if \(X\) is a separated scheme, then it is proper if and only if its dg-category \(Perf_{dg}(X)\) is proper. **Definition 2.3**.: _A noncommutative scheme \(X\) is regular if its triangulated category has a strong generator._ Similarly, when \(X\) is a quasi-compact and separated scheme then it is regular as a noncommutative scheme if and only if it can be covered by open affine's \(Spec(R_{i})\) with each \(R_{i}\) being of finite global dimension. **Definition 2.4**.: _A noncommutative scheme \(X\) is smooth over \(k\) if the bimodule \((x,y)\mapsto X(x,y)\) is a compact object of \(X\otimes X^{op}\)-Mod._ As with the rest of the properties, in the case where \(X\) is of finite type, a commutative scheme is smooth if and only if its category of perfect modules is smooth. Finally we have: **Definition 2.5**.: _We say a dg-category is saturated if it is proper, smooth and pretriangulated and its triangulated category is idempotent complete._ A version of the story we have told so far about derived categories of smooth projective varieties regarding Serre functors, semi-orthogonal decompositions and even of Toen's Morita theorem can be reproduced for noncommutative schemes. We refer to [10] for a more through summary of the theory developed so far. ### Preliminaries of dg-categories The contents of this subsection are merely expository, a more detailed outlook of the theory can be consulted for example in [12, 13, 14]. We start with some basic definitions: **Definition 2.6**.: _A dg-category \(\mathscr{T}\) is a category enriched over the closed monoidal category \(\mathscr{C}(k)\), the category of cochain complexes over \(k\)._ _Similarly, let \(\mathscr{T},\mathscr{T}^{\prime}\) be dg-categories, a dg-functor \(\mathscr{F}:\mathscr{T}\to\mathscr{T}^{\prime}\) is a functor enriched over \(\mathscr{C}(k)\)._ By considering enriched functors and enriched natural transformations we can then form a dg-category of functors \(Fun_{dg}(\mathscr{T},\mathscr{T}^{\prime})\) between dg-categories \(\mathscr{T}\) and \(\mathscr{T}^{\prime}\). **Definition 2.7**.: _Let \(\mathscr{T}\) be a dg-category. The homotopy category \(H^{0}(\mathscr{T})\) of \(\mathscr{T}\) is the category with the same class of objects as \(\mathscr{T}\) and, if \(x,y\in H^{0}(\mathscr{T})\) we let the Hom sets be given as_ \[Hom_{H^{0}(\mathscr{T})}(x,y):=H^{0}(Hom_{\mathscr{T}}(x,y))\] _The homotopy functor \(H^{0}(\mathscr{F}):H^{0}(\mathscr{T})\to H^{0}(\mathscr{T}^{\prime})\) is the induced functor at the level of homotopy categories._ Just as we do in categories, for any given dg-category \(\mathscr{T}\) there exists an opposite dg-category \(\mathscr{T}^{op}\) given by the same objects as those of \(\mathscr{T}\) and whose Hom complexes are, for two objects \(x,y\in\mathscr{T}^{op}\), the cochain complex \(Hom_{\mathscr{T}}(y,x)\). We are mainly interested in working with a module theory for dg-categories in the same way we do for regular or dg-algebras. **Definition 2.8**.: _Let \(\mathscr{T}\) be a dg-category, the category of (right) dg-modules over \(\mathscr{T}\) is the dg-category of dg-functors \(\mathscr{T}^{op}\to\mathscr{C}(k)\). We will denote this dg-category by \(\mathscr{T}^{op}-Mod\)._ _Analogously we have a notion of left \(\mathscr{T}\)-module if we consider functors \(\mathscr{T}\to\mathscr{C}(k)\). 
_This category will in turn be denoted simply by \(\mathscr{T}-Mod\)._

Let \(\mathscr{T}\) be a dg-category and \(x\in\mathscr{T}\) an object; we then have the module

\[h^{x}:=Hom_{\mathscr{T}}(\_,x):\mathscr{T}^{op}\to\mathscr{C}(k)\]

which associates to \(y\in\mathscr{T}\) the cochain complex \(Hom_{\mathscr{T}}(y,x)\). It is natural to consider the following definition:

**Definition 2.9**.: _Let \(\mathscr{T}\) be a dg-category; the Yoneda embedding is the dg-functor_

\[h^{\ast}:\mathscr{T}\to\mathscr{T}^{op}-Mod\]

_which maps \(x\in\mathscr{T}\) to \(h^{x}\)._

We can justify this nomenclature as it is possible to show that this dg-functor is fully faithful, in the sense that there is an isomorphism of complexes for any \(x,y\in\mathscr{T}\)

\[Hom_{\mathscr{T}^{op}-Mod}(h^{x},h^{y})\cong Hom_{\mathscr{T}}(x,y)\]

We also have a co-Yoneda embedding given by the dg-functor

\[h_{\_}\colon\mathscr{T}^{op}\to\mathscr{T}-Mod\]

which takes \(x\in\mathscr{T}\) and maps it to the left module \(Hom_{\mathscr{T}}(x,\_)\colon\mathscr{T}\to\mathscr{C}(k)\). As usual we call a right \(\mathscr{T}\)-module representable if it is equivalent in \(\mathscr{T}^{op}-Mod\) to a right module of the form \(h^{x}\) for some \(x\in\mathscr{T}\). As is the case for enriched categories, one can define a tensor product \(\otimes\) of dg-categories which makes \(dg-cat\) into a closed monoidal category.

**Definition 2.10**.: _If \(\mathscr{T},\mathscr{T}^{\prime}\) are dg-categories, we define their tensor product as the dg-category \(\mathscr{T}\otimes\mathscr{T}^{\prime}\) with objects given by pairs \((x,x^{\prime})\in\mathscr{T}\times\mathscr{T}^{\prime}\), and cochain complexes between pairs of objects_

\[Hom_{\mathscr{T}\otimes\mathscr{T}^{\prime}}((x,x^{\prime}),(y,y^{\prime})):=Hom_{\mathscr{T}}(x,y)\otimes_{\mathscr{C}(k)}Hom_{\mathscr{T}^{\prime}}(x^{\prime},y^{\prime})\]

_and with composition given entry-wise in the obvious way._

Now that we are able to take these tensor products, we can consider a particular sort of dg-module which will play an essential role in our theory.

**Definition 2.11**.: _Let \(\mathscr{T},\mathscr{T}^{\prime}\) be dg-categories. A left module \(\mathscr{T}\otimes\mathscr{T}^{\prime op}\to\mathscr{C}(k)\) will be called a dg-bimodule over \(\mathscr{T}\) and \(\mathscr{T}^{\prime}\)._

Equivalences between enriched categories turn out to be too strong a requirement for our goal of a Morita theorem for dg-categories. It is for this reason that there was a need to develop a homotopy theory for these categories, so as to obtain the right notion of equivalence. This was done in the language of model category structures. In analogy with the simplicial category setting, we will now describe the Dwyer-Kan model category structure on the category of (small) dg-categories over \(k\), \(dg-cat_{k}\). We will not only need to put a model structure on the whole category of dg-categories, but we will also be interested in putting model category structures on specific sorts of \(\mathscr{C}(k)\)-enriched categories in a way that makes the enrichment compatible with the model structure. The core of some arguments takes place precisely in the interaction between the global homotopy category of dg-categories and the internal model structure that we can put on a single specific \(\mathscr{C}(k)\)-enriched category, so it is important to describe both in some detail.
We are in particular interested in putting a model category structure on the dg-category of dg-modules over a dg-category \(\mathscr{T}\). This model structure on \(\mathscr{T}\)-Mod can be explicitly described by declaring a morphism \(f:\mathscr{F}\to\mathscr{G}\) of \(\mathscr{T}\)-modules to be a weak equivalence if, for any \(x\in\mathscr{T}\), the induced morphism \(f_{x}:\mathscr{F}(x)\to\mathscr{G}(x)\) is a quasi-isomorphism. We say that \(f:\mathscr{F}\to\mathscr{G}\) is a fibration if the morphism \(f_{x}\) is a fibration in \(\mathscr{C}(k)\) for every \(x\in\mathscr{T}\). With this structure, and using the fact that \(\mathscr{C}(k)\) is cofibrantly generated with generating cofibrations \(I\) and generating trivial cofibrations \(J\), the dg-category \(\mathscr{T}\)-Mod is also cofibrantly generated, with generating cofibrations given by those morphisms of \(\mathscr{T}\)-modules of the form

\[h^{x}\otimes_{\mathscr{C}(k)}C\overset{id\otimes f}{\longrightarrow}h^{x}\otimes_{\mathscr{C}(k)}D \tag{2.1}\]

where \(x\in\mathscr{T}\) and \(f:C\to D\in I\). As \(\mathscr{T}\)-Mod is a dg-category itself, the internal Hom objects are simply the Hom cochain complexes of \(\mathscr{T}\)-Mod, and the tensoring by \(\mathscr{C}(k)\) is defined degree-wise. Let us denote by \(Ho(\mathscr{M})\) the homotopy category of a model category \(\mathscr{M}\), defined as the localization at the class of weak equivalences \(W\) of \(\mathscr{M}\), in other words \(Ho(\mathscr{M}):=\mathscr{M}[W^{-1}]\). We should also consider the full subcategory \(Int(\mathscr{M})\) of fibrant and cofibrant objects of \(\mathscr{M}\). One important thing to notice about dg-modules over a dg-category \(\mathscr{T}\) is that the dg-modules \(h^{x}\in\mathscr{T}\)-Mod are fibrant, as all cochain complexes \(Hom_{\mathscr{T}}(y,x)\) are fibrant in the model category \(\mathscr{C}(k)\). This implies that the Yoneda embedding of Definition 2.9 factorizes through \(Int(\mathscr{T}-\text{Mod})\).

**Definition 2.12**.: _Let \(\mathscr{T}\) be a dg-category. A dg-module \(\mathscr{F}\in\mathscr{T}^{op}\)-Mod is called quasi-representable if it is equivalent in \(Ho(\mathscr{T}^{op}-\text{Mod})\) to a module of the form \(h^{x}\)._

We denote the collection of such modules by \((\mathscr{T}-Mod)^{rqr}\). We can now move on to describe the Dwyer-Kan model category structure on \(dg-cat_{k}\).

**Definition 2.13**.: _Let \(\mathscr{T},\mathscr{T}^{\prime}\) be dg-categories over \(k\). We say that a dg-functor \(\mathscr{F}:\mathscr{T}\to\mathscr{T}^{\prime}\) is a weak equivalence if_ 1. _It is quasi-fully faithful. This means that the induced cochain complex morphism_ \(Hom_{\mathscr{T}}(x,y)\to Hom_{\mathscr{T}^{\prime}}(\mathscr{F}(x),\mathscr{F}(y))\) _is a quasi-isomorphism for all_ \(x,y\in\mathscr{T}\)_._ 2. _It is quasi-essentially surjective. This means that_ \(H^{0}(\mathscr{F}):H^{0}(\mathscr{T})\to H^{0}(\mathscr{T}^{\prime})\) _is essentially surjective, so for any_ \(x^{\prime}\in H^{0}(\mathscr{T}^{\prime})\)_, there exists_ \(x\in\mathscr{T}\) _such that_ \(\mathscr{F}(x)\simeq x^{\prime}\) _in_ \(H^{0}(\mathscr{T}^{\prime})\)_._ _A dg-functor \(\mathscr{F}:\mathscr{T}\to\mathscr{T}^{\prime}\) is on the other hand a fibration if_ 1. _The induced morphism of complexes_ \(Hom_{\mathscr{T}}(x,y)\to Hom_{\mathscr{T}^{\prime}}(\mathscr{F}(x),\mathscr{F}(y))\) _is a fibration in the model structure of unbounded complexes._ 2.
_For any isomorphism_ \(u^{\prime}:x^{\prime}\to y^{\prime}\in H^{0}(\mathscr{T}^{\prime})\) _and any_ \(y\in H^{0}(\mathscr{T})\) _such that_ \(\mathscr{F}(y)=y^{\prime}\)_, there is an isomorphism_ \(u:x\to y\) _in_ \(H^{0}(\mathscr{T})\) _such that_ \(H^{0}(\mathscr{F})(u)=u^{\prime}\)_._

These classes of morphisms in the category of dg-categories form a model category structure called the Dwyer-Kan model structure on \(dg-cat_{k}\). The homotopy category of this structure, \(dg-cat_{k}[w^{-1}]\) where \(w\) is the class of weak equivalences above, will be denoted by \(H_{qe}\). We say that a dg-functor \(f\in H_{qe}(\mathscr{T},\mathscr{T}^{\prime})\) between two dg-categories \(\mathscr{T}\), \(\mathscr{T}^{\prime}\) is a quasi-functor. Unfortunately the Dwyer-Kan model structure is not a monoidal model category with the tensor product we have discussed. It is however possible to construct a derived tensor product of dg-categories by using a cofibrant replacement. That is, we define a bifunctor

\[\_\otimes^{\mathbb{L}}\_:dg-Cat\times dg-Cat\to dg-Cat\]

by

\[\mathscr{T}\otimes^{\mathbb{L}}\mathscr{T}^{\prime}:=Q(\mathscr{T})\otimes\mathscr{T}^{\prime}\]

where \(Q(\_)\) is a cofibrant replacement and \(\otimes\) is the tensor product of dg-categories. Importantly, the monoidal structure defined by this tensor product is closed, and Toen gives the following characterization of the internal Hom object.

**Theorem 2.14**.: _The monoidal category \((H_{qe},\otimes^{\mathbb{L}})\) is closed and for any two dg-categories \(\mathscr{T},\mathscr{T}^{\prime}\) there is a natural isomorphism in \(H_{qe}\)_

\[\mathbb{R}\underline{Hom}(\mathscr{T},\mathscr{T}^{\prime})\simeq Int(\mathscr{T}\otimes^{\mathbb{L}}\mathscr{T}^{\prime op}-Mod^{rqr})\]

In particular we have that

\[\widehat{\mathscr{T}}:=Int(\mathscr{T}^{op}-Mod)\simeq\mathbb{R}\underline{Hom}(\mathscr{T}^{op},Int(\mathscr{C}(k)))\]

in the homotopy category. Alternatively, we might write \(\mathscr{T}_{pe}\) instead of \(\widehat{\mathscr{T}_{pe}}\), especially in the case when working over a field, where there is no need to consider the \((\widehat{\_})\) operator, or when there is no chance of confusion. Let us also denote by \(\mathbb{R}\underline{Hom}_{c}(\widehat{\mathscr{T}},\mathscr{T}^{\prime})\) the category of continuous dg-functors between the two dg-categories \(\mathscr{T},\mathscr{T}^{\prime}\). By this we mean those functors \(\widehat{\mathscr{T}}\rightarrow\mathscr{T}^{\prime}\) which commute with direct sums when passing to the homotopy category \(H_{qe}\).

**Definition 2.15**.: _We call \(\mathbb{R}\underline{Hom}_{c}(\widehat{T},\widehat{T}^{\prime})\) the dg-category of Morita morphisms from \(\mathscr{T}\) to \(\mathscr{T}^{\prime}\), and perfect Morita morphisms the dg-category \(\mathbb{R}\underline{Hom}(\widehat{T}_{pe},\widehat{T}^{\prime}_{pe})\)._

We can now formulate the main theorem of this section.

**Theorem 2.16**.: _[_10_, Theorem 7.1]_ _Let \(\mathscr{T}\) be a dg-category and let \(y:\mathscr{T}\rightarrow\mathscr{T}^{op}-Mod\) denote the Yoneda embedding. For any other dg-category \(\mathscr{S}\), we have_ 1. _The pullback functor_ \(y^{*}:\mathbb{R}\underline{Hom}_{c}(\widehat{\mathscr{T}},\widehat{\mathscr{S}})\rightarrow\mathbb{R}\underline{Hom}(\mathscr{T},\widehat{\mathscr{S}})\) _is an isomorphism in_ \(H_{qe}\)_._ 2.
_The pullback functor_ \(y^{*}:\mathbb{R}\underline{Hom}(\widehat{\mathscr{T}}_{pe},\widehat{\mathscr{S}}_{pe})\rightarrow\mathbb{R}\underline{Hom}(\mathscr{T},\widehat{\mathscr{S}}_{pe})\) _is an equivalence in_ \(H_{qe}\)_._

Using this result we arrive at the better-known form of the theorem, which can be stated as:

**Corollary 2.17**.: _Let \(\mathscr{T}\) and \(\mathscr{S}\) be two dg-categories; then there exists a natural isomorphism in \(H_{qe}\)_

\[\mathbb{R}\underline{Hom}_{c}(\widehat{\mathscr{T}},\widehat{\mathscr{S}})\simeq\widehat{\mathscr{T}^{op}\otimes^{\mathbb{L}}\mathscr{S}}\]

Before we are able to continue with the concrete cases that interest us, let us quickly recall the notion of dg-enhancements of a triangulated category.

**Definition 2.18**.: _Let \(\mathscr{K}\) be a triangulated category. We say that a dg-category \(\mathscr{T}\) is a dg-enhancement of \(\mathscr{K}\) if there exists a triangulated equivalence_

\[\epsilon:H^{0}(\mathscr{T})\to\mathscr{K}\]

**Definition 2.19**.: _Let \(\mathscr{K}\) be a triangulated category and let \(\mathscr{T}\) and \(\mathscr{S}\) be two enhancements \(\epsilon:H^{0}(\mathscr{T})\to\mathscr{K},\epsilon^{\prime}:H^{0}(\mathscr{S})\to\mathscr{K}\). We say that \(\mathscr{K}\) has a unique enhancement if there is a quasi-functor \(f:\mathscr{T}\to\mathscr{S}\) such that \(H^{0}(f)\) is an equivalence of triangulated categories._

Now let us consider a dg-category \(\mathscr{T}\) and the dg-category \(\mathscr{T}\)-Mod of \(\mathscr{T}\)-modules. It can be shown that the category \(H^{0}(\mathscr{T}-\text{Mod})\) can be equipped with a natural triangulated category structure. We then have:

**Definition 2.20**.: _Let \(\mathscr{T}\) be a dg-category. We say that it is pretriangulated if its image under the functor \(H^{0}(y^{\mathscr{T}}):H^{0}(\mathscr{T})\to H^{0}(\mathscr{T}-Mod)\) is a triangulated subcategory._

**Definition 2.21**.: _If \(\mathscr{T}\) is a dg-category, we let \(\mathscr{T}^{pre-tr}\) be the smallest pretriangulated full dg-subcategory of \(\mathscr{T}-Mod\) containing the image of the Yoneda embedding. In this way we are adding cones, direct sums and all that might be missing from the original dg-category \(\mathscr{T}\)._

Let us denote the triangulated category \(H^{0}(\mathscr{T}^{pre-tr})\) by \(tri(\mathscr{T})\), and by \(perf(\mathscr{T})\) the full subcategory of compact objects in \(tri(\mathscr{T})\).

**Remark 2.22**.: _Given a dg-category \(\mathscr{T}\), the triangulated category \(perf(\mathscr{T})\) is enhanced by \(\widehat{\mathscr{T}_{pe}}\)._

**Definition 2.23**.: _Let \(\mathscr{T}\) be a dg-category and let \(\mathscr{S}\subset\mathscr{T}\) be a full sub dg-category. The quotient \(\mathscr{T}/\mathscr{S}\) is the dg-category with the same collection of objects as \(\mathscr{T}\) and such that for every \(s\in\mathscr{S}\) we add a morphism \(s\to s\) in degree \(-1\) so that \(d(s\to s)=Id_{s}\)._

With this definition of the quotient of dg-categories it is possible to show that \(H^{0}(\mathscr{T})/H^{0}(\mathscr{S})\simeq H^{0}(\mathscr{T}/\mathscr{S})\), and we can now see that the derived category of an abelian category \(\mathscr{A}\) can be enhanced by the quotient \(C(\mathscr{A})/Ac(\mathscr{A})\) (here \(C(\mathscr{A})\) denotes the dg-category of cochain complexes in \(\mathscr{A}\) and \(Ac(\mathscr{A})\) its full subcategory of acyclic complexes). The same can be done for bounded (below, above, and both) derived categories by simply taking the corresponding subcategories of \(C(\mathscr{A})\) and, by consequence, of \(Ac(\mathscr{A})\). We can consider the derived category of a general dg-category \(\mathscr{T}\) (cf. [10]).
Let \(Ac(\mathscr{T})\) be the full dg-subcategory of \(\mathscr{T}-Mod\) consisting of those modules which are acyclic on every object. We have then:

**Definition 2.24**.: _Let \(\mathscr{T}\) be a dg-category; the derived category \(D(\mathscr{T})\) is the quotient \(H^{0}(\mathscr{T}-Mod/Ac(\mathscr{T}))\)._

This derived category is always triangulated as it is a Verdier quotient of the triangulated category \(H^{0}(\mathscr{T}-Mod)\). For a scheme we have that there always exists a dg-enhancement. The question of uniqueness can be deduced from Theorem B in [10]. Now that we have seen that our spaces of geometric nature can be enhanced in great generality, we are still left with the question of whether a triangulated functor can be lifted to a functor between dg-enhancements. To be more explicit:

**Definition 2.25**.: _Let \(\mathscr{K},\mathscr{K}^{\prime}\) be triangulated categories with dg-enhancements \(\mathscr{T},\mathscr{T}^{\prime}\). We say that an exact functor \(\mathscr{F}:\mathscr{K}\to\mathscr{K}^{\prime}\) has a dg-lift if there exists a morphism \(f\in H_{qe}(\mathscr{T},\mathscr{T}^{\prime})\) such that \(H^{0}(f)=F\)._

From now on let us fix and denote by \(Perf_{dg}(X)\) a dg-enhancement of the derived category of perfect complexes over a space \(X\), and similarly we write \(QCoh_{dg}(X)\) for a dg-enhancement of the derived category of quasi-coherent sheaves on \(X\). A consequence of Theorem 2.16 is the following:

**Theorem 2.26**.: _Let \(X,Y\) be two quasi-compact and separated schemes over k, and assume that one is flat over \(k\). Then there exists an isomorphism in \(H_{qe}\)_

\[\mathbb{R}\underline{Hom}_{c}(QCoh_{dg}(X),QCoh_{dg}(Y))\simeq QCoh_{dg}(X\times_{k}Y)\]

As a corollary of this, in the smooth case we have:

**Theorem 2.27**.: _Let \(X\) and \(Y\) be two smooth and proper schemes over k. Then there exists an isomorphism in \(H_{qe}\)_

\[\mathbb{R}\underline{Hom}(Perf_{dg}(X),Perf_{dg}(Y))\simeq Perf_{dg}(X\times_{k}Y)\]

Essentially, what this theorem is telling us is that there is a correspondence in the smooth case between dg-lifts of exact functors between derived categories of perfect complexes and Fourier-Mukai transforms given by kernels in \(Perf_{dg}(X\times_{k}Y)\). Recall that a Fourier-Mukai transform \(F:D^{b}(X)\to D^{b}(Y)\) is a functor equivalent to a functor of the form

\[\mathbb{R}p_{*}(\mathbb{R}q^{*}(\_)\otimes^{\mathbb{L}}K)\]

where \(K\in D^{b}(X\times Y)\) and \(p:X\times Y\to Y\), \(q:X\times Y\to X\) are the projections to each factor.

**Remark 2.28**.: _In general, however, it is not true that exact functors between triangulated categories can be lifted, not even in nice geometric settings. In [11] an explicit example is given of such an exact functor that is not of Fourier-Mukai type._

Our focus in this work is entirely on this geometric setting rather than on arbitrary triangulated categories. It can be shown that the dg-category \(QCoh_{dg}(X)\) enhancing the derived category of quasi-coherent sheaves on a variety \(X\) has a compact generator. This is a useful property as there is an equivalence in \(H_{qe}\)

\[QCoh_{dg}(X)\simeq\widehat{A}_{X}\]

where \(A_{X}\) is a dg-algebra seen as a one-object dg-category.
Using this characterization we can describe the object corresponding to

\[\mathscr{F}\in\mathbb{R}\underline{Hom}(Perf_{dg}(X),Perf_{dg}(Y)).\]

First, by [10, Lemma 8.11] there is an equivalence in \(H_{qe}\)

\[\mathbb{R}\underline{Hom}(A_{X},\widehat{A}_{Y})\simeq\widehat{A_{X}\otimes_{k}^{\mathbb{L}}A_{Y}^{op}}\simeq QCoh_{dg}(X\times_{k}Y).\]

Then to any \(\mathscr{F}\) as above, we associate the dg-bimodule \(M\in\mathbb{R}\underline{Hom}(A_{X},\widehat{A}_{Y,pe})\) given by \(M(*):=\mathscr{F}((\widehat{A}_{X,pe})(*,\_))\). For example, if we let \(\mathscr{F}:A_{pe}\to A_{pe}\in\mathbb{R}\underline{Hom}(A_{pe},A_{pe})\) where \(A\) is a dg-algebra, the \(A_{pe}^{op}\)-module given by \(\mathscr{F}(A)\) has a left \(A\)-action since there is a chain complex morphism

\[A\to End(\mathscr{F}(A))\]

As we can identify the representable module \(\mathscr{T}^{op}(y,\_)\simeq\mathscr{F}(A)\) with the object \(y\in\mathscr{T}^{op}\), which in turn has a right action by \(A\), we obtain our right and left actions by \(A\). Furthermore, we know that this bimodule induces a quasi-functor equivalent to \(\mathscr{F}\) by the assignment

\[M\mapsto\mathscr{F}(A)\otimes M\]

where the tensor product is that of \(\mathscr{F}(A)\) seen as a right module with \(M\in A-Mod\) seen as a left module. This in turn means that there exists an isomorphism, when passing to \(H^{0}(A_{pe})\), between \(\mathscr{F}(A)\otimes M\) and \(\mathscr{F}(M)\) for all \(M\in A_{pe}\). With this idea in mind we will give a characterization of bimodules which produce tensor triangulated category structures at the homotopy level.

### Pseudo dg-tensor structures

Let us take a moment to recall the general construction of tensor products between dg-modules. Let \(\mathscr{T},\mathscr{R},\mathscr{S}\) be cofibrant (in the Dwyer-Kan model structure) dg-categories and let \(\mathscr{F}\in\mathscr{T}\otimes\mathscr{S}^{op}-Mod\), \(\mathscr{G}\in\mathscr{S}\otimes\mathscr{R}-Mod\).

**Definition 2.29**.: _We define the tensor product \(\mathscr{F}\otimes_{\mathscr{S}}\mathscr{G}\in\mathscr{T}\otimes\mathscr{R}-Mod\) as, for any \(t\in\mathscr{T}\), \(r\in\mathscr{R}\), the chain complex calculated as the cokernel of_

\[\bigoplus_{x,y\in\mathscr{S}}\mathscr{F}(t,x)\otimes_{k}\mathscr{S}(y,x)\otimes_{k}\mathscr{G}(y,r)\to\bigoplus_{z\in\mathscr{S}}\mathscr{F}(t,z)\otimes_{k}\mathscr{G}(z,r),\]

_where the morphism takes a homogeneous element \(v\in\mathscr{F}(t,x)\), a homogeneous element \(u\in\mathscr{G}(y,r)\) and a homogeneous morphism \(f\in\mathscr{S}(y,x)\) to the homogeneous element_

\[\mathscr{F}(t,f)(v)\otimes u-(-1)^{|v||u|}v\otimes\mathscr{G}(f,r)(u)\]

With this construction in mind we define the following notion of n-fold dg-bimodule over a dg-category \(\mathscr{T}\).

**Definition 2.30**.: _Let \(\mathscr{T}\) be a dg-category. An n-fold dg-bimodule over \(\mathscr{T}\) is a dg-module \(\mathscr{F}\in\mathscr{T}^{\otimes n}\otimes\mathscr{T}^{op}-Mod\)._

In particular, a 0-fold dg-bimodule is nothing but a \(\mathscr{T}^{op}-\text{module}\) and a 1-fold bimodule is what we usually call a bimodule over \(\mathscr{T}\). A morphism of n-fold dg-bimodules is simply a morphism of dg-modules, and we then have a dg-category denoted by \(Bimod^{n}_{dg}(\mathscr{T})\) with n-fold dg-bimodules as objects and morphism objects given by morphisms of dg-modules.
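As a hedged sanity check on Definition 2.29 (with the Koszul sign written exactly as above), when \(\mathscr{T}=\mathscr{S}=\mathscr{R}\) is a single dg-algebra \(A\) viewed as a one-object dg-category, the cokernel above is the familiar tensor product of a right dg-module \(M\) with a left dg-module \(N\):

\[M\otimes_{A}N\;=\;\operatorname{coker}\Big(M\otimes_{k}A\otimes_{k}N\longrightarrow M\otimes_{k}N\Big),\qquad m\otimes a\otimes n\;\longmapsto\;ma\otimes n\;-\;(-1)^{|m||n|}\,m\otimes an,\]

for homogeneous elements. This is the point of view used below, where n-fold dg-bimodules over a dg-algebra amount to dg-bimodules with several compatible left multiplications.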
Notice that the permutation group \(\Sigma_{n}\) acts on \(Bimod^{n}_{dg}\) by permuting the factors of the tensor product \(\mathscr{T}^{\otimes n}\), and so if \(\mathscr{F}\in Bimod^{n}_{dg}(\mathscr{T})\), \(\sigma\in\Sigma_{n}\) is a permutation, and \(x\to y\in\mathscr{T}\) is a morphism, where this copy of \(\mathscr{T}\) lies in the k-th slot of the product \(\mathscr{T}\otimes\cdots\otimes\mathscr{T}\), then \(x\to y\) induces a morphism of dg-modules so that \(x,y\) now lie in the \(\sigma(k)\)-th slot of \(\sigma\mathscr{F}\). To avoid confusion and ease the reading we follow Hovey's notation and introduce dummy variables to keep track of which \(\mathscr{T}\) factor is being taken into account. For example, if \(\mathscr{F}\) and \(\mathscr{G}\) are 3-fold dg-bimodules and \(\sigma=(31)\), a morphism \(\eta:\mathscr{F}\rightarrow\sigma\mathscr{G}\) is better expressed as

\[\eta:\mathscr{F}_{X,Y,Z}\rightarrow\mathscr{G}_{Z,Y,X}\]

to indicate that the action of any morphism \(x\to y\in\mathscr{T}\) on the first slot is now carried to an action on the third slot of \(\mathscr{G}\). Using the tensor product of bimodules described in 2.29 we see that there exists, for any pair of natural numbers \(n,m\), a way to tensor n-fold bimodules with m-fold bimodules. If \(\mathscr{F}\in Bimod^{n}_{dg}(\mathscr{T})\) and \(\mathscr{G}\in Bimod^{m}_{dg}(\mathscr{T})\), then we form the tensor product \(\mathscr{F}\otimes_{\mathscr{T}}\mathscr{G}\) by using the leftmost \(\mathscr{T}\) factor in \(\mathscr{F}\) with the right \(\mathscr{T}\) factor of \(\mathscr{G}\). In the case we would want to take this tensor product with any of the other \(\mathscr{T}\) factors of \(\mathscr{F}\), we can simply consider a permutation \(\sigma\) which moves the factor we want to tensor with into the leftmost position. When using the notation above we are able to drop \(\sigma\) from our expression, as it is implied by the order of the subindices which permutation we are applying. To keep track of which factor is being used to form the tensor product we will extend the notation for morphisms and write, for a 2-fold dg-bimodule \(\mathscr{F}\) and a 3-fold dg-bimodule \(\mathscr{G}\),

\[\mathscr{F}_{\mathscr{G},X}\otimes\mathscr{G}_{Y,Z,W}\]

to denote that we are forming a 4-fold dg-bimodule by using the first left factor of \(\mathscr{F}\) to produce the tensor product with the right factor of \(\mathscr{G}\). In our particular case we are for the moment only interested in categories of modules over a dg-algebra \(A\) seen as a single-object dg-category, and so what we are describing is simply the theory of dg-bimodules over a dg-algebra with multiple compatible left multiplications; the tensor product described here is just the usual tensor product of right and left modules. We will make use of Theorem 2.16 to show that if we have a functor \(\mathscr{F}:\mathscr{K}^{n}\to\mathscr{K}\) from a product of a triangulated category \(\mathscr{K}\) which is a triangulated functor in each variable, then it is possible to find an appropriate quasi-functor and to associate to it an n-fold dg-bimodule. Having established our notation we now present a homotopical dg-module version of Theorem 2.1 in [10].

**Theorem 2.31**.: _Let \(A\) be a dg-algebra and let \(\boxtimes:H^{0}(A_{pe})\times H^{0}(A_{pe})\to H^{0}(A_{pe})\) be an exact functor in each variable._
_Suppose that for every object \(M\in H^{0}(A_{pe})\), the triangulated functors_

\[M\boxtimes_{-}:H^{0}(A_{pe})\to H^{0}(A_{pe})\]
\[\_\boxtimes M:H^{0}(A_{pe})\to H^{0}(A_{pe})\]

_both have unique dg-enhancements \(R_{M}\) and \(L_{M}\) respectively._ _Then \(L_{A}(A)\) is a 2-fold dg-bimodule and for any \(M,N\in A_{pe}\) we have_

\[H^{0}(L_{A}(A)\otimes M\otimes N)\simeq M\boxtimes N\]

Proof.: We have to make repeated use of Theorem 2.16. First let \(R_{M}\in\mathbb{R}\underline{Hom}(A_{pe},A_{pe})\) be an enhancement of \(M\boxtimes_{-}\); we know by the derived Morita theorem that this quasi-functor corresponds to a bimodule given by \(R_{M}(A)\). We know by the theorem that for any \(N\), \(H^{0}(R_{M}(A)\otimes N)\simeq M\boxtimes N\). Now consider the functor \(\_\boxtimes A\), which by hypothesis has a unique enhancement \(L_{A}\); by using Theorem 2.16 again, we know this quasi-functor corresponds to a dg-bimodule \(L_{A}(A)\) and that for any \(M\) we have \(H^{0}(L_{A}(A)\otimes M)\cong M\boxtimes A\). However, this latter object is isomorphic to \(H^{0}(R_{M}(A)\otimes A)\), and as this is a natural isomorphism in \(M\), the functor \(R\_(A)\otimes A\) defined by \(M\mapsto R_{M}A\otimes A\) is also an enhancement of \(\_\boxtimes A\). By uniqueness of the enhancement of \(\_\boxtimes A\) we then have \(L_{A}(A)\otimes\_\simeq R\_(A)\otimes A\), which corresponds to a right quasi-representable \(A\)-bimodule. However, \(R_{M}(A)\) is already a bimodule for every \(M\); in other words, \(R_{A}(A)\) has two compatible dg-bimodule structures (in the sense that these two actions on the same side by \(A\) commute), and so we can consider it as an object in \(Bimod^{2}_{dg}(A_{pe})\). Finally we obtain the required isomorphism:

\[H^{0}(L_{A}(A)\otimes M\otimes N)\cong H^{0}(R_{M}(A)\otimes N)\cong M\boxtimes N\]

Even though in general the existence of dg-lifts is not guaranteed, as we mentioned before, in our particular context the existence of the lift is only a mild assumption. Indeed, as we are coming from a dg-algebra induced from perfect complexes over a smooth projective variety, it is in that case a consequence of the derived Morita theorem that a triangulated functor is of Fourier-Mukai type if and only if it has a dg-lift. While the condition is a necessary one (it is unknown whether every triangulated functor that we consider is of Fourier-Mukai type), we find this condition not too strong to impose. The uniqueness of the lift, however, is a stronger condition, as we know for sure that there are examples of non-uniqueness of the integral kernels that determine these transforms. With this result in mind, we are finally ready to approach tensor triangulated categories as being induced by 2-fold bimodules together with structure maps and their corresponding coherence conditions. Before going further let us remark the following:

**Remark 2.32**.: _Since any exact functor between derived categories of smooth projective varieties has right and left adjoints, if \(\boxtimes\) is the tensor product of a tensor triangulated category structure on such a triangulated category, the hypothesis of exactness in each variable implies automatically that the symmetric monoidal structure is closed. Let us remark that all the triangulated functors that show up in our context are \(k\)-linear._

Let us present the following definition, in obvious analogy with the usual (lax) symmetric monoidal category axioms:

**Definition 2.33**.: _A pseudo dg-tensor structure in a dg-category \(\mathscr{T}\) consists of the data:_ 1.
_A 2-fold dg-bimodule_ \(\Gamma\in Bimod^{2}_{dg}(\mathscr{T})\)__ 2. _An object_ \(U\in\mathscr{T}^{op}-Mod\) _called the unit._ 3. _Morphisms of dg-bimodules_ \(\alpha_{X,Y,Z}:\Gamma_{X,\Gamma}\otimes\Gamma_{Y,Z}\to\Gamma_{\Gamma,Z}\otimes\Gamma_{X,Y}\in Bimod^{3}_{dg}(\mathscr{T})\)_._ 4. _A morphism of dg-bimodules_ \(\ell_{X}:\Gamma_{U,X}\otimes U\to\mathscr{T}-Mod\in Bimod^{1}_{dg}(\mathscr{T})\)_._ 5. _A morphism of dg-bimodules_ \(r_{X}:\Gamma_{X,U}\otimes U\to\mathscr{T}-Mod\in Bimod^{1}_{dg}(\mathscr{T})\)_._ 6. _A morphism_ \(c_{X,Y}:\Gamma_{X,Y}\to\Gamma_{Y,X}\) _of dg-bimodules._ _We require that the morphisms \(\alpha_{X,Y,Z}\), \(u_{X}\) and \(c_{X,Y}\) are all isomorphisms when passing to the homotopy category \(H^{0}(\mathscr{C}(k))\), for all \(X,Y,Z\in\mathscr{T}^{op}-Mod\). Furthermore we require the following homotopy data satisfying the conditions:_ 1. _(Associativity) A morphism_ \(\eta\in Hom^{-1}(\Gamma_{X,\Gamma}\otimes\Gamma_{Y,\Gamma}\otimes\Gamma_{Z,W},\Gamma_{\Gamma,W}\otimes\Gamma_{\Gamma,Z}\otimes\Gamma_{X,Y})\) _such that_ \(\alpha^{0}_{\Gamma_{X,Y},Z,W}\circ\alpha^{0}_{X,Y,\Gamma_{Z,W}}-\alpha^{0}_{X,Y,Z}\otimes Id_{W}\circ\alpha^{0}_{X,\Gamma_{Y,Z},W}\circ Id_{X}\otimes\alpha_{Y,Z,W}=d(\eta)\)_._ 2. _(Unit) A morphism_ \(\mu\in Hom^{-1}(\Gamma_{X,\Gamma}\otimes\Gamma_{U,Y},\Gamma_{\Gamma,Y}\otimes\Gamma_{X,U})\) _such that_ \(\ell^{0}_{X}\otimes Id_{Y}\circ\alpha^{0}_{X,U,Y}-Id_{X}\otimes\ell^{0}_{Y}=d(\mu)\)__ 3. _(Symmetry) The composition_ \(c_{X,Y}\circ c_{Y,X}\) _is the identity in_ \(H^{0}(\mathscr{T}-Mod)\)_._ 4. _(Unit symmetry) There is_ \(\kappa\in Hom^{-1}(\Gamma_{X,U},X)\) _such that_ \(\ell_{X}\circ c_{X,U}-r_{X}=d(\kappa)\)_._ 5. _(Compatibility between associativity and symmetry) There is_ \(\lambda\in Hom^{-1}(\Gamma_{\Gamma,Z}\otimes\Gamma_{X,Y},\Gamma_{Y,\Gamma}\otimes\Gamma_{Z,X})\) _such that_ \(Id_{Y}\otimes c_{X,Z}\circ\alpha_{X,Y,Z}\circ c_{X,Y}\otimes Id_{Z}-\alpha_{Y,Z,X}\circ c_{X,\Gamma_{Y,Z}}\circ\alpha_{X,Y,Z}=d(\lambda)\)_._ Here \(\mathscr{T}-Mod\) denotes the \(\mathscr{T}-\mathscr{T}\)-bimodule \[(x,y)\mapsto Hom_{\mathscr{T}-Mod}(y,x).\] Our requirement that the structure morphisms \(\alpha,u,c\) are invertible in the homotopy category implicitly implies that the degree \(0\) part of these morphisms is a cycle. These are thus structures that hold only up to homotopy. As the coherence conditions are troublesome to keep track of, let us use our graphical notation to draw the diagrams we require. The associativity diagram is nothing but the pentagon axiom in a monoidal category, except that we do not require the composition to commute on the nose but only up to the homotopy \(\eta\), that is, the difference of the two composites equals \(d(\eta)\). Similarly, we can obtain analogous diagrams corresponding to the rest of the coherence conditions; we hope that the associativity diagram above sufficiently explains the equations from Definition 2.33. We should interpret this diagram as follows: each branch in the tree comes from tensoring by \(\Gamma\), and lets us tensor by a single left \(\mathscr{T}\)-module at each end. **Remark 2.34**.: _We could alternatively present the previous construction from an operadic point of view. See for example [1, Chapter 1] for a presentation of close ideas in this language. The reader should keep in mind that there is an incompatibility in nomenclature._ We will show that given a pseudo dg-tensor structure on a dg-category \(\mathscr{T}\), we can induce a tensor triangulated category structure on \(\widehat{T}_{pe}\) under certain conditions. 
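Before going on, here is a minimal sanity-check instance of Definition 2.33 (this example is ours, stated under the simplifying assumption that everything is concentrated in degree \(0\)): let \(A\) be an ordinary commutative \(k\)-algebra viewed as a one-object dg-category, and take
\[\Gamma=A\in Bimod^{2}_{dg}(A),\qquad U=A,\qquad\Gamma\otimes M\otimes N\simeq M\otimes_{A}N,\]
where the two extra multiplications on \(\Gamma\) commute precisely because \(A\) is commutative. The structure morphisms \(\alpha\), \(\ell\), \(r\), \(c\) may be taken to be the canonical identifications, and all the homotopies \(\eta,\mu,\kappa,\lambda\) may be chosen to be \(0\), since the corresponding diagrams already commute strictly; the induced structure on \(H^{0}\) is the usual tensor product of complexes of \(A\)-modules.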
By abuse of notation and when there is no chance for confusion, we denote by \(\Gamma\) the pseudo dg-tensor structure \((\Gamma,U,\alpha,\ell,r,c)\). An immediate question to consider is how to declare two such structures as equivalent. Let us say that two such pseudo dg-tensor category structures \(\Gamma,\Gamma^{\prime}\) are _tensor triangulated equivalent_ if they induce the same tensor triangulated structure on \(H^{0}(\mathscr{T}-Mod)\). This is a somewhat weak notion as it is entirely possible for two non equivalent 2-fold dg-bimodules to produce this same tensor triangulated structures when passing to the homotopy category, or even just the defining morphisms can be different at higher degrees while inducing the same structure homotopically. This is the reason this lift to the dg-setting has to be thought of as a truncated version. Let us be more precise in our claims above, first we need to introduce the following refinement of our definition above: **Definition 2.35**.: _A pseudo dg-tensor structure \(\Gamma\) in a dg-category \(\mathscr{T}\) is called perfect if the 2-fold dg-bimodule \(\Gamma\in Bimod^{2}_{dg}(\mathscr{T})\) is right quasi-representable and for every \(X,Y\in\mathscr{T}^{op}-Mod\), \(\Gamma\otimes X\otimes Y\) is quasi-represented by a perfect \(\mathscr{T}^{op}\)-module._ **Lemma 2.36**.: _A perfect pseudo dg-tensor structure \(\Gamma\) on a dg-category \(\mathscr{T}\) induces a tensor triangulated category structure on \(H^{0}(\widehat{\mathscr{T}_{pe}})\)._ Proof.: The functor \(H^{0}(\Gamma_{X,Y}\otimes\_\otimes\_) induces a bifunctor on \(H^{0}(\mathscr{T}_{pe})\). Indeed let us denote by \(X\boxtimes Y\) the equivalence class of perfect modules quasi-representing \(\Gamma_{X,Y}\otimes X\otimes Y\), by hypothesis we know \(H^{0}(X\boxtimes Y)\) is a perfect object and fixing \(X\) or \(Y\) we get a quasi-representable dg-bimodule which induces triangulated functors \[\_\boxtimes Y:H^{0}(\mathscr{T}_{pe})\to H^{0}(\mathscr{T}_{pe})\] and \[X\boxtimes\_:H^{0}(\mathscr{T}_{pe})\to H^{0}(\mathscr{T}_{pe})\] Using the dg-bimodule morphisms \(\alpha_{X,Y,Z}\), \(\ell_{X}\), \(r_{X}\), \(c_{X,Y}\) we obtain morphisms \(X\boxtimes(Y\boxtimes Z)\to(X\boxtimes Y)\boxtimes Z\), \(U\boxtimes X\to X\), \(X\boxtimes U\to X\), and \(X\boxtimes Y\to Y\boxtimes X\). The equations in Definition 2.15 encode the coherence conditions for this monoidal category with product functor \(\boxtimes\). For example, the associativy can be seen from the pentagon diagram 2.2 as explained above. Indeed, the condition that they must commute up homotopy means that when passing to the homotopy category \(H^{0}(\widehat{T}_{pe})\) they will commute in the usual sense. We have to remark a couple things. The first one is that for us, it is necessary to include both right and left unit conditions in the pseudo dg-tensor structure even as if the existence of \(c_{X,Y}\) implies that one can obtain one from the other in the 1-categorical setting. In our case however it is necessary to keep track of them as separate entities. The second thing to mention is that seeing our lemma as a dg-version of Theorem 2.3 of [10] we need to remark that the converse does not hold as-is. Indeed as liftings of objects and morphisms in a triangulated category to a dg-enhancement are far from being unique we cannot expect to have unique -up to isomorphism- dg-bimodules, structure maps and homotopies inducing a certain tensor triangulated category. 
As in the abelian case we can too encode what a lift of a triangulated tensor endofunctor would be in our dg-setting, and as such a morphism between perfect pseudo dg-tensor structures on a given dg-category \(\mathscr{T}\). **Definition 2.37**.: _Let \(\mathscr{T}\) be a dg-category and let \(\Gamma\) and \(\Lambda\) be perfect pseudo dg-tensor structures on \(\mathscr{T}_{pe}\) with units \(U\) and \(U^{\prime}\) respectively. A pseudo dg-tensor functor between \(\Gamma\) and \(\Lambda\) consists of,_ 1. _A dg-bimodule_ \(\Phi\)__ 2. _A morphism of dg-modules_ \(u:U\to\Phi\otimes U^{\prime}\)__ 3. _A morphism of dg-bimodules_ \(f:\Gamma_{\Phi,\Phi}\otimes\Phi\otimes\Phi\to\Phi\otimes\Lambda\)__ _Such that these two morphisms are isomorphisms when passing to \(H^{0}\). Furthermore, we need the following coherence conditions_ 1. _There exists_ \(w\in Hom^{-1}(\Gamma_{\Phi,U^{\prime}}\otimes\Phi\otimes U,X)\) _such that_ \(Id_{\Phi}\otimes\ell\circ f\circ Id_{\Gamma}\otimes Id_{\Phi}\otimes T_{U,\Phi }\circ Id_{\Gamma}\otimes u\otimes Id_{\Phi}-\ell\otimes Id_{X}=d(w)\)__ 2. _There exists_ \(e\in Hom^{-1}(\Gamma_{\Phi,\Phi}\otimes\Phi\otimes\Phi,\Phi\otimes\Lambda)\) _such that_ \(Id_{\Phi}\otimes c_{\Lambda}\circ f-f\circ c_{\Gamma}\otimes T_{\Phi,\Phi}=d(e)\)__ 3. _There exists_ \(a\in Hom^{-1}(\Gamma_{\Phi,\Gamma}\otimes\Gamma_{\Phi,\Phi}\otimes\Phi \otimes\Phi\otimes\Phi,\Phi\otimes\Lambda\otimes\Lambda)\) _such that_ \(f\circ T_{\Phi\otimes\Lambda,\Phi}\circ Id_{\Gamma}\otimes f\otimes Id_{\Phi} \circ\alpha_{\Phi,\Phi}\otimes T_{\Phi,\Phi}-f\circ T_{\Phi\otimes\Lambda, \Phi}\circ f\otimes Id_{\Lambda}\otimes Id_{\Phi}\otimes\alpha=d(a)\)__ The structure morphisms and coherence conditions above in the definition are nothing but the structural morphisms and coherence conditions of a monoidal functor with the equivalent underlying category written in terms of bimodules and morphisms between them, with the only difference as in the structure of a pseudo dg-tensor structure being that we have to specify a given homotopy. The proof of the following is straightforward: **Lemma 2.38**.: _Let \(\mathscr{T}\) be a dg-category and let \(\Gamma\) and \(\Lambda\) be perfect pseudo dg-tensor structures on \(\mathscr{T}_{pe}\). Then a pseudo dg-tensor functor \(\Phi\) from \(\Gamma\) to \(\Lambda\) induces a tensor triangulated functor \(\mathscr{F}_{\Phi}:H^{0}(\mathscr{T}_{pe},\Gamma)\to H^{0}(\mathscr{T}_{pe},\Lambda)\), where \(H^{0}(\mathscr{T}_{pe},\Gamma)\) and \(H^{0}(\mathscr{T}_{pe},\Lambda)\) denote the tensor triangulated categories induced by \(\Gamma\) and \(\Lambda\) respectively_ Proof.: We saw from Lemma 2.36 that \(\Gamma\) and \(\Lambda\) produces tensor triangulated structures on \(H^{0}(\mathscr{T}_{pe})\) and by Toen's Morita theorem, a triangulated functor \(\mathscr{T}_{pe}\to\mathscr{T}_{pe}\) corresponds to a \(\mathscr{T}\)-bimodule \(\Phi\). The structural morphisms \(u\) and \(f\) of \(\Phi\) being isomorphisms in \(H^{0}(\mathscr{T}_{pe})\), and the coherence conditions imply that the induced functor between triangulated categories is a symmetric monoidal functor. Composition of functors corresponds to tensor product of bimodules. 
There exists a canonical identity pseudo dg-tensor functor which is given by the dg bimodule \(\mathscr{T}-Mod\) (meaning the dg-bimodule which sends \(X\in\mathscr{T}^{op},Y\in\mathscr{T}\) to \(\mathscr{T}(X,Y)\)) and structural and coherence conditions all given by the canonical isomorphisms \(U\to\mathscr{T}-Mod\otimes U^{\prime}\) and \(\Gamma\otimes\mathscr{T}-Mod\otimes\mathscr{T}-Mod\to\mathscr{T}-Mod\otimes\Gamma\). Two pseudo dg-tensor functors \(\Phi\) and \(\Phi^{\prime}\) are said to be equivalent if there exists a morphism of bimodules \(\Phi\to\Phi^{\prime}\) such that \(H^{0}(\Phi)\to H^{0}(\Phi^{\prime})\) is an isomorphism and is compatible with the morphisms \(u\) and \(f\) in the obvious homotopical sense. We do not describe these natural transformations in detail as we will not be needing the coherence conditions of them but only knowledge that the usual diagrams commute up to a given homotopy. Previously we said that two perfect pseudo dg-tensor structures were tensor triangulated equivalent if they gave rise to equivalent tensor triangulated structures. We have the following lemma that compares the two equivalence notions: **Lemma 2.39**.: _Let \(\mathscr{T}\) be a dg-category, two pseudo dg-tensor structures \(\Gamma\), \(\Lambda\) are tensor triangulated equivalent if there exists a pseudo dg-tensor functor \(\Phi\) from \(\Gamma\) to \(\Lambda\) given by a dg-bimodule which is invertible under the tensor product of bimodules._ Proof.: Suppose the bimodule \(\Phi\) is invertible under the tensor product of dg-bimodules, so there exists \(\Phi^{\prime}\) such that \(\Phi\otimes\Phi^{\prime}\) is the identity \(\mathscr{T}-Mod\) as a bimodule. If \(U\) and \(U^{\prime}\) are the units of \(\Gamma\) and \(\Lambda\) respectively, then we have equivalences \(U\to\Phi\otimes\Phi^{\prime}\otimes U\), and \(\Gamma\otimes(\Phi\otimes\Phi^{\prime})\otimes(\Phi\otimes\Phi^{\prime})\to( \Phi\otimes\Phi^{\prime})\otimes\Lambda\). Which are equivalent then to giving an equivalence \(U\to U^{\prime}\) and \(\Gamma\to\Lambda\), and so for any \(X,Y\in\mathscr{T}_{pe}\), \(\Gamma\otimes X\otimes Y\simeq\Lambda\otimes X\otimes Y\), and similarly for the condition about the unit, and then \(H^{0}(\Gamma\otimes X\otimes Y)\cong H^{0}(\Lambda\otimes X\otimes Y)\). As pointed out before, it is entirely possible that the structures differ at higher degrees and we only need the existence of pseudo dg-tensor functors \(\Phi\) and \(\Phi^{\prime}\) such that \(H^{0}(\Phi\otimes\Phi^{\prime})\simeq H^{0}(\mathscr{T}-Mod)\). Let us illustrate what we have so far with an example **Example 2.1**.: _Let \(X\) be a smooth projective variety and let us suppose that \(D^{b}(X)\) has a full strong exceptional collection \(\{E_{1},\ldots,E_{m}\}\). In this case as we know, the object_ \[E:=\bigoplus E_{i}\] _Is a compact generator and we have thus a homotopy equivalence of dg-categories_ \[End(E)-Mod_{dg}\simeq D^{b}_{dg}(X)\] _between the dg-category of dg-modules over \(E\) and a dg-enhancement of \(D^{b}(X)\). As the exceptional collection is strong there are no higher Ext groups and so this endomorphism algebra is supported in degree 0. Furthermore, we have the following description of this algebra_ \[\begin{pmatrix}M_{1,1}&M_{1,2}&\ldots&M_{1,m}\\ 0&M_{2,2}&\ldots&M_{2,m}\\ \vdots&0&\ddots&\vdots\\ 0&0&\ldots&k\end{pmatrix}\] _Where \(M_{ii}=k\), and \(M_{ij}\) is a right module over \(M_{jl}\) for every \(l\) and a left module over \(M_{il}\) for any \(l\). 
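For the example of \(\mathbb{P}^{1}\) treated below, the entries of this matrix algebra can be made completely explicit; as a short sketch (using only the standard cohomology of line bundles on \(\mathbb{P}^{1}\)):
\[Hom(\mathscr{O},\mathscr{O})\cong k,\qquad Hom(\mathscr{O},\mathscr{O}(1))\cong H^{0}(\mathbb{P}^{1},\mathscr{O}(1))\cong k^{2},\]
\[Hom(\mathscr{O}(1),\mathscr{O})\cong H^{0}(\mathbb{P}^{1},\mathscr{O}(-1))=0,\qquad Hom(\mathscr{O}(1),\mathscr{O}(1))\cong k,\]
so that \(End(\mathscr{O}\oplus\mathscr{O}(1))\) is the upper triangular algebra \(\begin{pmatrix}k&k^{2}\\ 0&k\end{pmatrix}\); the \(k\) and \(k^{2}\) entries appearing in the displays below arise from exactly these \(Hom\) spaces (their placement in the matrices depends on the left/right module conventions chosen).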
As we can think of this dg-algebra concentrated in degree 0 as a regular k-algebra, we know that the category of dg-modules over it is simply the category of \(E\)-chain complexes. The usual derived tensor product of \(D^{b}(X)\) can be lifted to a 2-fold dg-bimodule over \(E\) and it corresponds to the dg-bimodule given by_ \[Hom(E\otimes_{X}^{\mathbb{L}}E,E)\cong\bigoplus Hom(E_{i}\otimes_{X}^{\mathbb{L}}E_{j},E)\cong\bigoplus_{i,j,l}Hom(E_{i}\otimes_{X}^{\mathbb{L}}E_{j},E_{l})\] _The best case scenario we can expect is for the variety to have a Picard group isomorphic to \(\mathbb{Z}\) and the full strong exceptional collection to be composed of line bundles, in which case one might have a good chance of describing the 2-fold dg-bimodule \(\Gamma\) corresponding to \(\otimes_{X}^{\mathbb{L}}\)._ _Let us consider an example of this situation and put \(X=\mathbb{P}^{1}\), using Beilinson's exceptional collection \(\{\mathscr{O}_{X},\mathscr{O}_{X}(1)\}\). The 2-fold bimodule would then correspond to:_ \[\cdots\to 0\to\begin{pmatrix}0&0\\ 0&0\\ 0&0\\ k&0\end{pmatrix}\to\begin{pmatrix}k&k^{2}\\ 0&k\\ 0&k\\ 0&0\end{pmatrix}\to 0\to\ldots\] _centered in degree 0. Similarly we can calculate the unit object \(U\), which corresponds to_ \[\begin{pmatrix}k&k^{2}\\ 0&0\end{pmatrix}.\] Following Hovey's paper we can deduce a few things about the classification of tensor triangulated categories on \(H^{0}(A_{pe})\) based on conditions on our dg-algebra \(A\). As an application of our encoding of tensor triangulated categories through perfect pseudo dg-tensor structures at a dg-enhancement we have the following result, which is a derived version of a result of Hovey ([11, Prop 4.1]). **Definition 2.40**.: _Let \(\mathscr{P}\) be a class of morphisms of chain complexes. We say that it is homotopically replete if, whenever \(f\in\mathscr{P}\) and \(f\) is compared with a morphism \(g\) by a commutative square in which the vertical morphisms induce homotopy equivalences, the morphism \(g\) is in \(\mathscr{P}\)._ **Proposition 2.41**.: _Let \(\mathscr{T}\) be a dg-category, and \(\Gamma\) a perfect pseudo dg-tensor structure. Let \(\mathscr{P}\) be a homotopically replete class of morphisms of chain complexes such that if \(f\in\mathscr{P}\) then \(X\otimes f\) and \(f\otimes Y\), as morphisms of the underlying complexes, are in \(\mathscr{P}\) for any pair of dg-modules \(X,Y\). If \(f\) is a morphism of left \(\mathscr{T}\)-modules then \(f\in\mathscr{P}\) if and only if the morphism \(\Gamma\otimes f\) of the underlying complexes is in \(\mathscr{P}\). Similarly, if \(g\) is a morphism of right \(\mathscr{T}\)-modules, then \(g\in\mathscr{P}\) if and only if \(g\otimes\Gamma\) is in \(\mathscr{P}\)._ Proof.: We have that \(g\simeq\mathscr{T}-Mod\otimes g\); hence, as the morphism \(\Gamma\otimes U\otimes g\to\mathscr{T}-Mod\otimes g\) induces an isomorphism at the \(H^{0}\)-level, \(\Gamma\otimes U\otimes g\) is in \(\mathscr{P}\). Similarly we have \(\Gamma\otimes g\otimes U\). Since \(\Gamma\otimes g\) is in \(\mathscr{P}\) by hypothesis, so is \(g\). The proof for the other structure on the left is similar. With this result in hand, it can be shown that **Corollary 2.42**.: _Let \(\mathscr{T}\) be a dg-category and let \(\Gamma\) be a perfect pseudo dg-tensor structure on \(\mathscr{T}\). Let \(f\) be a morphism of \(\mathscr{T}\)-Modules and \(g\) a morphism of \(\mathscr{T}^{op}\)-Modules. Then_ 1. \(f\otimes\Gamma\simeq 0\) _if and only if_ \(f\simeq 0\)_. Similarly,_ \(\Gamma\otimes g\simeq 0\) _if and only if_ \(g\simeq 0\)__ 2. 
\(H^{0}(f\otimes\Gamma)\) _is an isomorphism if and only if_ \(H^{0}(f)\) _is an isomorphism. Similarly for_ \(g\)_._ 3. \(H^{0}(f\otimes\Gamma)\) _is a surjection if and only if_ \(H^{0}(f)\) _is a surjection._ Proof.: This follows from the fact that morphisms \(f\simeq 0\), homotopy equivalences and homotopy surjections form a homotopically replete class. As a corollary to this we then have **Corollary 2.43**.: _Let \(A\) be a dg-algebra and let \(\Gamma\) be a perfect pseudo dg-tensor structure. Then \(\Gamma\) is faithful as an \(H^{0}(A)\)-module with either multiplication structure._ Proof.: We let \(a\in H^{0}(A)\) be a nonzero class. Then this induces a morphism \(A\to A\) of dg-modules given by multiplication, which in turn induces a morphism \(\Gamma\otimes H^{0}(A)\to\Gamma\otimes H^{0}(A)\). The fact that \(a\) is not the zero class implies that the induced morphism is also not homotopically zero, and thus \(\Gamma\) is faithful as an \(H^{0}(A)\)-module. ## 3. Davydov-Yetter cohomology In [11, 20] Davydov and Yetter independently introduced the concept of a deformation for tensor structures, one directly by deforming certain structural properties of the tensor category and the other by deforming monoidal functors and the structural conditions they must satisfy. In this case the deformation of the identity functor seen as a monoidal functor takes the place of the deformation theory of the tensor structure directly. Davydov-Yetter cohomology mainly parametrizes deformations of the associativity structural isomorphisms and provides obstructions for these deformations. One important result that is obtained via this theory is Ocneanu's rigidity theorem, which establishes that fusion categories, which geometrically one can think of as corresponding to a collection of points, admit no deformations of their tensor structures. We quickly review the classical Davydov-Yetter cohomology construction for tensor categories and then we extend it to the case of categories of dg-modules over a dg-algebra \(A\) equipped with a perfect pseudo dg-tensor structure. We will see that deformations of the associativity structural morphism can be described in an analogous manner to the abelian situation. We finish the section with an investigation of the general deformation problem of tensor triangulated structures via our dg-enhancements and see how they relate to our extended Davydov-Yetter cohomology. For the remainder of this article, and unless we say otherwise, \(k\) denotes an algebraically closed field of characteristic \(0\). ### The abelian case Let us start by recalling a number of basic definitions from the theory of tensor categories. We refer to the canonical reference [1]. **Definition 3.1**.: _A tensor category is a k-linear rigid abelian monoidal category with biexact tensor product. We say that it is finite if it is equivalent to a category of finite-dimensional representations of a finite dimensional algebra._ To a given finite tensor category we will assign a chain complex in the following way. Throughout this section \(\otimes\) denotes the monoidal product of a tensor category \(\mathscr{A}\). 
For any collection of objects \(X_{1},\dots,X_{n}\in\mathscr{A}\) we denote by \(\otimes^{n}(X_{1},\dots,X_{n})\) the full right parenthesization \[X_{1}\otimes(X_{2}\otimes(\dots(X_{n-1}\otimes X_{n})\dots))\] Similarly we denote by \({}^{n}\otimes\) the full left parenthesization \[((\dots(X_{1}\otimes X_{2})\dots)\otimes X_{n-1})\otimes X_{n}\] For \(n=1\) this assignment is simply the identity \(\mathscr{A}\to\mathscr{A}\), and for \(n=0\), \(\otimes^{0}\) is the constant functor determined by the monoidal unit in \(\mathscr{A}\). Given any parenthesization \(X\) of a product of a collection of objects \(X_{1},\dots,X_{n}\) there always exists a sequence of isomorphisms from \(X\) to \(\otimes^{n}(X_{1},\dots,X_{n})\) and to \({}^{n}\otimes(X_{1},\dots,X_{n})\). Given any coherent morphism \(f\) between parenthesizations of objects \(X_{1},\dots,X_{n}\), we will write \(\underline{f}\) to denote the morphism \(f\) composed and precomposed with the coherent associativity isomorphisms. For example, if \(f:X_{1}\otimes((X_{2}\otimes X_{3})\otimes X_{4})\to(X_{1}\otimes X_{2})\otimes(X_{3}\otimes X_{4})\) then \[\underline{f}:((X_{1}\otimes X_{2})\otimes X_{3})\otimes X_{4}\to X_{1}\otimes((X_{2}\otimes X_{3})\otimes X_{4})\] \[\to(X_{1}\otimes X_{2})\otimes(X_{3}\otimes X_{4})\to X_{1}\otimes(X_{2}\otimes(X_{3}\otimes X_{4}))\] This turns any such coherent morphism into a natural transformation in \(Nat({}^{n}\otimes,\otimes^{n})\). The idea behind this operation is that we can turn any morphism between different parenthesizations of a collection of objects \(X_{1},\dots,X_{n}\) into an element of the abelian group of morphisms \({}^{n}\otimes(X_{1},\dots,X_{n})\to\otimes^{n}(X_{1},\dots,X_{n})\). We however still need to keep track of the sign of this padding by associators, and so the morphism \(\underline{f}\) comes with the signature \((-1)^{|sgn(f)|}\) where \(sgn(f)\) denotes the minimal number of associators \(\alpha_{X,Y,Z}\) necessary to take the morphism \(f\) into a morphism \(f:^{n}\otimes(X_{1},\dots,X_{n})\to\otimes^{n}(X_{1},\dots,X_{n})\). For example the morphism 3.1 has sign equal to \(2\) as we need to compose by the inverse of \(Id_{X_{1}}\otimes\alpha_{X_{2},X_{3},X_{4}}\) and then by \(\alpha_{X_{1}\otimes X_{2},X_{3},X_{4}}\). **Definition 3.2**.: _Let \(\mathscr{A}\) be a finite tensor category, the Davydov-Yetter complex \(DY^{*}(\mathscr{A})\) is the chain complex defined in degree \(n\) by \(Nat({}^{n}\otimes,\otimes^{n})\). If \(f\in DY^{n}\) is a homogeneous element, the differential \(d^{n}:DY^{n}\to DY^{n+1}\) is defined, on a set of objects \(X_{1},\dots,X_{n+1}\), by_ \[d^{n}(f):=Id_{X_{1}}\otimes f_{X_{2},\dots,X_{n+1}}+\Sigma_{i}(-1)^{i}f_{X_{1},\dots,X_{i}\otimes X_{i+1},\dots,X_{n+1}}+(-1)^{n+1}f_{X_{1},\dots,X_{n}}\otimes Id_{X_{n+1}}\] It is a routine calculation to see that \(d^{2}=0\) and so this forms a chain complex. 
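As a concrete low-degree check of this formula (a short verification we spell out here, suppressing the associator padding), take \(f\in DY^{1}\), that is, a natural transformation \(f_{X}:X\to X\). Then
\[d^{1}(f)_{X,Y}=Id_{X}\otimes f_{Y}-f_{X\otimes Y}+f_{X}\otimes Id_{Y},\]
and applying the formula once more,
\[d^{2}(d^{1}f)_{X,Y,Z}=Id_{X}\otimes d^{1}(f)_{Y,Z}-d^{1}(f)_{X\otimes Y,Z}+d^{1}(f)_{X,Y\otimes Z}-d^{1}(f)_{X,Y}\otimes Id_{Z}=0,\]
since after expanding, each of the six expressions \(Id_{X}\otimes Id_{Y}\otimes f_{Z}\), \(Id_{X}\otimes f_{Y\otimes Z}\), \(Id_{X}\otimes f_{Y}\otimes Id_{Z}\), \(f_{X\otimes Y\otimes Z}\), \(f_{X\otimes Y}\otimes Id_{Z}\) and \(f_{X}\otimes Id_{Y}\otimes Id_{Z}\) occurs exactly once with each sign (using \(Id_{X\otimes Y}=Id_{X}\otimes Id_{Y}\)).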
In degree \(3\), for example, the component of the Davydov-Yetter complex consists of natural transformations \[f_{X_{1},X_{2},X_{3}}:(X_{1}\otimes X_{2})\otimes X_{3}\to X_{1}\otimes(X_{2}\otimes X_{3})\] and the differential \(d^{3}(f)_{X_{1},X_{2},X_{3},X_{4}}\) is given by \[Id_{X_{1}}\otimes f_{X_{2},X_{3},X_{4}}-f_{X_{1}\otimes X_{2},X_{3},X_{4}}+f_{X_{1},X_{2}\otimes X_{3},X_{4}}-f_{X_{1},X_{2},X_{3}\otimes X_{4}}+f_{X_{1},X_{2},X_{3}}\otimes Id_{X_{4}}\] With these definitions we can now define the Davydov-Yetter cohomology for finite tensor categories **Definition 3.3**.: _Let \(\mathscr{A}\) be a finite tensor category, the Davydov-Yetter cohomology \(HDY^{*}(\mathscr{A})\) is the cohomology of the Davydov-Yetter complex \((DY^{*},d^{*})\)_ Let us see by a hand calculation what the third cohomology group looks like **Example 3.1**.: _The kernel of \(d^{3}\) is composed of those natural transformations \(f_{X_{1},X_{2},X_{3}}\) such that_ \[Id_{X_{1}}\otimes f_{X_{2},X_{3},X_{4}}-f_{X_{1}\otimes X_{2},X_{3},X_{4}}+f_{X_{1},X_{2}\otimes X_{3},X_{4}}-f_{X_{1},X_{2},X_{3}\otimes X_{4}}\] \[+f_{X_{1},X_{2},X_{3}}\otimes Id_{X_{4}}=0\] _While the image of \(d^{2}\) consists of the transformations of the form_ \[Id_{X_{1}}\otimes f_{X_{2},X_{3}}-f_{X_{1}\otimes X_{2},X_{3}}+f_{X_{1},X_{2}\otimes X_{3}}-f_{X_{1},X_{2}}\otimes Id_{X_{3}}\] As we said, we are interested in deformations of the associativity constraint of a finite tensor category \(\mathscr{A}\), which is a coherent morphism \(a_{X_{1},X_{2},X_{3}}:(X_{1}\otimes X_{2})\otimes X_{3}\to X_{1}\otimes(X_{2}\otimes X_{3})\). We have then to say what we mean precisely by a deformation of this structure. In general the idea is that we should replace the coefficient ground field \(k\) by an algebra that we interpret as representing a small neighborhood of it. Classically in the deformation theory of algebras we are interested in infinitesimal deformations of order n, so we replace \(k\) with the local ring \(k[x]/x^{n+1}\). We think of this ring as an augmented k-algebra, and so equipped with a morphism \(k[x]/x^{n+1}\to k\) in the usual way. We always have a morphism in the other direction \(i:k\to k[x]/x^{n+1}\). **Definition 3.4**.: _An nth order deformation of a finite tensor category \(\mathscr{A}\) over a field \(k\) is a finite tensor category \(\{\!\!\{\mathscr{A}\}\!\}\) over \(k[x]/x^{n+1}\) such that there is a monoidal equivalence \(\mathscr{F}\) to \(\mathscr{A}\) after restriction of scalars \(\_\otimes k\) on \(Hom\) objects._ **Definition 3.5**.: _We say that two deformations are equivalent if there is a monoidal equivalence between them whose underlying functor is the identity functor and which becomes the identity monoidal equivalence after restriction of scalars._ In other words, as \(Hom\) objects of the deformation \(\{\!\!\{\mathscr{A}\}\!\}\) are modules over \(k[x]/x^{n+1}\), the restriction of scalars functor given by tensoring by \(k\) over \(k[x]/x^{n+1}\) gives us a \(k\)-vector space. To classify deformation classes of the associativity (for example) means that we are looking to classify associativity constraints with coefficients in \(k[x]/x^{n+1}\) which reduce to our original monoidal structure, and then to classify the possible monoidal equivalences between them in the sense that we defined above. 
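To indicate schematically how the classification below arises (a heuristic computation, suppressing all associator padding and signs), write a first-order deformation of the associativity constraint as
\[\{\!\!\{a\}\!\}_{X_{1},X_{2},X_{3}}=a_{X_{1},X_{2},X_{3}}+x\,f_{X_{1},X_{2},X_{3}},\qquad x^{2}=0.\]
Substituting this into the pentagon identity and using that \(a\) itself satisfies the pentagon, the order-\(0\) terms cancel and the order-\(1\) terms in \(x\) assemble into precisely the cocycle condition of Example 3.1, \(d^{3}(f)=0\); changing the deformation by an equivalence as in Definition 3.5 modifies \(f\) by an element of the image of \(d^{2}\). This is the content of the theorem that follows.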
To say that we are deforming the associativity condition means that we are looking at deformations such that all the structural morphisms are given by extension of scalars except possibly for the associativity conditions. So for example deforming only the associativity condition means that the symmetry condition \(X\otimes Y\to Y\otimes X\) in the deformation is exactly the extension by scalars given by \(k[x]/x^{n}\to k\). **Theorem 3.6**.: _[_1_, Theorem 2.2]_ _Let \(\mathscr{A}\) be a finite tensor category, then there is a bijection between the 3rd Davydov-Yetter cohomology \(HDY^{3}(\mathscr{A})\) and equivalence classes of first order deformations of the associativity condition of \(\mathscr{A}\)._ A fact is that the obstruction to these deformations to extend to infinitesimal deformations of second degree seem to be given by the 4th Davydov-Yetter cohomology group (cf. [1]). When the category is semisimple we can say something about the deformation theory in this case: **Theorem 3.7**.: _(Ocneanu's rigidity theorem) Let \(\mathscr{A}\) be a semisimple finite tensor category, then \(HDY^{*}(\mathscr{A})=0\)._ It is important to mention is that the hypothesis of the theorem cannot be omitted, as there exist explicit examples of non semisimple finite tensor categories with nontrivial Davydov-Yetter cohomology groups. In [1, Proposition 3.1] it is shown that the collection of tensor autoequivalences of a finite tensor category is an affine algebraic group over \(k\). Concretely, **Theorem 3.8**.: _Let \(\mathscr{A}\) be a finite tensor category of \(k\). Then \(Aut(\mathscr{A})\) has a natural structure of an affine algebraic group over \(k\)._ Our goal for the following is to define a Davydov-Yetter cohomology for tensor triangulated structures and study some of their properties in light of the work done in the abelian case. ### Deforming tensor triangulated structures In the previous section we saw how to construct a complex such that its cohomology behaves as the tangent space of a finite tensor category, in particular with respect to the associativity structural morphisms. Here we will construct a similar complex by using our perfect pseudo dg-tensor structures (see 2.33). From now on we will be interested in working with the category smooth proper derived noncommutative schemes (see 2.1) \(NCSch_{sm}^{pr}\). As we have a homotopical equivalence between the derived categories of interest and a subcategory of such objects we exploit this to work our definitions at the dg-enhancement level. In this sense, whenever we refer to a dg-category \(\mathscr{T}\) it is understood to be an object in \(NSch_{sm}^{pr}\). We remark too that in general when working with dg-categories and dg-categories of dg-modules the change of basis operation \(\_\otimes^{\mathbb{L}}A\) for a dg-algebra \(A\) does not in general preserve categories of perfect complexes, in the following however we try to keep notation as simple as possible and so we write \(\otimes A\) for the operation \(\_\otimes^{\mathbb{L}}A_{pe}\) which takes a dg-category and sends it to the dg-category tensored by the dg-algebra \(A\) (in the homotopy category of dg-categories using the derived tensor product) after passing by its perfect closure. Let us now commence to define our deformation complexes for our lifted tensor triangulated structures. **Definition 3.9**.: _Let \(\mathscr{T}\) be a dg-category and \(\Gamma\) a perfect pseudo dg-tensor structure. 
For any \(n\in\mathbb{N}\) we denote by \({}^{n}\Gamma\) the full left parenthesization of the 2-fold dg-bimodule \(\Gamma\). This means we use the first multiplication to tensor \(\Gamma\) with itself n-1 times by using this multiplication structure repeatedly._ _This means that if_ \(X_{1},\dots,X_{n}\in\mathscr{T}_{pe}\) _then when taking tensor products we have_ \[{}^{n}\Gamma:X_{1},\dots,X_{n}\mapsto\Gamma_{\Gamma,X_{1}}\otimes\Gamma_{\Gamma,X_{2}}\otimes\Gamma_{\Gamma,X_{3}}\otimes\dots\otimes\Gamma_{X_{n-1},X_{n}}\otimes X_{n}\otimes X_{n-1}\] _Similarly we have_ \(\Gamma^{n}\)_, which for a collection_ \(X_{1},\dots,X_{n}\in\mathscr{T}_{pe}\) _is given by_ \[\Gamma^{n}:X_{1},\dots,X_{n}\mapsto\Gamma_{X_{1},\Gamma}\otimes X_{1}\otimes\Gamma_{X_{2},\Gamma}\otimes\dots\otimes\Gamma_{X_{n-1},X_{n}}\otimes X_{n-1}\otimes X_{n}\] **Definition 3.10**.: _Let \(\Gamma\) be a perfect pseudo dg-tensor structure for the dg-algebra \(A\) associated to a tensor triangulated structure on \(H^{0}(A_{pe})\). Let \(DY^{*,*}_{dg}\) be the double chain complex defined by_ \[DY^{n,*}_{dg}(A):=Bimod^{n}_{dg}(^{n-1}\Gamma,\;\Gamma^{n-1})^{*-1}\] _This is the complex of morphisms of dg-bimodules between the n-fold bimodule given by the totally left parenthesized product of \(\Gamma\) with itself and the n-fold bimodule given by the totally right parenthesized product of \(\Gamma\). The vertical differential \(d^{n,m}_{v}:DY^{n,m}_{dg}(A)\to DY^{n,m+1}_{dg}(A)\) is the differential given by the \(Bimod(\;,\;)\) dg-functor. The horizontal differential \(d^{n,m}_{h}:DY^{n,m}_{dg}(A)\to DY^{n+1,m}_{dg}(A)\) is given, for \(\eta^{*}\in DY^{n,m}_{dg}(A)\) and a collection \(X_{1},\ldots,X_{n+1}\in A_{pe}\), by_ \[\begin{array}{c}d^{n,m}_{h}(\eta)_{X_{1},\ldots,X_{n+1}}=\\ Id_{X_{1}}\otimes\eta^{m}_{X_{2},X_{3},\ldots,X_{n+1}}+\\ \Sigma_{i}(-1)^{i}\eta^{m}_{X_{1},\ldots,\Gamma\otimes_{\mathscr{C}(k)}X_{i}\otimes X_{i+1},\ldots,X_{n+1}}+(-1)^{n+1}\eta^{m}_{X_{1},\ldots,X_{n}}\otimes Id_{X_{n+1}}\end{array}\] A low degree example would be \[\begin{array}{ccccc}&\vdots&&\vdots&\\ \cdots\to&Bimod^{2}_{dg}(\Gamma,\Gamma)^{2}&\xrightarrow{d^{2,3}_{h}}&Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{2}&\to\cdots\\ \cdots\to&Bimod^{2}(\Gamma,\Gamma)^{1}&\xrightarrow{d^{2,2}_{h}}&Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{1}&\to\cdots\\ &\vdots&&\vdots&\end{array}\] To make sense of the expression given by the differential, we proceed as in the abelian case, where the sacrifice that had to be made by working with non-strict monoidal categories was that we had to introduce the padding construction to form additive groups of morphisms between different parenthesized products of a collection of objects. In that situation one could instead pass to a strict category using MacLane's coherence result, so that these objects are equal and there is no need for padding. In our context however our only option is to pad our morphisms, so let us explain how we proceed. Recall that the associativity coherence for a pseudo dg-tensor structure is given by a morphism of dg-bimodules \(\alpha_{X,Y,Z}:\Gamma_{\Gamma,Z}\otimes\Gamma_{X,Y}\rightarrow\Gamma_{X,\Gamma}\otimes\Gamma_{Y,Z}\) such that it becomes invertible when passing to \(H^{0}(\mathscr{C}(k))\) for every triple of objects \(X,Y,Z\). Just as before, whenever we have a morphism between two parenthesizations we would like to pad it so that it becomes a morphism in \(Bimod^{*}(^{n}\Gamma,\Gamma^{n})(A)\). Let us take then \(n\in\mathbb{N}\) and a partition \(n=n_{1}+\cdots+n_{k}\). A parenthesization of length \(n\in\mathbb{N}\) of \(\Gamma\) is a tensor product of factors of the form \({}^{n_{i}}\Gamma\) and \(\Gamma^{n_{j}}\) along any of the two multiplications of \(\Gamma\), for \(0\leq i,j\leq k\). Whenever we have a morphism between two such parenthesizations, \[f\in Bimod^{*}(^{n_{1}}\Gamma\otimes\cdots\otimes\Gamma^{n_{k}},\Gamma^{n_{1}^{\prime}}\otimes\cdots\otimes\Gamma^{n_{l}^{\prime}})\] we would like to turn this into a morphism in \(Bimod^{*}(^{n}\Gamma,\Gamma^{n})\). The obstruction to doing this as we did before is that we only know that the coherent morphism \(\alpha\) is invertible when taking \(0\)th chain cohomology \(H^{0}\). It is not enough, at each step of the padding, to consider some lift of the inverse \(\alpha^{-1}\), as there are many such lifts, and since we are only dealing with a truncation in low degrees, the complexes in higher degrees can differ considerably from one another. So in reality what we must do is choose once and for all an inverse \(\alpha^{-1}\) witnessing the invertibility of \(\alpha\). Once the morphism is chosen we proceed as before, and then whenever we have for example a morphism \[f:\Gamma_{X_{1},\Gamma}\otimes\Gamma_{\Gamma,X_{4}}\otimes\Gamma_{X_{2},X_{3}}\rightarrow\Gamma_{\Gamma,\Gamma}\otimes\Gamma_{X_{1},X_{2}}\otimes\Gamma_{X_{3},X_{4}}\] we can compose by products of the identity between dg-bimodules \(\Gamma\rightarrow\Gamma\) and \(\alpha\) and \(\alpha^{-1}\). In this way we can obtain a morphism \[\underline{f}:\Gamma_{\Gamma,X_{4}}\otimes\Gamma_{\Gamma,X_{3}}\otimes\Gamma_{X_{1},X_{2}}\rightarrow\Gamma_{X_{1},\Gamma}\otimes\Gamma_{\Gamma,X_{4}}\otimes\Gamma_{X_{2},X_{3}}\] \[\rightarrow\Gamma_{\Gamma,\Gamma}\otimes\Gamma_{X_{1},X_{2}}\otimes\Gamma_{X_{3},X_{4}}\rightarrow\Gamma_{X_{1},\Gamma}\otimes\Gamma_{X_{2},\Gamma}\otimes\Gamma_{X_{3},X_{4}}\] Now that we have defined our double complex, we can define **Definition 3.11**.: _Let \(\mathscr{T}\) be a dg-category and \(\Gamma\) a perfect pseudo dg-tensor structure, the Davydov-Yetter complex is the total complex \(Tot(DY_{dg}^{*,*})(\mathscr{T})\) of the double complex \(DY_{dg}^{*,*}(\mathscr{T})\) with the usual differential \(d_{tot}:=d_{v}+(-1)^{|v||h|}d_{h}\). 
Where \(\mid v\mid\) and \(\mid h\mid\) denote the degrees of the differentials \(d_{v}\) and \(d_{h}\) respectively._ We put \({}^{0}\Gamma=\Gamma^{0}\) to be the bimodule \(Id\) corresponding to the identity pseudo functor, \({}^{-1}\Gamma=\Gamma^{-1}\) to be the unit object \(U\), and \({}^{n}\Gamma=\Gamma^{n}=0\) for any \(n\leq-2\). As before we calculate the cohomology of the dg-Davydov-Yetter complex **Definition 3.12**.: _Let \(\mathscr{T}\) be a dg-category and \(\Gamma\) a perfect pseudo dg-tensor structure and consider its Davydov-Yetter complex \(Tot(DY_{dg}^{*,*})(\mathscr{T})\). The total cohomology of this complex is the Davydov-Yetter cohomology of the perfect pseudo dg-tensor structure and we denote it by \(HDY^{*}_{dg}(\mathscr{T})\) or by \(HDY^{*}_{dg}(\mathscr{T},\Gamma)\) if there is ambiguity in which perfect pseudo dg-tensor structure is being considered._ In lower degrees we have components of the total complex given by \[\begin{split} Tot^{3}(DY^{*,*}_{dg})=DY^{3,0}_{dg}(\mathscr{T})\oplus DY^{2,1}_{dg}(\mathscr{T})\oplus DY^{1,2}_{dg}(\mathscr{T})\oplus DY^{0,3}_{dg}(\mathscr{T})=\\ Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{-1}\oplus Bimod^{2}(\Gamma,\Gamma)^{0}\oplus Bimod(\mathscr{T}_{pe},\mathscr{T}_{pe})^{1}\oplus\mathscr{T}_{pe}(U,U)^{2}\end{split}\] and \[\begin{split} Tot^{4}(DY^{*,*}_{dg})=DY^{4,0}_{dg}(\mathscr{T})\oplus DY^{3,1}_{dg}(\mathscr{T})\oplus DY^{2,2}_{dg}(\mathscr{T})\oplus DY^{1,3}_{dg}(\mathscr{T})\oplus DY^{0,4}_{dg}(\mathscr{T})=\\ Bimod^{4}(^{3}\Gamma,\Gamma^{3})^{-1}\oplus Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{0}\oplus Bimod^{2}(\Gamma,\Gamma)^{1}\\ \oplus Bimod^{1}(\mathscr{T}_{pe},\mathscr{T}_{pe})^{2}\oplus\mathscr{T}_{pe}(U,U)^{3}\end{split}\] Let us calculate the action of the total differential in these degrees. 
If we let \(\eta^{3}=(\eta^{3,0},\eta^{2,1},\eta^{1,2},\eta^{0,3})\) then \[\begin{split} d^{3}_{tot}(\eta^{3})=\\ (d_{h}(\eta^{3,0}),\;d_{h}(\eta^{2,1})+d_{v}(\eta^{3,0}),\;d_{h}(\eta^{1,2})+d_{v}(\eta^{2,1}),\\ d_{h}(\eta^{0,3})+d_{v}(\eta^{1,2}),\;d_{v}(\eta^{0,3}))=\\ (\eta^{3,0}_{X,Y,Z}\otimes Id_{W}-\eta^{3,0}_{\Gamma\otimes X\otimes Y,Z,W}+\eta^{3,0}_{X,\Gamma\otimes Y\otimes Z,W}-\eta^{3,0}_{X,Y,\Gamma\otimes Z\otimes W}+Id_{X}\otimes\eta^{3,0}_{Y,Z,W},\\ \eta^{2,1}_{X,Y}\otimes Id_{Z}-\eta^{2,1}_{\Gamma\otimes X\otimes Y,Z}+\eta^{2,1}_{X,\Gamma\otimes Y\otimes Z}+Id_{X}\otimes\eta^{2,1}_{Y,Z}+d_{v}(\eta^{3,0}),\\ \eta^{1,2}_{X}\otimes Id_{Y}-\eta^{1,2}_{\Gamma\otimes X\otimes Y}+Id_{X}\otimes\eta^{1,2}_{Y}+d_{v}(\eta^{2,1}),\\ d_{h}(\eta^{0,3})+d_{v}(\eta^{1,2}),\\ d_{v}(\eta^{0,3}))\end{split}\] And to calculate the kernel of \(d^{4}_{tot}\), we see that it corresponds to those \(\eta^{4}=(\eta^{4,0},\eta^{3,1},\eta^{2,2},\eta^{1,3},\eta^{0,4})\) such that \[\begin{split} d^{4}_{tot}(\eta^{4})=\\ (d_{h}(\eta^{4,0}),-d_{h}(\eta^{3,1})+d_{v}(\eta^{4,0}),d_{h}(\eta^{2,2})+d_{v}(\eta^{3,1}),d_{h}(\eta^{1,3})+d_{v}(\eta^{2,2}),\\ d_{h}(\eta^{0,4})+d_{v}(\eta^{1,3}),d_{v}(\eta^{0,4}))=\\ (Id_{X}\otimes\eta^{4,0}_{Y,Z,W,R}-\eta^{4,0}_{\Gamma\otimes X\otimes Y,Z,W,R}+\eta^{4,0}_{X,\Gamma\otimes Y\otimes Z,W,R}-\eta^{4,0}_{X,Y,\Gamma\otimes Z\otimes W,R}\\ +\eta^{4,0}_{X,Y,Z,\Gamma\otimes W\otimes R}-\eta^{4,0}_{X,Y,Z,W}\otimes Id_{R},\\ -Id_{X}\otimes\eta^{3,1}_{Y,Z,W}+\eta^{3,1}_{\Gamma\otimes X\otimes Y,Z,W}-\eta^{3,1}_{X,\Gamma\otimes Y\otimes Z,W}+\eta^{3,1}_{X,Y,\Gamma\otimes Z\otimes W}\\ -\eta^{3,1}_{X,Y,Z}\otimes Id_{W}+d_{v}(\eta^{4,0}),\\ Id_{X}\otimes\eta^{2,2}_{Y,Z}-\eta^{2,2}_{\Gamma\otimes X\otimes Y,Z}+\eta^{2,2}_{X,\Gamma\otimes Y\otimes Z}-\eta^{2,2}_{X,Y}\otimes Id_{Z}+d_{v}(\eta^{3,1}),\\ Id_{X}\otimes\eta^{1,3}_{Y}-\eta^{1,3}_{\Gamma\otimes X\otimes Y}+\eta^{1,3}_{X}\otimes Id_{Y}+d_{v}(\eta^{2,2}),\\ d_{h}(\eta^{0,4})+d_{v}(\eta^{1,3}),\;d_{v}(\eta^{0,4}))=0\end{split}\] We should now describe what we mean by a deformation of the structure \(\Gamma\). For this we need **Definition 3.13**.: _Let \(\mathscr{T}\) be a dg-category, \(\Gamma\) a perfect pseudo dg-tensor structure on \(\mathscr{T}\). An n-th order deformation of \(\Gamma\) consists of a perfect pseudo dg-tensor structure \(\{\!\{\Gamma\}\!\}\) on \(\mathscr{T}\otimes_{k}k[x]/x^{n+1}\) such that_ \[i^{*}\{\!\{\Gamma\}\!\}:=\{\!\{\Gamma\}\!\}\otimes_{k[x]/x^{n+1}}k\] _is a perfect pseudo dg-tensor structure equivalent to \(\Gamma\)._ **Definition 3.14**.: _We say two n-th order infinitesimal deformations of a perfect pseudo dg-tensor structure \(\Gamma\) are equivalent if there is a pseudo dg-tensor functor \(\Phi\) in \(\mathscr{T}\otimes_{k}k[x]/x^{n+1}\) such that its restriction \(\Phi\otimes_{k[x]/x^{n+1}}k\) is equivalent to the pseudo dg-tensor functor given by the identity dg-bimodule \(\mathscr{T}_{pe}\)._ Just as before, we will say that a deformation of a perfect pseudo dg-tensor structure \(\Gamma\) is a deformation of the associativity condition if the deformation \(\{\!\{\Gamma\}\!\}\) has structure morphisms for the unit and symmetry conditions equivalent to \(\ell\otimes k[x]/x^{n+1}\), \(r\otimes k[x]/x^{n+1}\), and \(c\otimes k[x]/x^{n+1}\), while we allow for the associativity coherence condition to possibly be different. The following is an analogue of Theorem 3.6 **Theorem 3.15**.: _Let \(\mathscr{T}\) be a dg-category and let \(\Gamma\) be a pseudo dg-tensor structure on \(\mathscr{T}\). 
Then to any element of \(HDY^{4}_{dg}(\mathscr{T})\) we can associate an equivalence class of infinitesimal deformations of order 1 of the associativity condition of \(\Gamma\)._ Proof.: Let us recall that \[DY^{4}_{dg}(\mathscr{T})=Bimod^{4}(^{3}\Gamma,\Gamma^{3})^{-1} \oplus Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{0}\] \[\oplus Bimod^{2}(\Gamma,\Gamma)^{1}\oplus Bimod^{1}(\mathscr{T} _{pe},\mathscr{T}_{pe})^{2}\oplus\mathscr{T}_{pe}(U,U)^{3}\] As we calculated, the kernel of \(d^{4}_{dg}\) consists of those \(\eta\in DY^{4}_{dg}(\mathscr{T})\) such that the equation 3.2 is equal to zero. We see that the first component \[Id_{X}\otimes\!\eta^{4,0}_{Y,Z,W,R}-\eta^{4,0}_{\Gamma\otimes X\otimes Y,Z,W,R} +\eta^{4,0}_{X,\Gamma\otimes Y\otimes Z,W,R}-\eta^{4,0}_{X,Y,\Gamma\otimes Z \otimes W,R}+\eta^{4,0}_{X,Y,Z,\Gamma\otimes W\otimes R}-\eta^{4,0}_{X,Y,Z,W} \otimes\!Id_{R}\] is a morphism of 4-fold bimodules, \(\eta^{4,0}\) satisfying an hexagon condition. The second component \[-Id_{X}\otimes\eta^{3,1}_{Y,Z,W}+\eta^{3,1}_{\Gamma\otimes X\otimes Y,Z,W}-\eta ^{3,1}_{X,\Gamma\otimes Y\otimes Z,W}+\eta^{3,1}_{X,Y,\Gamma\otimes Z\otimes W }-\eta^{3,1}_{X,Y,Z}\otimes Id_{W}+d_{v}(\eta^{4,0})\] on the other hand, is a 3-fold bimodule morphism, \(\eta^{3,1}\in Bimod^{3}(^{2}\Gamma,\Gamma^{2})^{0}\) satisfying the pentagon diagram condition up to the homotopy \(d_{v}(\eta^{4,0})\). This is precisely the condition we require as a coherence condition for the associator morphism of a pseudo dg-tensor structure. To be more precise, we will consider a perfect pseudo dg-tensor structure on \(\mathscr{T}\otimes k[x]/x^{2}\) given by the 2-fold dg-bimodule \(\Gamma^{k[e]}:=\Gamma\otimes_{k}k[x]/x^{2}\) where every one of the structural morphisms of Definition 2.33 are given by the extension of scalars \(\otimes_{k}k[x]/x^{2}\) except the associator. Our goal is to define a new associator \(\{\!\!\{\!\!\{\alpha\}\!\}\!\}\) which will restrict back to the associator \(\alpha\). Let us write then \(\{\!\!\{\!\!\{\alpha\}\!\}\!\}:=\alpha+\eta^{3,1}x\in Bimod({}^{2}\Gamma^{k[e]}, \Gamma^{k[3]\;(2)})^{0}\). And so, obtain a natural morphism \[\{\!\!\{\!\{\alpha\}\!\}\!\}_{X,Y,Z}:\Gamma^{k[e]}_{\Gamma^{k[e]},Z}\otimes \Gamma^{k[e]}_{X,Y}\to\Gamma^{k[e]}_{X,\Gamma^{k[e]}}\otimes\Gamma^{k[e]}_{Y,Z}.\] We need to check that this morphism satisfies the pentagon identity. \[-Id_{X}\otimes\{\!\!\{\!\!\{\alpha\}\!\}\!\}_{Y,Z,W}\circ\{\!\!\{\!\!\{\alpha\} \!\}\!\}_{X,\Gamma^{k[e]}_{Y,Z},W}\circ\{\!\!\{\!\!\{\alpha\}\!\}\!\}_{X,Y,Z} \otimes Id_{W}+\{\!\!\{\!\!\{\alpha\}\!\}\!\}_{X,Y,\Gamma^{k[e]}_{Z,W}}\circ \{\!\!\{\!\{\alpha\}\!\}\!\}_{\Gamma^{k[e]}_{X,Y},Z,W}=d_{v}(\eta^{4,0})\] Recall to add morphisms between different parenthesizations we need first to pad the morphism in the sense that we need to compose and pre-compose by the associativity morphism \(\alpha\otimes k[x]/x^{2}\) and a fixed choice of an inverse \(\alpha^{-1}\otimes k[x]/x^{2}\). We do this in such a way that addition of morphisms is always between the leftmost parenthesization and the rightmost one. 
We proceed as in the abelian case: since the associator \(\alpha\) of the pseudo dg-tensor structure \(\Gamma\) already satisfies the pentagon up to homotopy, and since we are working with coefficients in \(k[x]/x^{2}\), the only terms left to control in the pentagon for \(\{\!\{\alpha\}\!\}\) are those of first order in \(x\), namely the expression involving \(\alpha\) and the morphisms \(\eta^{3,1}\) coming from our padding of morphisms; we then obtain precisely the condition \(d_{h}^{3}(\eta^{3,1})=d_{v}(\eta^{4,0})\). Now we would like to see that if two such deformations \(\{\!\{\alpha\}\!\}\) and \(\{\!\{\alpha\}\!\}^{\prime}\) given as above are equivalent, then they differ by an equivalence of perfect pseudo dg-tensor structures induced by a pseudo dg-tensor functor. As in the abelian case the underlying functor we are looking for is the identity functor and so our dg-bimodule \(\Phi\) is nothing but the bimodule \(\mathscr{T}_{pe}\). This means that we are looking for a morphism \[\Gamma\otimes k[x]/x^{2}\simeq\Gamma\otimes\mathscr{T}_{pe}\otimes\mathscr{T}_{pe}\otimes k[x]/x^{2}\to\mathscr{T}_{pe}\otimes\Gamma\otimes k[x]/x^{2}\simeq\Gamma\otimes k[x]/x^{2}\] of the form \[Id+\beta x\] where \[\beta:\Gamma\to\Gamma\] is a morphism of dg-bimodules. We then take \(\beta\) to be any dg-bimodule morphism such that \(\beta^{0}=\eta^{2,1}\); writing out the associativity condition for the pseudo dg-tensor functor, we see that \(\{\!\{f\}\!\}:=Id+\beta x\) satisfies this condition if \(\eta^{2,1}\) is in the image of \(d_{tot}^{2}\), as the second component of \(d_{tot}^{2}\) gives us precisely the coherence diagram up to homotopy that \(\{\!\{f\}\!\}\) has to satisfy. Indeed, as the identity dg-bimodule \(\mathscr{T}_{pe}\) is a pseudo dg-tensor functor, the identity \(\mathscr{T}_{pe}\to\mathscr{T}_{pe}\) satisfies the associativity condition, and as in the abelian case we see that the only remaining morphisms are those composed with the associativity constraint of the pseudo dg-tensor structure; this in turn corresponds to a padding operation, and thus we obtain precisely the expression in the second component of the image of \(d_{tot}^{2}\). The converse of this theorem does not seem to hold in general and would likely require either a more general and coherent setting in which the deformation and the space of tensor structures take place, or stricter conditions on the pseudo dg-tensor structure itself. At this point we do not know what the higher coherence conditions appearing both in the kernel and the image of the differentials \(d_{tot}\) represent in the context of deformations of the lifts of tensor triangulated structures. In all likelihood an approach where the lift is meant to produce a tensor structure on the dg-enhancement itself is the correct setting in which one ought to take these deformations. Having said that, our motivation was kept in line with the tensor structure at the triangulated category level, and this is the reason for the brute truncation of these tensor structures. In the abelian case we saw a few more things, namely Ocneanu's rigidity theorem, or [1, Proposition 3.21] exhibiting Davydov-Yetter cohomology as a tangent space of the moduli functor of tensor structures on a given finite tensor category \(\mathscr{A}\). 
Additionally in the same paper Batanin and Davydov investigate a good deal of the deformation theory of tensor structures, such as a Lie algebra structure with many of the usual expected properties. We refer to their work for further details. In our case we expect that a dg version of Ocneanu's rigidity theorem for our perfect pseudo dg-tensor structures should hold in the same spirit, but a concrete formulation of such a theorem must then deal again with the precise nature of the truncation we are performing in our structure. Let us mention the work of Panero-Shoikhet in [10], in which they do a thorough analysis of the deformation theory of monoidal dg-categories through a Davydov-Yetter cohomology construction. In contrast with our approach, theirs deals entirely with the strict theory of dg-categories, that is, without considering homotopy equivalences. The concept of monoidal dg-category seems to be relatively elusive in the literature, with different authors meaning different things, often but not always with an implicit understanding that these definitions should be equivalent in some way. Our pretriangulated dg-categories equipped with a given pseudo dg-tensor structure can be seen, as hinted already a number of times throughout this work, as a truncated version of a true homotopy monoidal dg-category in the Morita model category structure. The following formal statement reflecting this was suggested to the author by Bertrand Toen. **Theorem 3.16**.: _Let \(Ho_{2}(dg-cat_{k})\) be the 2-category given by the homotopy category of the 2-truncation \(\tau_{\leq 2}N(dg-cat_{k})[W^{-1}]\) of the \((\infty,1)\)-category \(N(dg-cat_{k})[W^{-1}]\) of dg-categories. Let \(\mathscr{M}\) be a dg-category equivalent to a derived noncommutative scheme. Then any perfect pseudo dg-tensor structure \(\Gamma\) on \(\mathscr{M}\) induces an associative monoid in \(Ho_{2}(dg-cat_{k})\), and any associative monoid structure on \(\mathscr{M}\) in \(Ho_{2}(dg-cat_{k})\) induces a possibly non-unique perfect pseudo dg-tensor structure on \(\mathscr{M}\)._ Here \(W\) is the class of Morita equivalences (see [20] for the definition of the Morita model category structure). A moduli space interpretation of this deformation theory would be of most interest to us and we hope to come back to the question in future work. On the other hand, we can give a dg version of Theorem 3.8 which can be thought of as a local moduli space result. **Theorem 3.17**.: _Let \(\mathscr{T}\) be a dg-category, \(A\) a k-algebra, \(U\) a perfect module and \(\Gamma\) a 2-fold dg-bimodule over \(\mathscr{T}\otimes^{\mathbb{L}}A\) with finite global dimension. Then the set \(TTS_{A}(\Gamma)\) of perfect pseudo dg-tensor structures over \(\mathscr{T}\otimes^{\mathbb{L}}A\) which have \(\Gamma\) as a 2-fold dg-bimodule and \(U\) as a unit has the structure of a quotient of an affine scheme by an affine group scheme._ Proof.: As both \(\Gamma\) and \(U\) are fixed, what we are looking for is simply families of morphisms \(\alpha:\Gamma\otimes\Gamma\to\Gamma\otimes\Gamma\), \(u:\Gamma\otimes U\to\mathscr{T}_{pe}\), and \(c:\Gamma\to\Gamma\). We use the assumption that our dg-category \(\mathscr{T}\) is proper and so locally perfect, which means that every complex of morphisms is bounded and of finite dimension as a \(k\)-vector space. 
The polynomial equations that determine the morphisms \(\alpha\), \(c\) and \(u\), together with the differentials in every chain complex, then determine an affine scheme, and so does the locus obtained by imposing the coherence conditions. For two such structures \(\Gamma_{1}\), \(\Gamma_{2}\) to be equivalent, we need an invertible pseudo dg-tensor functor \(\Phi\) from, say, \(\boxtimes_{1}\) to \(\boxtimes_{2}\) (using Lemma 2.39). This means that we have \(\Phi^{\prime}\) from \(\boxtimes_{2}\) to \(\boxtimes_{1}\) such that \(\Phi\otimes\Phi^{\prime}\) is isomorphic to the dg-bimodule \(\mathscr{T}_{pe}\). As the dg-bimodule \(\Gamma\) is the same in both structures \(\Gamma_{1}\) and \(\Gamma_{2}\), any such pseudo dg-tensor functor is determined by its structural morphisms \[\Gamma\to\Gamma\] and \[\Gamma\otimes U\to\Gamma\] Just as before the complexes are bounded and consist of finite dimensional \(k\)-vector spaces, and so they determine an affine scheme. Using composition as the group operation, taking the identity pseudo dg-tensor functor \(\mathscr{T}_{pe}\) as the identity element, and noting that every such invertible pseudo dg-tensor functor has an inverse, we get that the affine scheme of these equivalences forms an algebraic group. We have then a quotient of the space of pseudo dg-tensor structures with 2-fold dg-bimodule \(\Gamma\) and unit \(U\) by the affine group scheme of equivalences between them.
2307.00727
Binary Interaction Can Yield a Diversity of Circumstellar Media around Type II Supernova Progenitors
Recent observations of supernovae (SNe) have indicated that a fraction of massive stars possess dense circumstellar medium (CSM) at the moment of their core collapses. They suggest the presence of additional activities of the SN progenitor driving the enhancement of the mass-loss rate, and some physical processes attributing to single star's activities have been considered. In this study, we carry out binary evolutionary simulations of massive stars with the aim of investigating the CSM structure. We show that the mass-transfer rate in a binary can increase at the beginning of the Roche lobe overflow, and this enhancement would be associated with the structure of the CSM before the explosion. We also illustrate that depending on the orbital period of the binary, the density structure of the CSM can have a diverse distribution including shell-like and cliff-like structures. These characteristic structures appear within the lengthscale of $\sim 10^{17}\,{\rm cm}$ and could be traced by long-term observations of SNe, if the slow velocity of the CSM is assumed ($\sim 10\,{\rm km}\,{\rm s}^{-1}$). Our results highlight the importance of binary interaction in the aspect of reproducing the diversity of the CSM configuration.
Tomoki Matsuoka, Ryo Sawada
2023-07-03T03:13:09Z
http://arxiv.org/abs/2307.00727v2
Binary Interaction Can Yield a Diversity of Circumstellar Media around Type II Supernova Progenitors ###### Abstract Recent observations of supernovae (SNe) have indicated that a fraction of massive stars possess dense circumstellar medium (CSM) at the moment of their core collapses. They suggest the presence of additional activities of the SN progenitor driving the enhancement of the mass-loss rate, and some physical processes attributing to single star's activities have been considered. In this study, we carry out binary evolutionary simulations of massive stars with the aim of investigating the CSM structure. We show that the mass-transfer rate in a binary can increase at the beginning of the Roche lobe overflow, and this enhancement would be associated with the structure of the CSM before the explosion. We also illustrate that depending on the orbital period of the binary, the density structure of the CSM can have a diverse distribution including shell-like and cliff-like structures. These characteristic structures appear within the lengthscale of \(\sim 10^{17}\,\mathrm{cm}\) and could be traced by long-term observations of SNe, if the slow velocity of the CSM is assumed (\(\sim 10\,\mathrm{km}\,\mathrm{s}^{-1}\)). Our results highlight the importance of binary interaction in the aspect of reproducing the diversity of the CSM configuration. + Footnote †: journal: ApJ ## 1 Introduction Mass loss from massive stars is one of the important elements in their stellar evolution so that it has a significant influence on the evolutionary characteristics and the fates of themselves (e.g., Yoon, 2015). Understanding the physical properties of mass loss from massive stars would be a clue toward completing the construction of stellar evolution theory. Particularly, mass-loss activities prior to core-collapse supernovae (SNe) would be imprinted in the physical property of the circumstellar medium (CSM). This can contribute to the radiative source in SNe (see e.g., Kiewe et al., 2012; Smith, 2017; Chevalier and Fransson, 2017), and thus understanding the configuration of the CSM can also promote the precise modeling of SNe. Thanks to the recent development of transient surveys and rapid follow-up observation systems, it has been recognized that there is a diversity of the CSM configuration inferred for SN progenitors. For instance, it is implicated through optical flash spectroscopy experiments that a Type II SN progenitor is encompassed by a dense CSM confined within small lengthscale (\(\sim 10^{15}\,\mathrm{cm}\). See e.g., SN 2013fs, SN 2023ixf, Yaron et al., 2017; Jacobson-Galan et al., 2023). This is also supported by theoretical modelings of the early-phase optical light curve (Forster et al., 2018; Morozova et al., 2017; Moriya et al., 2017), as well as the detection of precursor activities before the SN (e.g., SN 2020tlf, Jacobson-Galan et al., 2022). Another example is that a few SNe are considered to undergo CSM interaction in the late phase (\(\sim\) several years after the explosion. See e.g., Kilpatrick et al., 2021, 2022). As an extreme example of the late-phase CSM interaction, it is argued that there are several objects exhibiting the transformation of the SN type from a stripped-envelope SN to a Type IIn SN (Margutti et al., 2017; Tinyanont et al., 2019; Tartaglia et al., 2021; Chandra et al., 2020), or showing the rebrightening of radio emission (Mauerhan et al., 2018; Balasubramanian et al., 2021; Maeda et al., 2023). 
These objects are speculated to have shell-structured CSM detached from the progenitor, highlighting the possible mass ejection of the pro genitor in the past (see the discussion in Maeda et al., 2023). The CSM structure deviating from the steady wind configuration is thought to attribute to an enhanced mass-loss activity before the explosion. Since canonical stellar evolution theories do not expect a significant increase in the mass-loss rate before the SN explosion (Langer, 2012), several mechanisms attempting the explanation of the increase in the mass-loss rate have been proposed, including the excitation of gravity wave in the convective core (e.g., Quataert and Shiode, 2012; Shiode and Quataert, 2014; Smith and Arnett, 2014; Fuller, 2017; Morozova et al., 2020; Wu and Fuller, 2021). It should be noted that these attempts have focused on the properties of the stellar evolutionary behavior as a single star. In this study, we simulate the stellar evolution in a binary system to calculate the pre-SN mass-loss rate of massive stars and propose the scenario that binary interaction can enhance the mass-loss rate from the binary system before the explosion. It is widely known that most massive stars are involved in a binary system that is about to undergo binary interaction (Sana et al., 2012). As the star evolves with time it experiences the expansion or contraction of the stellar radius at the moment of switching of the nuclear burning phases. If the star is involved in a tight binary so that the mass transfer between the stars can initiate, the mass exchange or the escape from the binary can affect the subsequent evolution of the primary star. In addition, the gas expelled from the binary system can distribute around the SN progenitor as a CSM. Simulations of stellar evolution in binary systems have been examined in previous studies (e.g., Ouchi and Maeda, 2017; Laplace et al., 2020, 2021), but none of them have discussed CSM formation based on the mass-loss histories obtained therein. Through the demonstrations in this paper, we illustrate the variety of the CSM molded by binary interaction and suggest the necessity to take into consideration the contribution of binary interaction to the formation of the CSM. This paper is organized as follows. In Section 2 the setup and procedure of our simulations are described. In Section 3 we show that for binary systems with specific parameter sets, a significant increase in the mass-loss rate before the SN explosion can be expected. We also demonstrate that the synthesized mass-loss history model can reproduce the inhomogeneous CSM structure deviating from the steady wind. In Section 4 we discuss the uncertainty involved in our results, especially focusing on the dependence of the primary star's initial mass. Finally, we summarize the content of this paper in Section 5. ## 2 Models and Method Our study can be divided into two main parts: the estimation of the mass loss rate by binary evolution calculations using MESA, and the reconstruction of the CSM from this mass-loss history by hydrodynamic calculations using PLUTO. We show the setup of MESA for the binary evolution calculation in Section 2.1, and the results of the mass-loss rate estimation in Section 3.1. Similarly, the setup of the hydrodynamic calculations in PLUTO is shown in Section 2.2, and the results of the reconstruction of the CSM structure are in Section 3.2. 
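Before detailing the setup, it may help to fix orders of magnitude for the binary configurations considered in Section 2.1. The short sketch below evaluates the Eggleton (1983) Roche-lobe prescription adopted in Section 2.1.6 (equations 1 and 2 below) for the adopted initial parameters (\(M_{1}=12\,M_{\odot}\), \(M_{2}=10.8\,M_{\odot}\), \(P_{\rm orb}=1100\)-\(1700\) days). It is an illustrative back-of-the-envelope estimate written in Python, not part of the MESA/PLUTO setup, and it ignores the later evolution of the masses and the orbit.

```python
import numpy as np

G, Msun, Rsun, day = 6.674e-8, 1.989e33, 6.957e10, 86400.0   # cgs constants

def separation(M1, M2, P_days):
    """Orbital separation from Kepler's third law (equation 2); masses in Msun, period in days."""
    P = P_days * day
    return (G * (M1 + M2) * Msun * P**2 / (4.0 * np.pi**2))**(1.0 / 3.0)

def roche_lobe_radius(M1, M2, P_days):
    """Roche-lobe radius from the Eggleton (1983) fit (equation 1), with q = M2/M1 as defined in the text."""
    q = M2 / M1
    a = separation(M1, M2, P_days)
    return 0.49 * q**(2.0 / 3.0) / (0.6 * q**(2.0 / 3.0) + np.log(1.0 + q**(1.0 / 3.0))) * a

for P_orb in (1100, 1300, 1500, 1700):
    a = separation(12.0, 10.8, P_orb)
    R_RL = roche_lobe_radius(12.0, 10.8, P_orb)
    print(f"P_orb = {P_orb} d :  a ~ {a / Rsun:7.0f} Rsun,  R_RL ~ {R_RL / Rsun:6.0f} Rsun")
```

The resulting Roche-lobe radii are several hundred solar radii, which is why, in this setup, Roche lobe overflow only begins once the primary expands after the exhaustion of a nuclear fuel in the core.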
### Setup of Stellar Evolution code MESA We use stellar evolution code MESA in revision 15140 (Paxton et al., 2011, 2013, 2015, 2018, 2019) to solve stellar evolutions of both two stars in the binary from the zero-age main sequence (ZAMS). The computational setup and input parameters in MESA are mainly based on Ouchi and Maeda (2017), with some references to Yoon et al. (2010, 2017) and Laplace et al. (2020, 2021). Hereafter, we describe below the notable parts of this study. We start the stellar evolutionary simulations with a non-rotating star for all the models. The initial ZAMS mass of the primary star is set to \(M_{1}=12M_{\odot}\), which is a typical mass of the progenitor of a Type II supernova (e.g., Smartt, 2009). The secondary star mass is fixed to \(M_{2}=10.8M_{\odot}\) and the corresponding initial mass ratio is then \(q=M_{2}/M_{1}=0.9\). We treat this mass ratio because, as described in Ouchi and Maeda (2017), small mass ratios (\(q<0.5\)) cause unstable mass transfer, which makes computations difficult1. The initial orbital period is parametrized among \(P_{\rm orb}=1100,1300,1500,1700\) days. We adopt these values as the period for reproducing the progenitor of a Type II supernova affected by the binary interaction (Ouchi and Maeda, 2017). We stop our simulation at the moment of the central carbon depletion \(t_{\rm end}=t_{\rm C,dep}\); specifically, our simulations are terminated at the moment when the central carbon mass fraction of the primary star falls below \(10^{-6}\) (see also Section 2.1.3). This prescription is enough for our purpose because it takes only \(\sim 10\) years from the moment of the central carbon depletion to the core collapse \(t_{\rm cc}\) (see Appendix A and Figure A1 for the detail), which is shorter than stellar behaviors under consideration. Footnote 1: It should be noted that the mass ratio of Galactic O stars follows the uniform distribution, including binary systems with \(q<0.5\)(e.g., Sana et al., 2012). All the models were assumed at solar metallicity (\(Z=0.02\), where \(Z\) is the mass fraction of elements heavier than helium; Asplund et al., 2009). We employed the opacity tables from OPAL (Iglesias & Rogers, 1996). #### 2.1.2 Atmosphere The radius of a star is defined as the location satisfying \(\tau=2/3\). In MESA, the hydrostatic equilibrium equation assuming the plane-parallel limit is integrated with the Eddington grey \(T-\tau\) relation (Eddington, 1926). This allows us to obtain the surface boundary condition. #### 2.1.3 Nuclear reaction network We employ the nuclear reaction network provided by MESA under the name co_burn.net, which includes \({}^{1}\)H, \({}^{3}\)He, \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O, \({}^{20}\)Ne, \({}^{24}\)Mg, and \({}^{28}\)Si. This is applicable for stellar evolutions up to the oxygen-burning phase (Timmes, 1999). We adopt the nuclear reaction rates in the default version of the MESA, taken from NACRE (Angulo et al., 1999) and JINA REACLIB (Cyburt et al., 2010). We note that after the central carbon depletion, the dynamical timescale of the star can be shorter than the lifetime left, and even the behavior of the primary star as a single star remains a matter of debate (e.g., Woosley et al., 2002; Arnett & Meakin, 2011). For this reason, we use a simple and computationally inexpensive network sufficient to calculate stellar evolution until the carbon-burning phase. 
#### 2.1.4 Mixing For the treatment of convective mixing, we used the mixing length theory (MLT) approximation (Henyey et al., 1965) with a mixing-length parameter of \(\alpha_{\rm MLT}=2.0\). The Ledoux criterion for convection is used, and semi-convection following Langer et al. (1985) is employed with an efficiency parameter \(\alpha_{\rm sc}=1.0\)(Yoon et al., 2010). We apply the overshooting for the convective core and shell experiencing hydrogen burning, and also for the convective core and shell where no significant burning takes place. We take an overshooting parameter as \(0.016H_{p}\), where \(H_{p}\) is the pressure scale height evaluated at the radius near the boundary of the convective core. #### 2.1.5 Wind For the stellar wind, we follow the 'Dutch' wind scheme implemented in MESA with a scaling factor fixed to 1.0. The 'Dutch' wind scheme in MESA combines the results from several papers. For cool stars with effective temperatures \(T_{\rm eff}<10^{4}\) K, we apply the wind scheme in de Jager et al. (1988). For stars with effective temperatures \(T_{\rm eff}>10^{4}\) K, we use that of Vink et al. (2001) for hydrogen-rich stars (with the surface mass fraction of hydrogen \(Y_{\rm H,surf}>0.4\)), while for hydrogen-poor stars (\(Y_{\rm H,surf}<0.4\)) we then use the recipe of Nugis & Lamers (2000). #### 2.1.6 Binary System In binary systems, we consider only non-conservative mass transfer. Our calculations include the loss of angular momentum due to the binary motion, such as mass loss. This causes the period of the binary system to evolve in time from the initial period, but these detailed dependencies are beyond the scope of this paper. The Roche lobe radius is calculated following the Eggleton (1983) method as follows \[R_{\rm RL}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\ln{(1+q^{1/3})}}\;a\, \tag{1}\] where \(q\) is the mass ratio and \(a\) is the binary separation expressed as \[a=\left(\frac{G(M_{1}+M_{2})P^{2}}{4\pi^{2}}\right)^{1/3}. \tag{2}\] When one of the stars in the binary system initiates Roche lobe overflow, we implicitly compute the mass-transfer rate using the prescription described in Kolb & Ritter (1990). Here, we introduce the accretion efficiency parameter \(\beta\)(Tauris & van den Heuvel, 2006), which prescribes the fraction of the gas accreting onto the secondary star relative to the gas lost from the primary star due to the binary stripping. In other words, we have the following relationship between the total mass-loss rate of the primary star \(\dot{M}_{1}\), the mass-transfer rate between the binary system \(\dot{M}_{\rm tr}\), and the mass outflow rate into the CSM \(\dot{M}_{\rm CSM}\); \[\dot{M}_{1} =\dot{M}_{\rm wind,1}+\dot{M}_{\rm tr}\, \tag{3}\] \[\dot{M}_{\rm CSM} =\dot{M}_{\rm wind,1}+(1-\beta)\dot{M}_{\rm tr}\, \tag{4}\] where \(\dot{M}_{\rm wind,1}\) is the mass-loss rate due to the stellar wind of the primary star. We neglect the contribution of the stellar wind from the secondary star because it remains in the main sequence during the simulations and the mass-loss rate is not high. ### Hydrodynamics of the CSM formation We adopt the code PLUTO(Mignone et al., 2007, 2012) to solve the equations of hydrodynamics in one-dimensional spherical coordinates. We prepare the simulation domain from the inner boundary radius \(r_{\rm in}=V_{w}(t_{\rm cc}-t_{\rm end})\) to \(10\) pc and divide the domain into 256 meshes in the logarithmic scale where \(V_{w}\) is the value of the velocity of the gas flowing from the binary system. 
Here we assume \(t_{\rm cc}-t_{\rm end}=10\,\)years (see also Section 2.1). As an initial profile we consider the thermodynamic quantities \(\rho=1.6\times 10^{-24}\,\)g\(\,\)cm\({}^{-3},T=10^{4}\,\)K to mimic warm interstellar medium (Draine, 2010). The outflow condition is applied for the outer boundary in the simulation domain. By injecting the gas from the inner boundary following the model for the mass-loss history obtained in Section 2.1, we can construct the hydrodynamical structure of the CSM. We need to assume the value of \(V_{w}\); there is uncertainty as to what the realistic values of the velocity of the gas escaping from the binary system should be. \(V_{w}\sim\mathcal{O}(10)\,\)km\(\,\)s\({}^{-1}\) would be expected if the escape velocity of the SN progenitor gives a main contribution, while \(V_{w}\sim 1000\,\)km\(\,\)s\({}^{-1}\) might be possible in a case where the gas is coming from the compact secondary star. We examine the value of \(V_{w}=10,100\), and \(1000\,\)km\(\,\)s\({}^{-1}\) and apply these parameters for the simulation of the CSM formation in the binary models in Section 2.1. The CSM density at the inner boundary radius \(r_{\rm in}\) is given as \[\rho_{\rm inj}=\frac{\dot{M}_{1}}{4\pi r_{\rm in}^{2}V_{w}}. \tag{5}\] The structure within the inner boundary would be determined by the progenitor activity after \(t\sim t_{\rm cc}\), which we do not investigate in detail. Our numerical simulations based on the abovementioned setup enable us to examine the large-scale structure of the CSM, and the similar examinations have been done in some previous studies, especially in the context of the numerical simulations of supernova remnants (Tenorio-Tagle et al., 1990, 1991; Dwarkadas, 2005, 2007; Yasuda et al., 2021, 2022; Matsuoka et al., 2022). However, the analytic treatment on the time-dependent wind dynamics presented in Piro and Lu (2020) is also applicable to the formation of the CSM around the SN progenitor if we limit the radius scale within less than \(\sim\,\) pc. We refer readers to Appendix B for the comparison between numerical and analytical treatments. ## 3 Result In this section, we describe the results of our stellar evolutionary simulation and the expected CSM configuration. For convenience, we define the look-back time \(t_{\rm lb}=t_{\rm cc}-t\) as an indicator of the evolutionary phase of the star. ### Binary evolution Figure 1 shows the time evolution of the mass loss rate (\(\dot{M}_{1}\)) and Roche lobe radius (\(R_{\rm Rl}\)) of the primary star in the binary models with the orbital period \(P_{\rm orb}=1100,1300,1500,1700\,\)days. The time evolution of the mass-loss rate can be interpreted in the basic framework of the binary evolution (e.g., Ouchi and Maeda, 2017). We can see that the models with \(P_{\rm orb}=1100,1300,1500\,\)days experience a drastic increase in the mass-loss rate. The timings of the increase in \(\dot{M}_{1}\) coincide with the moment when the primary star fills the Roche lobe radius. Our simulations suggest that even without introducing the single star's activity arising eruptions, an SN progenitor involved in a binary can boost the mass-loss rate from the system at the moment of the expansion of the stellar radius. Firstly, we start the detailed discussion in the model with \(P_{\rm orb}=1300\,\)days as an example. The left panels in Figure 1 show the time evolution of characteristic mass-loss rates and the radii in the primary star. 
They show that the primary star in this binary system experiences mass loss by the stellar wind in the hydrogen-burning phase (\(t_{\rm lb}<10^{6}\,\)years). Then, due to the expansion of the stellar radius after the end of the core hydrogen burning, the stellar wind mass-loss rate increases by about 30 times around \(t_{\rm lb}\sim 10^{6}\,\)years, while the Roche lobe overflow does not occur at that time in this model. At \(t_{\rm lb}\sim 2\times 10^{4}\,\)years the helium burning in the stellar core ends, and the primary star expands again (see the bottom right panel in Figure 1). At this timing, the stellar radius reaches the Roche lobe radius and the binary mass transfer begins. The mass-transfer rate then instantaneously increases up to \(10^{-2}\,M_{\odot}\,\)yr\({}^{-1}\), and the mass of the primary star reduces down to \(M_{1}\sim 6\,M_{\odot}\). A second bump in the mass-transfer history can also be seen at \(t_{\rm lb}\sim 5000\,\)years. This could be associated with the single stellar activity of the primary star, since the evolution of the single star with the same initial mass (black dashed line in the left bottom panel of Figure 1) also shows a slight expansion of the stellar radius at \(t_{\rm lb}\sim 5000\,\)years (see also Section 4). After that, the Roche lobe overflow continues until the core collapse, while the mass-transfer rate is regulated down to \(\sim 10^{-5}\,M_{\odot}\)yr\({}^{-1}\). Here we note the comparison between the evolutions of a single star and the star in a binary. The radius of the primary star in the binary follows the same path as that evolving as a single star until it first fills the Roche lobe radius. This implies that the timing when a star first fills the Roche lobe radius can be roughly understood through the comparison between the Roche lobe radius and the radial evolution in the single stellar evolution.

Finally, Figure 1 (right) shows the dependence of the mass-loss properties on the orbital period \(P_{\rm orb}\). In all binary models, the primary star will eventually become a Type II SN progenitor because it retains the hydrogen envelope until the core collapse (see Table 1). We can see the trend that the longer the initial orbital period \(P_{\rm orb}\), the further the timing of the Roche lobe overflow is delayed. When the initial orbital period is too long (\(P_{\rm orb}>1700\,\)days), the mass-loss rate is roughly the same as that in single stellar evolution, since the binary does not experience Roche lobe overflow (see the dashed black line and solid cyan line in the right panel of Figure 1). It is possible to compare the binary models with \(P_{\rm orb}=1100,\,1300,\,\)and \(1500\,\)days to the single stellar evolution. All binary models follow the same evolutionary path as the single stellar model until \(t_{\rm lb}\sim 10^{6}\,\)years. The model of \(P_{\rm orb}=1500\,\)days follows an almost similar evolution to the model of \(P_{\rm orb}=1300\) days discussed above, except that the Roche lobe overflow starts slightly later due to the larger Roche lobe radius. The model with \(P_{\rm orb}=1100\) days undergoes a Roche lobe overflow after the end of the core hydrogen burning at \(t_{\rm lb}\sim 10^{6}\) years, which leads to the release of about \(\sim 7\,M_{\odot}\) of the hydrogen outer layer. It then experiences another Roche lobe overflow at \(t_{\rm lb}\sim\) a few \(\times 10^{4}\) years, following a path different from that in single stellar evolution. However, we should emphasize that it is no coincidence that the \(P_{\rm orb}=1100\) days model exhibits the mass-loss enhancement at \(t_{\rm lb}\sim 10^{4}\) years similarly to the other binary models (\(P_{\rm orb}=1300,1500\) days). Normally, helium burning in the stellar core ends at \(\sim 10^{4}\) years before the core collapses (e.g., Woosley et al., 2002). At this time, the primary star expands again, and thus we can see an enhancement of mass loss. We note the model of \(P_{\rm orb}=1700\) days follows the same mass-loss history as the single stellar model, since the Roche lobe radius remains larger than the primary star's radius throughout its lifetime. Therefore, while we conducted the CSM reconstruction even for the binary model with \(P_{\rm orb}=1700\) days, we do not discuss the result of that model in the next section.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline initial period & final mass & H-envelope mass & CSM mass for \(V_{w}=10{\rm km\,s^{-1}}\) & CSM mass for \(V_{w}=1000{\rm km\,s^{-1}}\) \\ \(P_{\rm orb}\) [days] & \(M_{1,f}\) [\(M_{\odot}\)] & \(M_{\rm env,f}\) [\(M_{\odot}\)] & \(M_{\rm CSM}\) [\(M_{\odot}\)] & \(M_{\rm CSM}\) [\(M_{\odot}\)] \\ \hline 1100 & 4.8 & 1.3 & 1.4 & 0.3 \\ 1300 & 5.9 & 2.4 & 5.0 & 0.6 \\ 1500 & 7.2 & 3.7 & 3.8 & 0.9 \\ 1700 & 10.8 & 7.3 & 0.05 & 0.02 \\ \hline single star & 10.8 & 7.3 & 0.05 & 0.02 \\ \hline \hline \end{tabular} Note. – The ‘final mass’ and ‘H-envelope mass’ are the primary star’s total mass and hydrogen-envelope mass at the end of the calculation (i.e., at the time of the central carbon depletion of the primary star). The CSM mass is defined as the enclosed mass within the radius at which the CSM distribution reconstructed through the analytical treatment becomes compatible with that of the numerical simulation. See Appendix B. \end{table} Table 1: Summary of physical quantities in this system, especially for the terminal state of the primary star.

Figure 1: Left: Time evolution of the mass-loss history (top) and the characteristic radii (bottom) as a function of the look-back time for the binary model with \(P_{\rm orb}=1300\,\)days. In the top panel the mass-loss rate is decomposed into the components of the stellar wind and the mass transfer to the secondary star. In the bottom panel, the time evolutions of the Roche lobe radius and the stellar radius of the primary star are illustrated, in addition to the \(12\,M_{\odot}\) single star’s radius. Right: The total mass-loss rate histories and the Roche lobe radii for the binary models with \(P_{\rm orb}=1100,1300,1500,1700\,\)days are shown.

### Reconstruction of CSM

Figure 2 shows the density structures of the CSM for the binary models assuming \(V_{w}=10\) km s\({}^{-1}\). We can see that a shell-like distribution stands out at \(\sim\mathcal{O}(10^{17})\) cm in the models with \(P_{\rm orb}=1300,1500\) days. The corresponding look-back time is around \(t_{\rm lb}\sim 10^{4}\) years, and it is easily found that at that time the mass-loss rate is enhanced by orders of magnitude compared to that expected for the steady wind of a red supergiant (RSG). We suggest that binary interaction can give rise to the formation of the shell-like CSM structure. The dependence of the CSM structure on the wind velocity is also worth investigating, which is shown in Figure 3. We can see that higher wind velocity leads to a widely distributed and thin-density structure of the CSM.
As \(V_{w}\) becomes faster, the wind can reach lengthscales farther from the SN progenitor, while its density becomes smaller (see equation 5). We note that the shell-like component in the model with slower wind velocity lies at a smaller lengthscale.

Figure 2: Density structures of the CSM in the binary models with \(P_{\rm orb}=1100,1300,1500\) days reconstructed on the assumption of the wind velocity \(V_{w}=10\) km s\({}^{-1}\). The dashed line stands for the typical steady wind distribution expected for a red supergiant (\(\dot{M}=10^{-6}\,M_{\odot}\)yr\({}^{-1}\), \(V_{w}=10\) km s\({}^{-1}\)).

Figure 3: Density structures of the CSM in the binary models of \(P_{\rm orb}=1500\) days with the velocity varied among \(10,100\), and \(1000\) km s\({}^{-1}\). The dashed line within \(r<3\times 10^{15}\) cm in the model of \(V_{w}=1000\) km s\({}^{-1}\) denotes the component of the CSM with its corresponding look-back timescale less than \(t_{\rm cc}\), which we do not explore. The black dashed line shows a steady wind structure expected for a red supergiant, the same as the one in Figure 2.

Hereafter we discuss the possibility that our results can give explanations for some observational implications. If the wind velocity is as slow as of order \(\sim 10\) km s\({}^{-1}\), then the location of the mass shell observed in the models with \(P_{\rm orb}=1300,1500\) days falls at \(\sim 10^{17}\) cm. This could be an origin for the observational signature of late-time SN-CSM interaction seen in several Type II SNe, because the SN shock begins interacting with the shell at the radius of \(\sim 10^{17}\) cm a few years after the explosion (Kilpatrick et al., 2022; Maeda et al., 2023). There are also suggestions that some stripped-envelope SNe produce a sign of interaction with a massive shell-like CSM in the late phase of the SNe (Margutti et al., 2017; Mauerhan et al., 2018; Balasubramanian et al., 2021). In order to reproduce an SN progenitor with the hydrogen envelope stripped away, however, we need to consider a tighter binary than our models. Such a situation makes the binary evolutionary simulation difficult, and we postpone the treatment of the stripped-envelope SNe. As for the model with \(P_{\rm orb}=1100\) days, not shell-like but cliff-like structures appear in the CSM at the radii of \(2\times 10^{17}\) cm and \(10^{18}\) cm. This may also be intriguing, since a cliff-like structure leads to the attenuation of the CSM interaction signature in the late phase of SNe. Indeed, SN 1993J exhibits a steep dimming in the radio emission several years after the explosion (Weiler et al., 2007). A possible explanation is that the progenitor of SN 1993J had a cliff-like structure at the lengthscale of \(\sim 10^{17}\) cm which had been molded by binary interaction activity. In addition, this binary interaction scenario is consistent with the fact that SN 1993J is a Type IIb SN originating from a massive star that suffered stripping of its envelope by the companion (e.g., Maund et al., 2004). Yet, we note that these explanations, including that for the shell-like structure, require the constraint that the wind velocity should be as slow as \(\sim\mathcal{O}(10)\) km s\({}^{-1}\).

## 4 Discussion

In this study, we fixed the value of the initial primary mass to clarify the physics of the CSM formation.
Our results show that the CSM properties depend on the evolution of the primary star's radius \(R_{1}\), as well as on several physical parameters such as \(P_{\rm orb}\) and \(V_{w}\). Obviously, the radial evolution of the primary star depends on the choice of the initial primary mass, and the variation of the primary mass expands the diversity of stellar evolution itself. Here, we present a series of models in which the initial primary mass is parametrized, and discuss qualitatively the effect of the variation of the primary mass on the behavior of the mass-loss history and the resultant CSM structure. Figure 4 shows the time evolution of the mass-loss rate (\(\dot{M}_{1}\)) adopted for the primary star masses \(M_{1}=14.4,\,15,\,16.2\,M_{\odot}\). Other physical/binary parameters are fixed; the orbital period is \(P_{\rm orb}=1900\) days and the secondary mass is \(M_{2}=13.5\,M_{\odot}\). We have confirmed that in all of these models, the primary star will eventually become a Type II SN progenitor encompassed by a hydrogen envelope.

Figure 4: Time evolution of mass-loss rates in binary systems with \(M_{1}=14.4\,M_{\odot},15.0\,M_{\odot}\), and \(16.25\,M_{\odot}\). The secondary star’s mass and the orbital period are set to \(M_{2}=13.5\,M_{\odot}\) and \(P_{\rm orb}=1900\) days, respectively.

We can see that the model with \(M_{1}=15\,M_{\odot}\) experiences a drastic increase in the mass-loss rate at \(t_{\rm lb}\sim\) a few \(\times 10^{3}\) years. Also, the model with \(M_{1}=16.2\,M_{\odot}\) shows multiple episodes in the mass-loss history around \(t_{\rm lb}\sim 3\times 10^{4}\), \(10^{4}\) and \(10^{2}\) years. On the other hand, in the model with \(M_{1}=14.4\,M_{\odot}\), no mass-loss enhancement is observed even though the same orbital period and secondary mass are employed. In the evolutionary stage of a massive star after the carbon burning, the stellar radius fluctuates due to single stellar activity, such as shell flashes and core convection. When the star is evolving as a single star, these activities would not be reflected in the mass-loss history unless they induce mass loss from the surface of the star (e.g., Quataert and Shiode, 2012; Shiode and Quataert, 2014). This is true even when the binary system is considered and the Roche lobe radius is sufficiently larger than the primary stellar radius (the case of \(M_{1}=14.4\,M_{\odot}\)). However, when the Roche lobe radius becomes comparable to the stellar radius of the primary star, the radial variation of the star would be reflected in the mass loss due to Roche lobe overflow, and thus a complicated evolution of the mass-loss rate would arise. We propose that single stellar activities that involve the expansion of the stellar radius but do not induce mass eruptions may also contribute to mass loss if the star is in a binary system. Thus, understanding the massive star's evolution after the carbon-burning phase would be important even for seeking the origin of the dense-CSM structure around SN progenitors.

The mass-loss histories drawn in Figure 4 are informative even for the formation of the resultant CSM structure. As examples, we take up the models with \(M_{1}=15.0\,M_{\odot}\) and \(M_{1}=16.2\,M_{\odot}\). First, we can see that the mass-loss rate seen in the model with \(M_{1}=15.0\,M_{\odot}\) is elevated to \(\sim 10^{-3}\,M_{\odot}\,\mathrm{yr}^{-1}\) from a few thousand years before the explosion.
This is comparable to those previously inferred for Type IIn SNe (e.g., Kiewe et al., 2012; Moriya et al., 2014), and in good agreement with the discussion in Ouchi & Maeda (2017). If we consider the slow velocity of the CSM (\(V_{w}=10\,\mathrm{km}\,\mathrm{s}^{-1}\)), the CSM in this model is expected to have a cliff-like structure; a steep density drop at the radius of \(\sim 10^{17}\,\mathrm{cm}\). This lengthscale may be consistent within a factor with the implication of Katsuda et al. (2016), where some Type IIn SNe are suggested to be encompassed by a torus-like CSM truncated at around several times of \(10^{16}\,\mathrm{cm}\). Another is the model with \(M_{1}=16.2\,M_{\odot}\), whose synthesized CSM structure is displayed in Figure 5. The mass-loss history of this model is characterized by multiple enhancements of the mass-loss rate. The last mass-loss episode happens at \(t_{\mathrm{lb}}\sim 50\,\mathrm{years}\), and the gas expelling from the binary system at that episode would reach out to \(\sim 10^{15}\,\mathrm{cm}\). We remark that this lengthscale is compatible with that inferred for the confined CSM, implicated in the infant Type II SNe (Yaron et al., 2017). Yet, we note a caveat that our model is not quantitatively tuned to the observational implication of SN 2013fs, and the discussion should be limited qualitatively. Nevertheless, we advocate that the binary interaction episode arising \(\lesssim 100\,\mathrm{years}\) before the explosion can leave an inhomogeneous structure in the vicinity of the SN progenitor, and suggest the binary interaction in the final evolutionary phase as a possible explanation of the signature of the CSM interaction observed in the infant SNe. ## 5 Summary In this paper, we have calculated mass-loss rates from a massive stars' binary system on the basis of non-conservative mass transfer during Roche lobe overflow. We demonstrate that the mass-transfer rate can vary with time according to the expansion or contraction of the stellar radius. If the Roche lobe overflow begins immediately after the exhaustion of the nuclear burning fuel in the core (e.g., hydrogen or helium), the mass-transfer rate can be intensively enhanced. In the framework of the non-conservative mass transfer in the binary, the time variability of the mass-transfer rate would be directly associated with the spatial inhomogeneity of the CSM density structure. By making use of hydrodynamics simulations we also showed that the abovementioned enhancement of the mass-transfer rate can emerge as the shell-like or cliff-like structures in the CSM. Particularly if the wind velocity is as slow as orders of \(10\,\mathrm{km}\,\mathrm{s}^{-1}\), the characteristic radii in these CSM structures fall on \(\sim 10^{17}\,\mathrm{cm}\) and could contribute to the observational signatures traced by long-term observations of SNe. We also discussed the dependence of the primary mass on the resulting binary evolutions and CSM structures. We found the stellar parameter set that can reproduce the significant increase in the mass-transfer rate, which could be consistent with implications for Type IIn SNe. Another parameter set is found that reproduces the mass-loss episode in the last 100 years before the explosion. This allows us to synthesize the inhomogeneous structure with the length and density scales compatible with the confined CSM proposed for infant Type II SNe, although fine tuning of the binary parameters would be required. 
Our binary evolution models would highlight a possible scenario that explains the diversity of the CSM morphology inferred for SN progenitors.

Software: MESA revision 15140 (Paxton et al., 2011, 2013, 2015, 2018, 2019), PLUTO (Mignone et al., 2007, 2012)

The authors thank Ryoma Ouchi and Keiichi Maeda for providing the original calculation code, and Yudai Suwa, Kenta Hotokezaka, and Daisuke Toyouchi for their helpful comments. This work has been supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI grants 20H01904 (TM), 21K13964, and 22KJ0528 (RS). Numerical simulations in this study were carried out on the supercomputer cluster Yukawa-21 at the Yukawa Institute for Theoretical Physics.

Figure 5: The synthesized CSM density structure for the model with \(M_{1}=16.2\,M_{\odot},M_{2}=13.5\,M_{\odot}\), and \(P_{\mathrm{orb}}=1900\,\mathrm{days}\), assuming the wind velocity \(V_{w}=10\,\mathrm{km}\,\mathrm{s}^{-1}\). The gray shaded region illustrates the possible parameter space for the CSM properties inferred for SN 2013fs (Yaron et al., 2017).

## Appendix A Lifetime of the star in its final stage

To simplify the nucleosynthesis network calculations, we have approximated the carbon depletion time as the time when the primary star undergoes the core collapse (see Section 2.1). To check the validity of this assumption, we calculate the further evolution of a star with \(M_{\rm ZAMS}=12\,M_{\odot}\) after the carbon depletion in the core. Figure A1 shows the result of the calculation up to the central carbon depletion stage adopted in this study (dashed black line) and up to the time of the core collapse (solid red line). The 'central carbon depletion stage' is defined as when the central carbon abundance depletes below \(10^{-6}\), as described in the text, and 'core collapse' is defined as when the infall velocity in the central iron core exceeds \(10^{7}\) cm s\({}^{-1}\). The horizontal axis shows the logarithmic scale of the look-back time of the star measured from the core collapse. As can be seen in Figure A1, the calculation up to the central carbon depletion stage is stopped at a central temperature of \(\log T_{\rm c}=9.1\) (i.e. \(T_{\rm c}\approx 1\times 10^{9}\) K), while the calculation up to the core collapse stage reaches a central temperature of \(\log T_{\rm c}=9.8\) (i.e. \(T_{\rm c}>5\times 10^{9}\) K). Figure A1 shows that the remaining time between the central carbon depletion phase and the core collapse is \(\sim 10\) years. Our discussion in the main text focuses on the stellar properties with \(t_{\rm lb}\gg 10\) years. This indicates that it is reasonable to approximate the central carbon depletion phase as the moment of the core collapse.

## Appendix B Comparison between the analytic and numerical solution of the CSM structure

While we performed the numerical simulations of the CSM formation in Section 3.2, it is also possible to rely on the analytical description of the wind structure. Piro and Lu (2020) presented a framework describing the hydrodynamical structure of the CSM originating from a time-dependent mass-loss history. Comparing these two methods allows us to confirm the application limit of the analytical treatment in the context of CSM formation. Figure A2 shows the comparison between our numerical model and the analytical treatment with the binary parameters of \(M_{1}=12\,M_{\odot},M_{2}=10.8\,M_{\odot},P_{\rm orb}=1300\) days. It is observed that these two treatments are compatible with each other within \(r\sim 10^{18}\) cm.
The corresponding look-back time of this radius is \(\sim 0.1\) Myr, indicating that the time variation of the mass-loss rate after the helium exhaustion in the core can be directly associated with the density variation of the CSM. In this model, actually, the mass-loss rate is greatly enhanced immediately after the helium exhaustion, and this feature is clearly appearing as the shell-like structure around several times \(10^{17}\,\mathrm{cm}\). We confirm the validity of the analytical treatment of time-dependent wind propagation in Piro and Lu (2020). In the outer region from \(\sim 10^{18}\,\mathrm{cm}\), the external pressure from the static interstellar medium prevents the propagation of the wind from the progenitor (for the details see e.g., Weaver et al., 1977; Matsuoka et al., 2022).
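To make the analytic reconstruction described above concrete, the following sketch applies the same free-expansion logic in its simplest form: material lost at look-back time \(t_{\rm lb}\) is placed at \(r\simeq V_{w}t_{\rm lb}\) with the density of equation (5) evaluated at that radius, ignoring the pressure effects that become important beyond \(\sim 10^{18}\) cm. The piecewise mass-loss history used here is a schematic stand-in for the MESA output (a steady \(10^{-6}\,M_{\odot}\,{\rm yr}^{-1}\) RSG-like wind plus one enhanced episode), not the actual model of this paper.

```python
import numpy as np

Msun, yr = 1.989e33, 3.156e7      # cgs
V_w = 10.0e5                      # assumed wind speed: 10 km/s in cm/s

def mdot_history(t_lb_yr):
    """Schematic mass-loss history [Msun/yr] vs look-back time; a stand-in for the MESA output."""
    mdot = np.full_like(t_lb_yr, 1.0e-6)                    # steady RSG-like wind
    mdot[(t_lb_yr > 1.0e4) & (t_lb_yr < 2.0e4)] = 1.0e-2    # one enhanced episode (e.g. onset of RLOF)
    return mdot

t_lb = np.logspace(1.0, 6.0, 400)                           # look-back time in years
r = V_w * t_lb * yr                                         # free expansion: r ~ V_w * t_lb
rho = mdot_history(t_lb) * Msun / yr / (4.0 * np.pi * r**2 * V_w)   # equation (5) applied at each r

for i in (100, 250, 350):
    print(f"r = {r[i]:.2e} cm   rho = {rho[i]:.3e} g cm^-3")
```

With \(V_{w}=10\,\mathrm{km\,s^{-1}}\), an episode at \(t_{\rm lb}\sim 10^{4}\) years maps onto a density enhancement at a few times \(10^{17}\) cm, in line with the shell radii quoted in Section 3.2.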
2306.00368
Dynamical Inflation Stimulated Cogenesis
We propose a minimal setup that realises dynamical inflection point inflation, and, using the same field content, generates neutrino masses, a baryon asymmetry of the universe, and dark matter. A dark $SU(2)_D$ gauge sector with a dark scalar doublet playing the role of inflaton is considered along with several doublet and singlet fermions sufficient to realise multiple inflection points in the inflaton potential. The singlet fermions couple to SM leptons and generate neutrino masses via the inverse seesaw mechanism. Those fermions also decay asymmetrically and out of equilibrium, generating a baryon asymmetry via leptogenesis. Some of the fermion doublets are dark matter, and they are produced via inflaton decay and freeze-in annihilation of the same fermions that generate the lepton asymmetry. Reheating, leptogenesis, and dark matter are all at the TeV scale.
Debasish Borah, Arnab Dasgupta, Daniel Stolarski
2023-06-01T06:00:38Z
http://arxiv.org/abs/2306.00368v2
# Dynamical Inflation Stimulated Cogenesis ###### Abstract We propose a minimal setup that realises dynamical inflection point inflation, and, using the same field content, generates neutrino masses, a baryon asymmetry of the universe, and dark matter. A dark \(SU(2)_{D}\) gauge sector with a dark scalar doublet playing the role of inflaton is considered along with several doublet and singlet fermions sufficient to realise multiple inflection points in the inflaton potential. The singlet fermions couple to SM leptons and generate neutrino masses via the inverse seesaw mechanism. Those fermions also decay asymmetrically and out of equilibrium, generating a baryon asymmetry via leptogenesis. Some of the fermion doublets are dark matter, and they are produced via freeze-in annihilation of the same fermions that generate the lepton asymmetry. Reheating, leptogenesis, and dark matter are all at the TeV scale. Introduction The cosmic microwave background (CMB) observations show that our universe is homogeneous and isotropic on large scales up to an impressive accuracy [1; 2]. Such observations lead to the so called horizon and flatness problems which remain unexplained in the description of standard cosmology. The theory of cosmic inflation that posits a phase of rapid accelerated expansion in the very early universe was proposed in order to alleviate these problems [3; 4; 5]. While there are several viable inflationary models discussed in the literature, not all of them are motivated from particle physics point of view. Here we consider the approach where the inflation potential arises from Coleman-Weinberg corrections to scalar potential in a well-motivated particle physics setup [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] naturally leading to a low scale inflation. In particular, we build on dynamical inflection point inflation [16] which has an inflection point in the inflaton potential due the vanishing of the scalar quartic \(\beta\)-function. This zero easily occurs if the inflaton has both gauge and Yukawa couplings. The Standard Model (SM) of particle physics, while extremely successful, has several shortcomings that make it unable to describe our universe. In particular, the minimal renormalizable Standard Model cannot accommodate neutrino masses, does not have a dark matter (DM) candidate, and cannot give rise to a sufficiently large baryon asymmetry of the universe (BAU). All three of these are now very well established and require physics beyond the Standard Model. In this work we use the ingredients _already provided_ by the dynamical inflation point inflection scenario to solve all of these problems in a unified framework. Our model consists of a dark \(SU(2)_{D}\) gauge sector with several vectorlike fermion doublets and singlets. There is a doublet scalar responsible for spontaneous symmetry breaking of the dark gauge symmetry that also plays the role of inflaton. The number of dark sector fields not only dictates the shape of the inflaton potential via Coleman-Weinberg corrections, but can also give rise to dark matter and baryon asymmetry of the universe. The fermion singlets couple to some doublet fermions via the inflaton field to become massive and decouple from the beta functions at low energy. The singlet fields can also couple to the lepton doublets via the SM Higgs doublet, which after the introduction of a bare Majorana mass term for dark sector fermions, can lead to the origin of light neutrino masses via the inverse seesaw mechanism. 
Interestingly, this bare Majorana mass term of dark sector fermions has a strict upper bound from the requirement of successful Coleman-Weinberg inflation, thereby connecting the seesaw scale with inflationary dynamics. While the heavy dark sector fermions dictate the origin of neutrino mass and BAU via leptogenesis [20], the remaining dark sector fermions which do not couple to the inflaton field can remain light. They do contribute to the \(\beta\)-functions down to their mass threshold and also make up the dark matter of the universe. Due to the relative heaviness of the dark \(SU(2)\) gauge boson that provides the portal from the dark matter to the right handed neutrinos, the relic abundance of DM is generated via the freeze-in mechanism [21]. Finally, the SM Higgs also gets is mass from the inflaton vacuum expectation value. This paper is organised as follows. In section II, we introduce the model and detail the inflationary dynamics. In section III, we briefly discuss the origin of light neutrino masses via the inverse seesaw mechanism. In section IV we discuss the details of cogenesis of the baryon asymmetry of the universe and the dark matter abundance. We conclude in section V, and we review the standard inflation formulae that we have used in appendix A. The model In this section we review the mechanism of dynamical inflection point inflation and describe the ingredients of the model. We seek an inflaton potential can arise from a field theory, has sub-Planckian field excursion, and satisfies the constraints from observations of the CMB. Our starting point is inspired by the Coleman-Weinberg potential [22], \[V(\Phi)\sim\lambda(\Phi)\Phi^{4}, \tag{1}\] which has long been studied as a possible inflation model [5]. This potential has a plateau that easily gives rise to slow roll as well as a global minimum that the field will naturally roll towards. For sub-Planckian field excursions, the scalar spectral index \(n_{s}\) is controlled by the second derivative of the potential (see Eqs. (10) and (11)). For a single scalar field we have \(\lambda(\Phi)\sim\log(\Phi/f)\), but such a model gives values of \(n_{s}\) that are smaller than the observed values [6]. Therefore, one must engineer a smaller second derivative at the point in field space where the cosmological scales leave the horizon. This naturally occurs in potentials where the second derivative can vanish, namely those that have an inflection point. Potentials of the type in Eq. (1) have inflection points if the \(\beta\)-function for the quartic coupling, \(\beta_{\lambda}\), has a zero [16]. This can occur if \(\Phi\) has a gauge charge and couples to fermions. In order to get a suitable inflaton potential, one needs two zeros in \(\beta_{\lambda}\) that are parameterically separated. This ensures that the inflection point is sufficiently far from the minimum so that there can be 60 e-folds of inflation. Given the above considerations, the field content for the model is shown in Table 1, which very similar to the model in [16]. There is a dark \(SU(2)\) gauge symmetry, and we impose a global \(Z_{3}\times Z_{2}\) symmetry.1 The \(Z_{2}\) symmetry ensures stability of the dark matter candidate, and we will add small soft breaking of the \(Z_{3}\) in section III. Here, \(\Phi\) plays the role of the inflaton, the \(\psi_{i}\) are dark matter, and linear combinations of the \(\psi_{1}\) and \(\chi\) are the right handed neutrinos that give neutrinos mass and generate the observed baryon asymmetry via leptogenesis. 
Footnote 1: \(Z_{3}\times Z_{2}\) is isomorphic to \(Z_{6}\), but because of the breaking pattern, using the description of the former is simpler. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \((2s+1)\) & \(n_{f}\) & \(SU(2)_{D}\) & \(Z_{3}\times Z_{2}\) \\ \hline \(\psi_{iL}\) & 2 & \(N_{\psi}\) & \(\mathbf{2}\) & \((1,-1)\) \\ \(\psi_{iR}\) & 2 & \(N_{\psi}\) & \(\mathbf{2}\) & \((1,-1)\) \\ \(\psi_{1L}\) & 2 & 1 & \(\mathbf{2}\) & \((\omega,1)\) \\ \(\psi_{1R}\) & 2 & 1 & \(\mathbf{2}\) & \((\omega^{2},1)\) \\ \(\chi_{iL}\) & 2 & 2 & \(\mathbf{1}\) & \((\omega^{2},1)\) \\ \(\chi_{iR}\) & 2 & 2 & \(\mathbf{1}\) & \((\omega,1)\) \\ \(\Phi\) & 1 & 1 & \(\mathbf{2}\) & \((1,1)\) \\ \hline \end{tabular} \end{table} Table 1: Particle content beyond the SM. Here \(s\) is the particle spin, \(n_{f}\) is the number of flavors, and all fermions are written in Weyl notation. The factor \(\omega=e^{2i\pi/3}\) is used to represent \(Z_{3}\) group elements. The Lagrangian for the dark sector necessary for the inflection point inflation is given as: \[-\mathcal{L}_{\text{Dark}} \supset\left(\sum_{i=1}^{2}y_{iA}\,\overline{\psi_{1L}}\,\Phi\,\chi _{iR}+\sum_{i=1}^{2}y_{iB}\,\overline{\psi_{1L}}\,\widetilde{\Phi}\,\chi_{iR}+ \sum_{i=1}^{2}y^{\prime}_{iA}\,\overline{\psi_{1L}}\,\Phi\,\chi_{iL}^{c}+\sum_ {i=1}^{2}y^{\prime}_{iB}\,\overline{\psi_{1L}}\,\widetilde{\Phi}\,\chi_{iL}^{c}\right.\] \[+\sum_{i=1}^{2}\tilde{y}_{iA}\,\overline{\chi_{iL}}\,\Phi^{\dagger }\,\psi_{1R}+\sum_{i=1}^{2}\tilde{y}_{iB}\,\overline{\chi_{iL}}\,\widetilde{ \Phi}^{*}\,\psi_{1R}+\sum_{i=1}^{2}\tilde{y}^{\prime}_{iA}\,\overline{\chi_{iL }}\,\Phi^{\dagger}\,\psi_{1L}^{c}+\sum_{i=1}^{2}\tilde{y}^{\prime}_{iB}\, \overline{\chi_{iL}}\,\widetilde{\Phi}^{*}\,\psi_{1L}^{c}\] \[+\left.\sum_{i=2}^{N_{\psi}+1}M\,\overline{\psi_{iL}}\,\psi_{iR} +\text{h.c.}\right)+\frac{\lambda_{\Phi}}{4}|\Phi|^{4}+\frac{\lambda_{H}}{4}| H|^{4}+\lambda_{\Phi H}|\Phi|^{2}|H|^{2}, \tag{2}\] where \(\widetilde{\Phi}=i\sigma^{2}\Phi^{*}\) and \(\widetilde{\Phi^{*}}=i\Phi^{\dagger}\sigma^{2}\). This is the most general renormalizable Lagrangian for the dark sector fields consistent with the global and gauge symmetries. The Coleman-Weinberg-type inflation scenario requires that the mass term for \(\Phi\) be absent or very small, and we have thus not written it in Eq. (2). In the same spirit, we also do not write a bare mass term for the Higgs. The only hard mass term is that for the \(\psi_{i}\), the dark matter states. For simplicity we assume that \(M\) is proportional to the identity in \(\psi_{i}\) flavour space. Mass terms for \(\psi_{1}\) and \(\chi_{i}\) are forbidden by the \(Z_{3}\) symmetry. Those fields do have Yukawa couplings to the inflaton \(\Phi\). The \(\psi_{1L}\) and \(\psi_{1R}\) are two Weyl doublets, while there are two flavours of \(\chi_{iL}\) and \(\chi_{iR}\). Therefore, when \(\Phi\) settles to its non-zero minimum, these states will form four Dirac fermions whose mass is proportional to \(\langle\Phi\rangle\). To simplify the analysis, we take the following ansatz for the 16 different Yukawa couplings: \[y^{\prime}=\tilde{y}^{\prime}=0\,,\,\,\,y_{1A}=y_{2B}=\tilde{y} _{1A}=\tilde{y}_{2B}=0\] \[y_{2A}=\tilde{y}_{2A}\equiv y\,,\,\,\,y_{1B}=\tilde{y}_{1B}\equiv \tilde{y}\,,\,\,\,y\gg\tilde{y} \tag{3}\] This is equivalent to choosing the Yukawa matrix to be diagonal in a certain basis. 
There are two Dirac fermions which we call \(N_{h}\) with mass \(y\langle\Phi\rangle\), and two much lighter Dirac fermions with mass \(\tilde{y}\langle\Phi\rangle\) which we call \(N_{l}\). The \(N_{l}\) are the right handed neutrinos and will have masses comparable to the reheating temperature after inflation. The \(N_{h}\), which couple much more strongly to the inflaton, will have important loop contributions to the inflaton potential. As this is a Coleman Weinberg-type model, we also suppress the bare mass term for the Higgs. This mass is then generated dynamically by the \(\lambda_{\Phi H}\) operator in the Lagrangian. In order to get the correct Higgs mass, we require \[\lambda_{\Phi H}\sim-\frac{m_{H}^{2}}{\langle\Phi\rangle^{2}} \tag{4}\] which, as we will see, means that we will have \(\lambda_{\Phi H}\sim 10^{-17}\). As such, it will play no role in the phenomenology besides dynamical generation of the Higgs mass parameter. At tree-level, the inflaton potential is simply \(\lambda_{0}\Phi^{4}\), but of course the loop corrections are necessary. At one loop, the \(\beta\)-function for the quartic is given by: \[\beta_{\lambda_{\Phi}}=\kappa\left(\frac{9}{8}g^{4}-4y^{4}-4\tilde{y}^{4}-(9g^ {2}-8y^{2}-8\tilde{y}^{2})\lambda_{\Phi}+2\lambda_{\Phi H}^{2}+24\lambda_{ \Phi}^{2}\right), \tag{5}\] where \(\kappa=(16\pi^{2})^{-1}\), \(g\) is the \(SU(2)_{D}\) gauge coupling, and \(y\) and \(\tilde{y}\) are the Yukawa couplings (see Eq. (3)), and we have taken \(\tilde{y}\ll y\). For \(\beta_{\lambda_{\Phi}}\) to have zeros, we need \(y\simeq g\). Furthermore, in order to get the correct amplitude for the matter power spectrum, \(\lambda_{\Phi}\) must be very small, and we therefore use the parametric regime \(y\simeq g\gg\lambda_{\Phi}\). From Eq. (4), we also have that \(\lambda_{\Phi H}\lesssim\lambda_{\Phi}\). Therefore, we can neglect the terms in \(\beta_{\lambda_{\Phi}}\) that depend on either of the scalar quartics (and we can also ignore terms proportional to \(\tilde{y}\)). The running of \(\lambda_{\Phi}\) is thus controlled by the running of \(g\) and \(y\) whose \(\beta\)-functions are given by \[\beta_{g} =-\kappa\left(\frac{43}{6}-\frac{2}{3}n_{f}\right)g^{3},\] \[\beta_{y} =\kappa\left(\frac{7}{2}y^{3}+\frac{1}{2}y\tilde{y}^{2}-\frac{9} {4}g^{2}y\right), \tag{6}\] where \(n_{f}\) is the number of Dirac fermion doublets kinematically accessible at a given scale.2 Footnote 2: We include the \(\tilde{y}\) dependence in these equations for completeness, but can ignore its effects. We construct a model with two separate zeros for \(\beta_{\lambda_{\Phi}}\) with a massive threshold of fermions [16] such that above the threshold, the gauge coupling has \(\beta_{g}>0\), while below the threshold the theory is asymptotically free, \(\beta_{g}<0\). If at the threshold \(g\lesssim y\) and \(g\) runs faster than \(y\) both above and below the threshold, then this will achieve the desired behaviour for \(\lambda_{\Phi}\) as a function of energy scale. In the Lagrangian given in Eq. (2), the threshold is given by \(M\). As shown in Table 1, we choose \(n_{f}=1\) below the scale \(M\), and \(n_{f}=N_{\psi}+1\) above the scale, with the requirement that \(N_{\psi}\geq 10\) so that \(\beta_{g}>0\) above the scale \(M\). We can now integrate the one-loop \(\beta\)-functions to get leading-log solutions to the running couplings in the limit where terms containing \(\lambda_{\Phi}\) and \(\lambda_{\Phi H}\) in Eq. (5) can be ignored. 
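As a rough numerical cross-check of this step, one can integrate Eqs. (5) and (6) directly instead of using the leading-log solution derived next. The sketch below performs a simple Euler integration with the \(\tilde{y}\) and scalar-quartic terms dropped from the right-hand sides; the boundary value \(g_{0}=0.3\), the choice \(\lambda_{\Phi}(\phi_{0})=0\), and the ratio \(M/\phi_{0}=10^{3}\) are illustrative assumptions, not the benchmark inputs of the paper.

```python
import numpy as np

kappa = 1.0 / (16.0 * np.pi**2)
N_psi = 10                                   # number of heavy doublet flavours at the threshold M

def betas(g, y, lam, above_M):
    """One-loop beta functions of Eqs. (5)-(6), neglecting tilde-y and lambda_PhiH."""
    n_f = N_psi + 1 if above_M else 1
    beta_g = -kappa * (43.0 / 6.0 - 2.0 / 3.0 * n_f) * g**3
    beta_y = kappa * (3.5 * y**3 - 2.25 * g**2 * y)
    beta_l = kappa * (1.125 * g**4 - 4.0 * y**4
                      - (9.0 * g**2 - 8.0 * y**2) * lam + 24.0 * lam**2)
    return beta_g, beta_y, beta_l

g = 0.30                                     # assumed gauge coupling at phi0
y = (9.0 / 32.0)**0.25 * g                   # beta_lambda ~ 0 at phi0 (the condition derived below)
lam = 0.0                                    # boundary quartic set to zero; its true tiny value is fixed by the CMB amplitude
t_M = np.log(1.0e3)                          # assumed threshold, M / phi0 = 10^3
ts = np.linspace(0.0, 2.0 * t_M, 20000)      # t = ln(mu / phi0)
dt = ts[1] - ts[0]

hist = []
for t in ts:                                 # forward-Euler RG evolution across the threshold
    bg, by, bl = betas(g, y, lam, above_M=(t > t_M))
    g, y, lam = g + bg * dt, y + by * dt, lam + bl * dt
    hist.append((t, g, y, lam))

i_M = int(np.argmin([abs(t - t_M) for t, *_ in hist]))
print("at the threshold M :", hist[i_M])
print("at mu = 10^6 phi0  :", hist[-1])
# lambda_Phi runs downward above phi0; the heavy threshold at M adds a positive
# contribution that partially compensates this, as in the leading-log solution of Eq. (9).
```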
If we define our reference scale \(\phi_{0}\) as the scale where \(\beta_{\lambda_{\Phi}}=0\), then at that scale we have3 \[y_{0}=\frac{\sqrt{3}}{2^{5/4}}g_{0}\,, \tag{8}\] where the \(0\) subscripts mean those couplings are evaluated at the scale \(\phi_{0}\). Footnote 3: This is the one-loop condition for the inflection point and will get modified at higher order. For example, at two loops we have \[y_{0}=\frac{\sqrt{3}}{2^{5/4}}g_{0}\left(1+\frac{620+33\sqrt{2}}{96}\kappa g_{0}^{2}\right). \tag{7}\] Since \(\kappa g_{0}^{2}\ll 1\), the one-loop expression is a good approximation. As discussed above, the condition in Eq. (8) is satisfied at two different points in field space, so we take our boundary for RGE running to be the lower one with \(\phi_{0}<M\). The solution for the quartic coupling at leading log is then given by \[\lambda_{\Phi}(\Phi)=\lambda_{0}+\kappa^{2}g_{0}^{6}\left(-b_{\lambda}\ln^{2}\frac{\Phi}{\phi_{0}}+\Theta(\Phi-M)\frac{3}{2}N_{\psi}\ln^{2}\frac{\Phi}{M}\right)\,, \tag{9}\] where \(\Theta(x)\) is the Heaviside step function, \(g_{0}\) and \(\lambda_{0}\) are the values of those couplings at the scale \(\phi_{0}\), and we have used Eq. (8) to eliminate the boundary value of \(y\). Here \(b_{\lambda}=9(68+21\sqrt{2})/64\approx 13.7\). The above solution is a good approximation if \(\lambda_{\Phi},\lambda_{\Phi H}\ll g^{2},y^{2}\), which is satisfied in our phenomenologically viable parameter space as we will see.

We can now plug \(\lambda_{\Phi}\) into our potential in Eq. (1). We follow [16] and use the following phenomenological form of the potential: \[V(\Phi)=-\frac{a}{4}\Phi^{4}\left[1+b\ln^{2}\left(\frac{\Phi}{\phi_{0}}\right)\right.-\left.c\ln^{2}\left(\frac{\Phi}{M}\right)\Theta(\Phi-M)\right]+aV_{0}\;. \tag{10}\] We have added a cosmological constant term \(V_{0}\) in order for the minimum of the potential to be at (nearly) zero energy. As long as \(a>0\), \(c>b>0\), and \(M>\phi_{0}\), this potential will have a broad plateau around \(\phi\sim\phi_{0}\) and then a stable minimum with \(\langle\Phi\rangle>M>\phi_{0}\). The phenomenological parameters are controlled by the field theory parameters: \(a\) by \(\lambda_{0}\), \(b\) by \(g_{0}\), and \(c\) by \(N_{\psi}\). We note that the Heaviside function in the potential is a direct consequence of the heavy threshold at the scale \(M\). In order to have a successful inflation scenario there are two additional restrictions on \(b\). First, the potential must not develop any local minima, to ensure that the inflaton rolls smoothly towards the global minimum: \(V^{\prime}(\phi)<0\) for \(\phi<\langle\Phi\rangle\). This imposes \(b<16\). Second, at field values \(\phi\sim\phi_{0}<M\), the potential should have inflection points, \(V^{\prime\prime}=0\), which means that \(b\geq 144/25\approx 5.7\). The inflaton potential is shown in Fig. 1, with a close up of the region where cosmological scales enter the horizon in the right panel. We can then find the parameters of the inflaton potential, including the point on the potential where cosmological scales enter the horizon, \(\phi_{i}\), such that 60 e-folds can be achieved and the observations of \(n_{s}\) and \(A_{s}\) can be matched (see appendix A for more details). As described in [16], there are two solutions, one at lower field values than the inflection points and one at higher field values. These in turn correspond to different signs of the running of the spectral index \(\alpha\), as shown in the right panel of Fig. 1.
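To make the shape of Eq. (10) concrete, the short sketch below evaluates the potential for hand-picked parameters obeying \(144/25\leq b<16\), \(c>b\) and \(M>\phi_{0}\) (the values \(a=1\), \(b=8\), \(c=12\), \(M=3\phi_{0}\) are illustrative choices, not the benchmark points of Table 2) and locates the inflection points and the global minimum numerically.

```python
import numpy as np

# Illustrative parameter choices (not the benchmark values of Table 2); phi is in units of phi0
a, b, c = 1.0, 8.0, 12.0          # satisfy 144/25 <= b < 16 and c > b
phi0, M, V0 = 1.0, 3.0, 0.0       # V0 = 0 here; in the model a*V0 lifts the minimum to (nearly) zero energy

def V(phi):
    """Phenomenological inflaton potential, Eq. (10)."""
    heavy = c * np.log(phi / M)**2 * (phi > M)      # Heaviside threshold term
    return -0.25 * a * phi**4 * (1.0 + b * np.log(phi / phi0)**2 - heavy) + a * V0

phi = np.geomspace(0.3, 1.0e3, 200001)
v = V(phi)
v2 = np.gradient(np.gradient(v, phi), phi)          # numerical second derivative V''

below = phi < M
flips = np.where(np.diff(np.sign(v2[below])))[0]    # V'' = 0 crossings near phi0
print("inflection points near phi0:", phi[below][flips])
print("global minimum <Phi> ~", phi[np.argmin(v)], "(in units of phi0)")
```

For these toy parameters the two inflection points sit just below \(\phi_{0}\) and the minimum lies well beyond \(M\), which is the qualitative structure sketched in Fig. 1; the actual benchmark values are fixed by matching \(n_{s}\), \(A_{s}\), and 60 e-folds as described above.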
Figure 1: The inflaton potential. The left panel shows the full potential, while the plot on the right is a close up of the region near the inflection points. The plots are scaled such that they look the same for all allowed parameters. The right panel also shows the 1st (2nd) derivative of the potential dashed (dot-dashed), the location of the inflection points (black dots), and the points where cosmological scales enter the horizon (triangles) for the different benchmark points. The triangle on the left (right) is for BP1 (BP2) with negative (positive) \(\alpha\) (see Table 2).

Two detailed benchmarks are explored further in the following sections and are detailed in Table 2. The field content presented in this section, in addition to giving an attractive inflation model that is consistent with observations of the CMB, can also solve several of the problems associated with the Standard Model, as we now describe in the following sections.

## III Neutrino mass

The dynamical inflection point inflation requires the existence of fermions that are singlets under the dark and the SM gauge symmetries, which we have denoted \(\chi\). These fermions can couple to the SM leptons the same way that right-handed neutrinos do in many models. Unlike in typical seesaw models, the \(\chi\) fields also get a Dirac mass with the \(\psi_{1}\) fields via the vev of the inflaton. Ultimately these fields will still be responsible for giving the neutrinos mass. This neutrino mass generation resembles the inverse seesaw mechanism [23; 24; 25]. The right-handed-neutrino-like states will also participate in the cogenesis of the baryon asymmetry of the universe and dark matter, as described in detail in section IV.

In order to generate light neutrino masses, we consider mass and Yukawa terms involving the heavy fermions \(\chi,\psi_{1}\) that break the \(Z_{3}\) global symmetry. In order not to disturb the inflationary dynamics, the bare mass terms of these fermions are required to be smaller than the inflaton field values at the start of inflation. This requires such bare mass terms to be \(\leq\phi_{i}\sim\mathcal{O}(\text{MeV})\). Since the bare mass terms are required to be small, we consider the inverse seesaw realisation where the bare mass term arises only for the singlet fermion \(\psi_{1}\), which does not couple directly to the SM lepton doublets. The operators required are written as follows: \[-\mathcal{L}_{\nu}=(Y_{R})_{\alpha i}\overline{L}_{\alpha}\tilde{H}\chi_{iR}+(Y_{L})_{\alpha i}\overline{L}_{\alpha}\tilde{H}\chi_{iL}^{c}+\frac{1}{2}\mu\,\overline{\psi_{1L}}\psi_{1R}+\text{h.c.} \tag{11}\] Here \(\alpha\) is a lepton flavour index while \(i\) is a \(\chi\) flavour index.

We can analyze the fermion mass matrix after the inflaton and Higgs settle to their non-zero vacuum values. Using the ansatz of Eq. (3), the heaviest fermions are the \(\psi_{2}\) and \(\chi_{2}\), which get a Dirac mass of \(y\langle\Phi\rangle\) and can be integrated out. We can then write the mass matrix for the remaining fermions in the basis \(\{\nu_{\alpha},\chi_{1R},\chi_{1L},\psi_{1L_{1}},\psi_{1R_{1}}\}\) as follows: \[M_{f}=\begin{pmatrix}0&Y\langle H\rangle&Y\langle H\rangle&0&0\\ Y\langle H\rangle&0&0&\tilde{y}\langle\Phi\rangle&0\\ Y\langle H\rangle&0&0&0&\tilde{y}\langle\Phi\rangle\\ 0&\tilde{y}\langle\Phi\rangle&0&0&\mu\\ 0&0&\tilde{y}\langle\Phi\rangle&\mu&0\end{pmatrix}. \tag{12}\] We have used \(Y\sim Y_{R,L}\) to simplify the notation of the Yukawa couplings to the SM neutrinos. Since \(\langle\Phi\rangle\gg\langle H\rangle\), it is natural to consider the hierarchy \(\tilde{y}\langle\Phi\rangle\gg Y\langle H\rangle\). Additionally, for the reasons mentioned above, \(Y\langle H\rangle\gg\mu\). This mass matrix has the structure of the inverse seesaw mechanism, and we can thus give the neutrino mass4 \[m_{\nu}\approx\mu\left(\frac{Y\langle H\rangle}{\tilde{y}\langle\Phi\rangle}\right)^{2}\,. \tag{13}\] Footnote 4: The expression in Eq. (13) is correct for any relative size of \(m\) and \(M\) as long as both are parametrically larger than \(\mu\).

The heavy mass scale \(\tilde{y}\langle\Phi\rangle\) must be comparable to the reheating temperature for successful leptogenesis as well as for the production of dark matter (see section IV). Therefore it will be \(\sim\) TeV. If the neutrino Yukawa couplings \(Y\) are \(\mathcal{O}(1)\), then, in order to get the observed neutrino masses, we need the soft mass \(\mu\sim 1-100\) eV, which is sufficiently low that it will not affect the inflationary dynamics. In our benchmarks discussed below, \(\mu\) will be somewhat larger and the Yukawa couplings to the neutrinos smaller than \(\mathcal{O}(1)\). Even with the simplified parameterization we have chosen, all the neutrino flavour structure can be encoded in \(Y\), the Yukawa couplings of the SM neutrinos to the \(\chi\)'s. As there are two \(\chi\) states after integrating out those at \(y\langle\Phi\rangle\), there will still be three different mass eigenvalues. We leave a complete analysis of the neutrino flavour sector to future work.

## IV Cogenesis of lepton asymmetry and dark matter

The ingredients of the inflation model are also sufficient for the cogenesis of both a baryon asymmetry and dark matter. The right handed neutrinos that give the neutrinos mass as described in the previous section are also the key players for the cogenesis. A lepton asymmetry will be formed via the leptogenesis mechanism [20] through asymmetric decays of the lightest right handed neutrino. The baryon asymmetry will then be formed by the usual electroweak sphaleron process [26]. The dark matter production will proceed via freeze-in annihilation of the right handed neutrinos into dark matter states. We will now describe these processes in detail and solve the coupled Boltzmann equations for all the relevant states to show that the correct BAU and dark matter abundance can be achieved.

As noted previously, there are four approximately degenerate fermions with mass around the reheating temperature \(\sim\) TeV that are made of \(\psi_{1}\) and \(\chi\). They get a large Dirac mass given by \(\tilde{y}\langle\Phi\rangle\) (see Eqs. (2) and (3)) and also get a small splitting due to the symmetry breaking term \(\mu\) (see Eq. (11)). We denote these states \(N_{i}\), with \(N_{1}\) being the lightest. After inflation the universe will be reheated by inflaton decays. The main decay mode is \(\Phi\to N_{i}N_{i}\), shown in Fig. 2. Another possible mode is \(\Phi\to HH\), and the ratio of this mode to the dominant one is given by \[\frac{\Gamma(\Phi\to HH)}{\Gamma(\Phi\to NN)}\sim\frac{\lambda_{\Phi H}^{2}}{\lambda_{\Phi}\,\tilde{y}^{2}}\ll 1\,, \tag{14}\] where we have used Eq. (4) to estimate this ratio. After the inflaton decays, the universe will be reheated into a thermal bath that contains the \(N_{i}\) states and all states with large couplings to the \(N\)'s.
This will include all of the SM states via the Yukawa coupling in Eq. (11). The reheating temperature will be \(\sim\) TeV and is set by the inflaton width (see Eq. (16)). We will take the approximation that the reheating process is instantaneous. The \(N_{1}\) decay is dominated by the process \(N_{1}\to LH\) as shown in Fig. 3. The decay occurs out of equilibrium and will be asymmetric due to the interference of the tree and one-loop diagrams shown in Fig. 3. Primarily the asymmetry comes from the resonance [27], corresponding to the second diagram in Fig. 3. Earlier work on leptogenesis in inverse seesaw type scenarios can be found in [28; 29]. The CP asymmetry parameter corresponding to the CP violating decay of the RHN \(N_{i}\) (summing over all lepton flavours) is given by [27] \[\epsilon_{i} \equiv \frac{\Gamma_{(N_{i}\rightarrow\sum_{\alpha}L_{\alpha}H)}-\Gamma_{(N_{i}\rightarrow\sum_{\alpha}L_{\alpha}^{c}H^{\dagger})}}{\Gamma_{(N_{i}\rightarrow\sum_{\alpha}L_{\alpha}H)}+\Gamma_{(N_{i}\rightarrow\sum_{\alpha}L_{\alpha}^{c}H^{\dagger})}} \tag{15}\] \[= \sum_{j\neq i,j=1}^{4}\frac{\text{Im}[(h^{\dagger}h)_{ij}^{2}]}{(h^{\dagger}h)_{ii}(h^{\dagger}h)_{jj}}\frac{(M_{N_{i}}^{2}-M_{N_{j}}^{2})M_{N_{i}}\Gamma_{N_{j}}}{(M_{N_{i}}^{2}-M_{N_{j}}^{2})^{2}+M_{N_{i}}^{2}\Gamma_{N_{j}}^{2}}. \tag{16}\] Here we denote the Yukawa coupling of lepton flavour \(\alpha\) to \(N_{i}\) as \(h_{\alpha i}\), which can be found from the couplings in Eq. (11) by going to the physical mass basis of the heavy fermions. Now, parametrizing the Yukawa couplings as \(h_{\alpha i}=y_{\alpha i}e^{-i\phi_{i}}\) and considering the resonant limit \(\Delta M_{ij}\equiv M_{N_{i}}-M_{N_{j}}\sim\Gamma_{N_{j}}\), we can approximate the CP asymmetry parameter as \[\epsilon_{i}\sim\frac{3}{2}\sin(\phi),\quad\phi_{i}-\phi_{j}=\phi. \tag{17}\] If \(\Delta M\) and \(\Gamma\) are not comparable, then there is additional suppression by \(\sim\text{Min}(\Delta M/\Gamma,\,\Gamma/\Delta M)\). Our benchmarks described below have large values of \(\epsilon\) and require the parameters to be in the resonance regime. There are also regions of parameter space where the model is viable with \(\epsilon\ll 1\), where there is freedom in the interplay of \(\Delta M\), \(\Gamma\) and \(\phi\). When the lepton asymmetry is generated, some of the asymmetry will be converted to a baryon asymmetry via the electroweak sphaleron process [26]. This process conserves \(B-L\) Figure 3: Process responsible for leptogenesis. Figure 2: Process responsible for reheating. and it is fast for temperatures above \(T\sim 130\) GeV, but exponentially suppressed for lower temperatures. The equilibrium value of the baryon asymmetry is given by \[Y_{B}\simeq a_{\rm sph}\,Y_{B-L}=\frac{8\,N_{F}+4\,N_{H}}{22\,N_{F}+13\,N_{H}}\,Y_{B-L}\,, \tag{18}\] where the sphaleron conversion factor \(a_{\rm sph}\), given in [30], depends upon the number of fermion generations (\(N_{F}\)) and the number of Higgs doublets (\(N_{H}\)) which transform under the \(SU(2)_{L}\) gauge symmetry of the SM. Since we do not have any additional \(SU(2)_{L}\) multiplets in our model, we have \(N_{F}=3\,,N_{H}=1\) and \(a_{\rm sph}=28/79\). The production of dark matter also originates from the \(N_{i}\) states. The dark matter consists of the \(\psi_{i}\) shown in Table 1. There are \(N_{\psi}\) generations that are stable due to being the only states charged under the unbroken \(Z_{2}\) symmetry. 
These states, however, do not couple to the SM directly, nor do they have any tree level production (or annihilation) channels from (to) the SM bath particles. The DM does have \(SU(2)_{D}\) gauge interactions, but the corresponding gauge bosons have mass \(g\langle\Phi\rangle\), which is much larger than \(T_{\rm reheat}\), and are therefore not present in the bath after reheating. Therefore the dominant production process for the dark matter is the tree-level \(N_{1}\) annihilation shown in Fig. 4. As \(T_{\rm reheat}\sim\mathcal{O}(\rm TeV)\) and the mediator mass \(M_{W_{D}}>10^{8}\) GeV, this process does not reach thermal equilibrium. This makes the non-thermal DM production more natural, keeping it in the feebly interacting massive particle (FIMP) paradigm [21]. In order to study the generation of the lepton asymmetry and the dark matter relic abundance, one has to write the corresponding Boltzmann equations. We track the abundances of the \(N_{i}\) and \(\psi_{i}\). We also track the asymmetry in \(B-L\) number so that sphaleron dynamics can be ignored. The total baryon asymmetry can then be found from Eq. (18). We can write the Boltzmann equations in terms of the dimensionless yield variable \(Y_{x}\equiv n_{x}/s\) for a given species \(x\), where \(n_{x}\) is the number density and \(s\) is the total entropy density. We can also use \(z\equiv M_{N_{1}}/T\) as our time variable. The Boltzmann equations are then given by: Figure 4: Process responsible for dark matter freeze-in. \[\frac{dY_{N_{i}}}{dz} = -\frac{z(1-f(z)/3)}{H(M_{N_{1}})}\left(\left(\Gamma_{N_{i}}\frac{K_{1}(z)}{K_{2}(z)}\left(Y_{N_{i}}-Y_{N_{i}}^{\rm eq}\right)\right)+\sum_{j=1}^{4}s\langle\sigma v\rangle(Y_{N_{i}}Y_{N_{j}})\right),\] \[\frac{dY_{B-L}}{dz} = \frac{z(1-f(z)/3)}{H(M_{N_{1}})}\sum_{i=1}^{4}\left(\epsilon_{i}\Gamma_{N_{i}}\frac{K_{1}(z)}{K_{2}(z)}\left(Y_{N_{i}}-Y_{N_{i}}^{\rm eq}\right)-\frac{Y_{B-L}}{Y_{l}^{\rm eq}}\frac{\Gamma_{N_{i}}}{H}\frac{K_{1}(z)}{K_{2}(z)}Y_{N_{i}}^{\rm eq}\right),\] \[\frac{dY_{\psi}}{dz} = \frac{z(1-f(z)/3)}{H(M_{N_{1}})}\sum_{i,j=1}^{4}\left(s\langle\sigma v\rangle(Y_{N_{i}}Y_{N_{j}})\right), \tag{19}\] where \(f(z)=\frac{z}{g_{*s}}\frac{dg_{*s}}{dz}\) and \(H(T)\equiv\sqrt{4\pi^{3}g_{*}(T)/45}\,T^{2}/M_{\rm pl}\). Here, \(g_{*}(T),\,g_{*s}(T)\) are the relativistic degrees of freedom in the energy density and entropy density of the universe at temperature \(T\). At very high temperatures, above the electroweak scale, \(g_{*}(T)=g_{*s}(T)\) as all particles behave like radiation. As the temperature decreases (\(T<m_{i}\)), a particle species '\(i\)' that becomes non-relativistic drops out and stops contributing to \(g_{*},g_{*s}\). Following the procedure outlined in [31], one can numerically calculate \(g_{*},g_{*s}\) as well as the function \(f(T)\). As noted above, \(Y\) is the number density normalized to the entropy of the universe, and \(Y^{\rm eq}\) is the equilibrium counterpart. Also, \(s=\frac{2\pi^{2}}{45}g_{*s}(T)T^{3}\) and \(K_{i}\) represents the \(i\)-th order modified Bessel function of the second kind. 
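For orientation, the structure of Eq. (19) can be integrated directly with an off-the-shelf ODE solver. The Python sketch below is schematic rather than a reproduction of the analysis: it keeps \(g_{*}\) constant (so \(f(z)=0\)), assumes instantaneous reheating, uses a single \(N\) flavour and a simplified inverse-decay washout, and replaces the full thermally averaged cross section (given in the next paragraph) by its high-temperature scaling \(\langle\sigma v\rangle\sim T^{2}/\langle\Phi\rangle^{4}\); all numerical inputs are rough placeholders rather than the benchmark values of Table 2.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn   # modified Bessel functions K_n

# Schematic single-flavour version of Eq. (19); illustrative numbers only.
Mpl, gstar = 2.4e18, 106.75              # GeV; constant g_* so that f(z) = 0
MN, vPhi, Treh = 400.0, 2.8e10, 500.0    # GeV: M_N1, <Phi>, reheating temperature
GammaN, eps, Npsi = 1e-4, 0.5, 13        # N_1 width (GeV), CP asymmetry, DM copies
Yleq = 4e-3                              # rough equilibrium lepton yield (placeholder)

H1 = np.sqrt(4 * np.pi**3 * gstar / 45) * MN**2 / Mpl          # H(T = M_N1)
s = lambda z: 2 * np.pi**2 / 45 * gstar * (MN / z) ** 3        # entropy density
Yeq = lambda z: 45 / (4 * np.pi**4 * gstar) * z**2 * kn(2, z)  # eq. N yield (1 d.o.f.)
sigv = lambda z: (MN / z) ** 2 / vPhi**4                       # toy <sigma v> ~ T^2/<Phi>^4

def rhs(z, y):
    YN, YBL, Ypsi = y
    pref = z / H1                                  # z (1 - f/3) / H(M_N1) with f = 0
    decay = GammaN * kn(1, z) / kn(2, z)
    ann = s(z) * sigv(z) * YN * YN
    dYN = -pref * (decay * (YN - Yeq(z)) + ann)
    dYBL = pref * decay * (eps * (YN - Yeq(z)) - YBL * Yeq(z) / Yleq)  # simplified washout
    dYpsi = pref * Npsi * ann
    return [dYN, dYBL, dYpsi]

z0, zend = MN / Treh, MN / 130.0   # from reheating down to sphaleron freeze-out
sol = solve_ivp(rhs, [z0, zend], [Yeq(z0), 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-30)
print("Y_B-L ~ %.1e,  Y_psi ~ %.1e at T = 130 GeV" % (sol.y[1, -1], sol.y[2, -1]))
```

Even at this crude level the strong-washout behaviour discussed around Fig. 5 is visible: the asymmetry is sourced quickly by the deviation of \(N_{1}\) from equilibrium and then slowly eroded until the sphaleron decouples.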
The thermally averaged cross section \(\langle\sigma v\rangle\) for the process \(N_{i}N_{j}\to\psi\psi\) is given by [31] \[\langle\sigma v\rangle = \frac{T}{128\pi^{5}(n_{N_{1}}^{\rm eq})^{2}}\int_{\hat{s}_{0}}^{\infty}d\hat{s}\,p_{\psi}\,p_{N_{1}}\,\frac{|\mathcal{M}|^{2}}{\sqrt{\hat{s}}}K_{1}\left(\sqrt{\hat{s}}/T\right),\quad\hat{s}_{0}\equiv{\rm Max}\left[4M_{N_{1}}^{2},4m_{\psi}^{2}\right],\] \[p_{i} = \frac{\lambda^{1/2}(\hat{s},m_{i}^{2},m_{i}^{2})}{2\sqrt{\hat{s}}},\;\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2xz,\] \[\sum|\mathcal{M}|^{2} \simeq \frac{10N_{\psi}}{3\langle\Phi\rangle^{4}}\left(2\hat{s}^{2}-\frac{8}{5}\hat{s}\left(M_{N_{1}}^{2}-\frac{m_{\psi}^{2}}{4}\right)+4m_{\psi}^{2}M_{N_{1}}^{2}\right). \tag{20}\] Before numerically solving these equations, we first give approximate analytical solutions for the \(B-L\) asymmetry and the dark matter abundance. For the \(B-L\) asymmetry, we can first compare the decay rate of the \(N_{i}\) to the Hubble parameter. At the time of reheating, \(H\sim T_{\rm reheat}^{2}/M_{\rm pl}\sim 10^{-12}\) GeV, while \(\Gamma_{N_{1}}\sim Y^{2}M_{N_{1}}\sim 10^{-4}\) GeV, where we are using rough values of the parameters in Table 2. Therefore, we are in the strong wash-out regime and the inverse decay will keep the \(N\) abundance close to its equilibrium value. We can thus estimate the asymmetry following [32]: \[K = \frac{\Gamma_{N_{i}}}{H(T=M_{N_{i}})}\sim\frac{Y^{2}M_{\rm pl}}{M_{N_{1}}}\] \[\kappa(x) = \left(1+\frac{K^{2}x^{6}}{75}\right)^{-1}\frac{2K}{75}x^{5}\approx\frac{2}{xK}\] \[Y_{B-L} = \left.\frac{3}{4}\epsilon_{i}\kappa(M_{N_{i}}/T)\right|_{T=130\,\rm GeV}\sim 2\times 10^{-10}. \tag{21}\] The 130 GeV temperature is where the electroweak sphaleron freezes out, as discussed above. We see that this rough estimate gives an asymmetry very close to the observed value. In our benchmarks below we have chosen large \(Y\) (and thus large \(K\)) and large \(\epsilon\). One could alternatively choose smaller \(Y\) and smaller \(\epsilon\). This would mean the \(N_{i}\) states have weaker couplings to the SM leptons, but this scenario can still accommodate the BAU. To analytically estimate the dark matter relic abundance, we can first use the fact that the annihilation is mediated by a very heavy gauge boson, so the production will be UV dominated. Since \(T_{\rm reheat}\gtrsim M_{N_{1}},m_{\psi}\), we can estimate the high temperature limit of the cross section \(\langle\sigma v\rangle\sim T^{2}/\langle\Phi\rangle^{4}\). We can then integrate the third line of Eq. (19) assuming \(Y_{N}\) sits at its (roughly constant) equilibrium value: \[Y_{\psi}\sim\frac{N_{\psi}M_{\rm pl}M_{N_{1}}^{3}}{\langle\Phi\rangle^{4}}\int_{z_{\rm reheat}}\frac{dz}{z^{4}}\sim\frac{N_{\psi}M_{\rm pl}T_{\rm reheat}^{3}}{\langle\Phi\rangle^{4}} \tag{22}\] We see that for \(T_{\rm reheat}\sim 1\) TeV and \(\langle\Phi\rangle\sim 10^{9}\) GeV, we get \(Y_{\psi}\sim 10^{-13}\), which is the right ballpark for TeV scale mass dark matter. We now do a numerical analysis of the coupled Boltzmann equations in Eq. (19). We explore two benchmark parameter points with all the detailed parameters and outputs shown in Table 2. The initial abundances of the lepton asymmetry and DM are negligible as they do not couple directly to the inflaton, while the \(N_{1}\) does couple directly, so it is initially at its equilibrium value at \(T_{\rm reheat}\).5 The evolution of the lepton asymmetry and DM for the different benchmark points in Table 2 is shown in Fig. 5. 
The left panel shows the evolution for BP2 while the right panel corresponds to BP1 in table 2. One may notice that the asymmetry in both of the plots seems to start from a prior asymmetry, but the initial condition for the asymmetry as mentioned above is taken to be zero. In the actual evolution, the asymmetry shoots up quickly after reheating as a result of rapid \(N_{i}\) decays. This can be understood by looking at Eq. (19) and seeing that if \(Y_{B-L}=0\), then the asymmetry is sourced by the deviation \(N_{1}\) from equilibrium, which can be estimated following [32]: Footnote 5: A more rigorous approach is, rather than assuming reheating is fast, to solve coupled equations involving inflaton field, radiation energy density together with the ones for \(N_{1},B-L,\psi\)[33; 34; 35]. \[\Delta\equiv Y_{N}-Y_{N}^{eq}\simeq\left(1+\frac{Kz_{\rm reheat}^{3}}{\frac{ 15}{4}+2z_{\rm reheat}}\right)^{-1}\frac{3}{16}z_{\rm reheat}^{3}K_{1}(z_{ \rm reheat})\sim 10^{-9}\,. \tag{23}\] The growth of the asymmetry is then proportional to \(\epsilon K\Delta\sim 1\), which explains why it rises so quickly. Once it becomes larger, the second term in the Boltzmann equation, the wash out, takes over and the asymmetry begins to slowly fall as seen in Fig. 3. The figure also shows the total DM relic for \(N_{\psi}\) copies of dark fermion doublet \(\psi\) for which the final abundance matches with the Planck 2018 data [1] given the masses shown in Table 2. We can see that, consistent with our analysis in Eq. (22), the production is UV dominated, namely most of the production occurs at early times. ## V Conclusions The recently proposed framework of Dynamical Inflection Point Inflation [16] provides a way to generate an inflaton potential using ordinary field theory ingredients, and the inflation scale can be parametrically lower than the Planck scale and agree with all observations from the CMB. The inflaton is coupled to gauge fields and fermions, and in this work we have shown that the fields in this inflation scenario can also solve three of the most significant shortcomings of the Standard Model: neutrino masses, the baryon asymmetry of the universe, and dark matter. The model contains an \(SU(2)_{D}\) gauge group, under which the inflaton is a doublet, and fermions that are singlets and doublets. All the new fields are neutral under the SM gauge symmetry. The singlet fermion can couple to the SM lepton portal \(LH\). When the inflaton settles to its minimum, the singlet and some of the doublet fermions will get Dirac masses, but they have properties similar to right handed (RH) neutrinos. These states will then give mass to the SM neutrinos via the inverse seesaw mechanism. The remaining doublets are charged under a \(Z_{2}\) symmetry. They are thus stable and serve as the dark matter candidate. The dominant inflaton decay is via the Yukawa couplings to the right handed neutrinos which reheats the universe up to the TeV scale. The dynamics of the fermions also gives rise to cogenesis of a baryon asymmetry and dark matter abundance. These RH neutrinos decay asymmetrically and out of equilibrium to \(LH\), generating a lepton asymmetry, which is then partly converted to a baryon asymmetry by the electroweak sphaleron. The dark sector fermions can also annihilate, via the heavy gauge boson, to the dark matter states. These states are stabilized by a \(Z_{2}\) symmetry, but their production is rare because the gauge bosons are orders of magnitude heavier than the fermions or the dark matter. 
Therefore the \begin{table} \begin{tabular}{|c|c|c|} \hline & BP1 & BP2 \\ \hline \(M/\phi_{0}\) & 6.7 & 6.27 \\ \hline \(\phi_{0}/M_{\rm pl}\) & \(5.85\times 10^{-19}\) & \(1.29\times 10^{-18}\) \\ \hline \(\phi_{i}=10^{7}\mu\) (GeV) & 5.9 & 5.9 \\ \hline \(\lambda_{0}\) & \(1.92\times 10^{-14}\) & \(6.94\times 10^{-13}\) \\ \hline \(N_{\psi}\) & 13 & 13 \\ \hline \(\tilde{y}\) & \(2.84\times 10^{-8}\) & \(1.95\times 10^{-8}\) \\ \hline \(Y\) & \(6.6\times 10^{-3}\) & \(9.08\times 10^{-3}\) \\ \hline \(\epsilon_{i}\) & 0.5 & 0.8 \\ \hline \(\langle\Phi\rangle\) (GeV) & \(2.75\times 10^{10}\) & \(2.78\times 10^{10}\) \\ \hline \(m_{\phi}\) (TeV) & 6.6 & 40.2 \\ \hline \(M=m_{\psi}\) (GeV) & 50 & 100 \\ \hline \(M_{N_{l}}\) (GeV) & 393 & 271 \\ \hline \(M_{N_{h}}\) (GeV) & \(2.38\times 10^{8}\) & \(5.22\times 10^{8}\) \\ \hline \(T_{\rm reheat}\) (GeV) & 452 & 770 \\ \hline \(N_{e}\) & 60 & 60 \\ \hline \(n_{s}\) & 0.9691 & 0.9691 \\ \hline \(r\) & \(4.6\times 10^{-45}\) & \(1.3\times 10^{-44}\) \\ \hline \(\alpha\) & -\(8.49\times 10^{-4}\) & \(2.36\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 2: Details of the two benchmark points used in the analysis. abundance is set by the freeze-in mechanism. Because the inflation scale is around the weak scale, we predict no discovery of tensor modes, \(r\ll 1\). The expected value of the running of the scalar spectral index, \(\alpha\), could be within the range of next generation CMB observations [36]. If \(\alpha\) is measured, its sign would determine where on the inflaton potential cosomological scales enter the horizon (see Fig. 1). There is a direct coupling of the inflaton to the Higgs that dynamically generates the Higgs mass parameter. If instead the portal coupling \(\lambda_{\Phi H}\) is larger and there is an additional bare Higgs mass parameter, the inflaton could possibly be probed via its mixing with the Higgs [37]. The RH leptons are at the weak scale and couple to SM leptons, so they could potentially be produced and discovered at colliders [38]. Our benchmarks have maximized this coupling, which in turn maximizes the asymmetry parameter \(\epsilon_{i}\). Precise measurements of decays of these right handed neutrino-like states could also shed light on whether they are in fact participating in a leptogenesis mechanism. The Yukawa couplings are expected to have \(\mathcal{O}(1)\) phases, so there will be a loop-level contribution to the electric dipole moments of the charged leptons [39]. The dark matter states are at the weak scale, but their coupling to the SM is suppressed by the heavy vector mass as well as by the mixing of heavy and light neutrinos. Therefore, like most freeze-in models, prospects for direct detection are quite limited. There may be signals in indirect detection, particularly if the mass spectrum of the dark matter is slightly non-minimal and there are long lived states that decay down to lighter ones. We have studied two specific benchmark points shown in Table 2, and shown the evolution of the cogenesis for those benchmarks in Fig. 5. This gives non-trivial proof of an existence of viable parameter points that satisfy all the observational constraints of the inflation observables, BAU, and dark matter. Throughout we have taken relatively simple Figure 5: Evolution of comoving number densities for \(B-L\) asymmetry and dark matter for the benchmark points shown in table 2. The left panel corresponds to \(\alpha<0\) (BP1) while the right panel corresponds to \(\alpha>0\) (BP2). 
The vertical line marks the temperature of 130 GeV, the sphaleron freeze-out temperature. choices for the parameters, but a more complete exploration of the parameter space may uncover additional phenomenological signatures. ## Acknowledgements We are grateful to Sekhar Chivukula for helpful conversations. The work of DB is supported by the Science and Engineering Research Board (SERB), Government of India grant MTR/2022/000575. DS is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). AD would like to acknowledge the hospitality of UCSD where the work was finalised. ## Appendix A Review of Inflation Formulae In this appendix we review the standard formulas for slow roll inflation and their connection to the cosmological observations used in this work (see for example [40]). Given an inflaton \(\phi\) with potential \(V(\phi)\), the slow roll parameters are defined as \[\epsilon_{v} =\frac{M_{\rm pl}^{2}}{2}\left.\left(\frac{V^{\prime}}{V}\right)^{2}\right|_{\phi=\phi_{i}},\] \[\eta_{v} =M_{\rm pl}^{2}\left.\frac{V^{\prime\prime}(\phi)}{V(\phi)}\right|_{\phi=\phi_{i}} \tag{30}\] \[\xi^{2} =M_{\rm pl}^{4}\left.\frac{V^{\prime\prime\prime}(\phi)V^{\prime}(\phi)}{V^{2}(\phi)}\right|_{\phi=\phi_{i}}\] where \(\phi_{i}\) is the point in field space where the cosmological scales leave the horizon and \(M_{\rm pl}=1/\sqrt{8\pi G_{N}}\sim 2.4\times 10^{18}\) GeV. Now, redefining the derivatives as \[V^{\prime}=\frac{1}{\phi_{0}}\frac{\partial V}{\partial(\phi/\phi_{0})}=\frac{1}{\phi_{0}}V_{u}\,,\quad u:=\phi/\phi_{0}\,, \tag{31}\] where \(\phi_{0}\) is the scale of the inflection point, the slow-roll parameters in terms of the rescaled field are \[\epsilon_{v} =\left.\frac{1}{2}\left(\frac{M_{\rm pl}}{\phi_{0}}\right)^{2}\left(\frac{V_{u}}{V}\right)^{2}\right|_{u=u_{i}};\] \[\eta_{v} =\left.\left(\frac{M_{\rm pl}}{\phi_{0}}\right)^{2}\left(\frac{V_{uu}}{V}\right)\right|_{u=u_{i}};\] \[\xi^{2} =\left.\left(\frac{M_{\rm pl}}{\phi_{0}}\right)^{4}\frac{V_{uuu}V_{u}}{V^{2}}\right|_{u=u_{i}} \tag{32}\] Slow roll is maintained while all of these parameters are small. In the slow-roll regime, the above parameters can be mapped onto observables that are measured in the CMB. The spectra of the scalar and tensor perturbations are approximated as power laws, \[\mathcal{P}_{\mathcal{R}} =A_{s}(k/k_{*})^{n_{s}-1+\frac{\alpha_{s}}{2}\ln k/k_{*}}\] \[\mathcal{P}_{t} =A_{t}(k/k_{*})^{n_{t}} \tag{10}\] where \(n_{s}\) is the scalar spectral index, \(\alpha_{s}=dn_{s}/d\ln k\) is the running of the scalar spectral index, \(n_{t}\) is the tensor spectral index, and \(k_{*}=0.05\) Mpc\({}^{-1}\) is the pivot scale. The tensor spectral index is related to the tensor-to-scalar ratio \(r\equiv A_{t}/A_{s}\) by \(n_{t}\simeq-r/8\) for single field slow-roll inflation. The mapping of the above observables onto the potential parameters is given by \[n_{s} \approx 1-6\epsilon_{v}+2\eta_{v}\] \[r \approx 16\epsilon_{v}\] \[\alpha \approx 16\epsilon_{v}\eta_{v}-24\epsilon_{v}^{2}-2\xi^{2}\] \[A_{s} \approx\frac{1}{12\pi^{2}}\frac{V^{3}}{M_{\rm pl}^{6}\,V^{\prime 2}} \tag{11}\] which are measured or bounded by CMB observations [2, 41], and whose precision will be improved by future observations [36]. The number of _e_-folds of inflation can be computed as \[N_{e}=\frac{1}{M_{\rm pl}^{2}}\int_{\phi_{e}}^{\phi_{i}}\frac{V(\phi)}{V^{\prime}(\phi)}d\phi\simeq 50-60\;, \tag{12}\] where \(\phi_{e}\) is the scale where the inflaton is no longer slowly rolling and inflation ends. 
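As a cross-check of these formulas, the slow-roll parameters and observables can be evaluated directly from the rescaled potential. The sympy sketch below does this for the lower branch (\(u<u_{M}\)) of the potential in Eq. (10); the overall scale \(a\) cancels in every ratio, and the values used for \(b\), \(V_{0}\), \(\phi_{0}\) and the evaluation point \(u_{i}\) are placeholders (in the full analysis \(a\) and \(V_{0}\) are fixed by \(A_{s}\) and by requiring the minimum to sit at zero energy).

```python
import sympy as sp

# Slow-roll parameters in the rescaled variable u and the observable mapping above,
# for the u < u_M branch of the potential in Eq. (10).  Placeholder parameters.
u = sp.symbols('u', positive=True)
Mpl = sp.Float(2.4e18)                    # GeV
phi0 = sp.Float(5.85e-19) * Mpl           # rough BP1 value from Table 2
b, V0 = sp.Float(6.0), sp.Float(1e40)     # placeholders; V0 is set by the deep minimum

V   = -sp.Rational(1, 4) * u**4 * (1 + b * sp.log(u) ** 2) + V0   # overall factor a dropped
dV  = sp.diff(V, u) / phi0                # d/dphi = (1/phi0) d/du
d2V = sp.diff(V, u, 2) / phi0**2
d3V = sp.diff(V, u, 3) / phi0**3

ui  = sp.Float(0.95)                      # placeholder point near the plateau
eps = (Mpl**2 / 2 * (dV / V) ** 2).subs(u, ui)
eta = (Mpl**2 * d2V / V).subs(u, ui)
xi2 = (Mpl**4 * d3V * dV / V**2).subs(u, ui)

ns, r = 1 - 6 * eps + 2 * eta, 16 * eps
alpha = 16 * eps * eta - 24 * eps**2 - 2 * xi2
print([sp.N(x, 4) for x in (ns, r, alpha)])
```

With realistic parameters the same expressions reproduce the tiny \(r\) and the two possible signs of \(\alpha\) listed in Table 2, depending on which side of the inflection points \(u_{i}\) lies.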
When we rescale the potential with respect to \(\phi_{0}\) and use \(u\equiv\phi/\phi_{0}\), the number of _e_-folds is given by \[N_{e}=\frac{\phi_{0}^{2}}{M_{\rm pl}^{2}}\int_{u_{e}}^{u_{i}}\frac{V(u)}{V_{u}(u)}du\;;\;V^{\prime}(\phi)=\frac{1}{\phi_{0}}V_{u}(u) \tag{13}\] Using the potential given in eq. (10), we can reparameterize it in terms of \(u\) as \[\widetilde{V}(u) =-\frac{a}{4}u^{4}\left[a_{0}+c_{0}\ln(u)+b_{0}\ln^{2}(u)\right]\] \[+aV_{0}\;. \tag{14}\] Defining \(u_{M}=M/\phi_{0}\), the parameters of the potential can be defined piecewise: for \(u<u_{M}\) \[a_{0} =1;\quad b_{0}=b;\] \[c_{0} =0 \tag{15}\] and for \(u>u_{M}\) \[a_{0} =1-c\ln^{2}(u_{M});\quad b_{0}=b-c;\] \[c_{0} =2c\ln(u_{M}). \tag{10}\] The number of \(e\)-folds above can then be re-written as \[N_{e}=\frac{\phi_{0}^{2}}{M_{\rm pl}^{2}}\left[\int_{u_{M}}^{u_{i}}\frac{\widetilde{V}(u)}{\widetilde{V}_{u}(u)}du+\int_{u_{e}}^{u_{M}}\frac{\widetilde{V}(u)}{\widetilde{V}_{u}(u)}du\right]. \tag{11}\] Each integral can then be expressed in terms of the exponential integral function \(\mathcal{E}(x)\) as follows: \[\mathcal{H}(a_{0},b_{0},c_{0},u) \equiv\int\frac{\widetilde{V}(u)}{\widetilde{V}_{u}(u)}du\] \[=\frac{1}{16\alpha}\left[2\alpha u^{2}+32V_{0}\left(e^{\beta_{+}}\mathcal{E}(-\beta_{+}-2\ln(u))-e^{\beta_{-}}\mathcal{E}(-\beta_{-}-2\ln(u))\right)\right.\] \[+\left.(b_{0}-\alpha)e^{-\beta_{-}}\mathcal{E}(\beta_{-}+2\ln(u))-(b_{0}+\alpha)e^{-\beta_{+}}\mathcal{E}(\beta_{+}+2\ln(u))\right] \tag{12}\] \[\alpha =\sqrt{4c_{0}^{2}+b_{0}^{2}-16b_{0}a_{0}}\quad,\quad\beta_{\pm}=\frac{1}{2}+\frac{c_{0}}{b_{0}}\pm\frac{\alpha}{2b_{0}}\] (13) \[\mathcal{E}(z) \equiv-\int_{-z}^{\infty}\frac{e^{-t}}{t}dt\;. \tag{14}\] From this expression, the number of \(e\)-folds can be written as \[N_{e} =\frac{\phi_{0}^{2}}{M_{\rm pl}^{2}}\left[\mathcal{H}(1,b,0,u_{i})-\mathcal{H}(1,b,0,u_{M})\right.\] \[+\mathcal{H}(1-c\ln^{2}(u_{M}),b-c,2c\ln(u_{M}),u_{M})\] \[-\left.\mathcal{H}(1-c\ln^{2}(u_{M}),b-c,2c\ln(u_{M}),u_{e})\right]\;. \tag{15}\] In addition, the point \(u_{i}\), which is not exactly at the inflection point but very close to it, is fixed by the measured spectral index \(n_{s}|_{u=u_{i}}=0.9691\pm 0.0041\), and the end of inflation is determined by the condition \(\epsilon_{v}|_{u=u_{e}}=1\). After inflation ends, the inflaton decays, transferring its energy to a thermal bath, which in the minimal scenario is dominated by Standard Model fields. The reheating temperature can be estimated as \[T_{\rm reheat}\approx 0.2\sqrt{\Gamma\,M_{\rm pl}}\,, \tag{16}\] where \(\Gamma\) is the total decay width of the inflaton.
2305.01526
Huatuo-26M, a Large-scale Chinese Medical QA Dataset
In this paper, we release the largest ever medical Question Answering (QA) dataset, with 26 million QA pairs. We benchmark many existing approaches on our dataset in terms of both retrieval and generation. Experimental results show that the existing models perform far below expectations and that the released dataset remains challenging in the pre-trained language model era. Moreover, we also experimentally show the benefit of the proposed dataset in many aspects: (i) training models for other QA datasets in a zero-shot fashion; (ii) serving as external knowledge for retrieval-augmented generation (RAG); and (iii) improving existing pre-trained language models by using the QA pairs as a pre-training corpus in a continued training manner. We believe that this dataset will not only contribute to medical research but also benefit both patients and clinical doctors. See \url{https://github.com/FreedomIntelligence/Huatuo-26M}.
Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie Fu, Prayag Tiwari, Xiang Wan, Benyou Wang
2023-05-02T15:33:01Z
http://arxiv.org/abs/2305.01526v1
# Huatuo-26M, a Large-scale Chinese Medical QA Dataset ###### Abstract In this paper, we release the **largest** ever medical Question Answering (QA) dataset, with **26 Million** QA pairs. We benchmark many existing approaches on our dataset in terms of both retrieval and generation. Experimental results show that the existing models perform far below expectations and that the released dataset remains challenging in the pre-trained language model era. Moreover, we also experimentally show the benefit of the proposed dataset in many aspects: (i) training models for other QA datasets in a zero-shot fashion; (ii) serving as external knowledge for retrieval-augmented generation (RAG); and (iii) improving existing pre-trained language models by using the QA pairs as a pre-training corpus in a continued training manner. We believe that this dataset will not only contribute to medical research but also benefit both patients and clinical doctors. See [https://github.com/FreedomIntelligence/Huatuo-26M](https://github.com/FreedomIntelligence/Huatuo-26M). + Footnote †: The first two authors contributed to this paper equally ## 1 Introduction Pre-trained language models (PLMs) have made great progress in natural language processing (NLP) and have largely improved natural language understanding and natural language generation. This inspires researchers to apply PLMs to fields that were not considered the core playground of NLP, for example, medicine. However, the first _bottleneck_ for applying PLMs to medicine is the _data_; like most other breakthroughs in artificial intelligence, progress starts with data collection. To break this bottleneck, this work collects the largest Chinese medical QA dataset, which may also enhance medical research. Note that about 1.4 billion people speak Chinese as their native language, and more importantly, the medical care available to them (particularly in the mainland of China) is generally far below its western counterpart (e.g., English-speaking and developed countries) 1. Footnote 1: see [https://en.wikipedia.org/wiki/World_Health_Organization_ranking_of_health_systems_in_2000](https://en.wikipedia.org/wiki/World_Health_Organization_ranking_of_health_systems_in_2000) **Dataset** We collect the largest medical QA dataset from various sources: (i) collected from an online medical consultation website; (ii) automatically extracted from medical encyclopedias; and (iii) automatically extracted from medical knowledge bases. After text cleaning and data deduplication, we obtained the largest Chinese medical QA dataset, containing **26 Million** QA pairs. We call this dataset 'Huatuo-26M' to commemorate the great Chinese physician named Hua Tuo, who lived around 200 AD. As seen from Table 1, this work expands the existing medical-domain QA datasets by more than two orders of magnitude, making Huatuo-26M even larger than most QA datasets in the general domain. **Benchmark** Based on the collected dataset, we benchmark classical methods in the field of retrieval: for sparse retrieval, we test the performance of BM25 Robertson et al. (2009) and DeepCT Dai and Callan (2019), and for dense retrieval, we test the performance of DPR Karpukhin et al. (2020). At the same time, we also train auto-regressive language models, namely GPT2 Brown et al. (2020) and T5 Raffel et al. (2020). The results suggest the task is still challenging for pre-trained language models, probably because the medical domain involves more expert knowledge than the general domain. 
To further show the usefulness of the collected dataset, we leverage the collected dataset in three use cases: (i) transfer to other QA datasets; (ii) as external knowledge for RAG; and (iii) as a pre-trained corpus. Use case I: Transfer for other QA dataset Since the Huatuo-26M dataset is large, we also expect that the models trained by the dataset could encapsulate general medical knowledge. Therefore, we use the trained models on two existing medical QA datasets, namely cMedQA2 Zhang et al. (2018) and webMedQA He et al. (2019). Experimental results show that the model can achieve competitive performance even in few or zero samples. **Use case II: As an external knowledge for RAG** Large-scale medical QA datasets themselves explicitly contain rich medical knowledge, and we leverage it as external knowledge in the context of retrieval-augmented generation Lewis et al. (2020). Experimental results on cMedQA2 and webMedQA datasets show that using this dataset as an external knowledge base can greatly improve the quality of generated texts. **Use case III: As a pre-trained corpus** Considering that the scale of the data set is comparable to that of pre-training corpora of general pre-trained language models, we use the text corpus of Huatuo-26M as a pre-trained corpus that could inject implicit knowledge into the model through pre-training. We improve Bert and RoBERTa in a continuously-training manner on the dataset by using QA pairs as pre-training corpora. The experimental results show the performance of pre-trained models on biomedical tasks could be largely improved by using Huatuo-26M as an additional pre-training corpus. **Contributions** of this work are as follows: **(i)** We release the largest Chinese Medical QA dataset (with **26,504,088** QA pairs); **(ii)** we benchmark some existing models for the proposed methods for both retrieval and generation; and **(iii)** we explore some additional usage of our dataset, for example, transfer for other QA datasets, train as external knowledge for RAG, and train as a pre-trained corpus. ## 2 Dataset Creation We have collected a variety of medical knowledge texts from various sources and unified them in the form of medical question-and-answer pairs. The main resources include an online medical QA website, medical encyclopedias, and medical knowledge bases. See Appendix B for specific examples from different sources. Here we will introduce the details of data collection from the above three data sources. ### Online Medical Consultation Records **Data Sources** We collected data from a website for medical consultation 2, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers the question. The basic information of doctors (including name, hospital organization, and department) was recorded. Footnote 2: Qianwen Health in [https://512zy.com/](https://512zy.com/) **Data Processing** We directly crawl patients' questions and doctor's answers as QA pairs, getting 31,677,604 pairs. Subsequently, we removed \begin{table} \begin{tabular}{l l l l} \hline \hline Dataset & Lang & Domain & Source & \#Q \\ \hline MedHop Welbl et al. (2018) & English & Medical & MEDLINE & 2.5K \\ BiQA Lamurias et al. (2020) & English & Medical & Online Medical forum & 7.4K \\ HealthQA Zhu et al. (2019) & English & Medical & Medical-services website & 7.5K \\ MASH-QA Zhu et al. 
(2020) & English & Medical & Medical article website & 35K \\ MedQuAD Ben Abacha and Demner-Fushman (2019) & English & Medical & U.S. National Institutes of Health (NIH) & 47K \\ ChiMed Tian et al. (2019) & Chinese & Medical & Online Medical forum & 47K \\ MedQA Jin et al. (2020) & EN\&CH & Medical & Medical Exam & 60K \\ webMedQA He et al. (2019) & Chinese & Medical & Medical consultancy websites & 63K \\ CliCR Suster and Daelemans (2018) & English & Medical & Clinical case reports & 100K \\ cMedQA2 Zhang et al. (2018) & Chinese & Medical & Online Medical forum & 108K \\ \hline **Huatuo-26M** & & **Chinese** & **Medical** & **Consultation records, Encyclopedia, KBs** & **26M** \\ \hline TriviaQA Joshi et al. (2017) & English & General & Trivia & 96K \\ HotpotQA Yang et al. (2018) & English & General & Wikipedia & 113K \\ SQuAD Rajpurkar et al. (2016) & English & General & Wikipedia & 158K \\ DuReader He et al. (2017) & Chinese & General & Web search & 200K \\ Natural Questions Kwiatkowski et al. (2019) & English & General & Wikipedia & 323K \\ MS MARCO Nguyen et al. (2016) & English & General & Web search & 1.0M \\ CNN/Daily Mail See et al. (2017) & English & General & News & 1.3M \\ PAQ Lewis et al. (2021) & English & General & Wikipedia & 65M \\ \hline \hline \end{tabular} \end{table} Table 1: Existing QA dataset. the QA pairs containing special characters and removed the repeated pairs. Finally, we got 25,341,578 QA pairs. ### Online Medical Encyclopedia Data SourcesWe extract medical QA pairs from plain texts (e.g., medical encyclopedias and medical articles). We collected 8,699 encyclopedia entries for diseases and 2,736 encyclopedia entries for medicines on Chinese Wikipedia 3. Moreover, we crawled 226,432 high-quality medical articles from the Qianwen Health website4. Footnote 3: zh.wikipedia.org/wiki/ Footnote 4: [https://51zyzy.com/](https://51zyzy.com/) Footnote 5: [https://cpubmed.openi.org.cn/graph/wiki](https://cpubmed.openi.org.cn/graph/wiki) Footnote 6: [https://github.com/zhihao-chen/QASystemOnMedicalGraph](https://github.com/zhihao-chen/QASystemOnMedicalGraph) Footnote 7: [https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph](https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph) Data ProcessingWe first structure an article. Each article will be divided into title-paragraph pairs. For example, such titles in articles about medicines could be usage, contraindications, and nutrition; for articles about medicines about diseases, they could be diagnosis, clinical features, and treatment methods. We remove the titles of paragraphs that have appeared less than five times, finally resulting in 733 unique titles. Based on these titles, we artificially design templates to transform each title into a question that could be answered by the corresponding paragraph. Note that a disease name or a drug name could be a placeholder in the templates. See the appendix C for details. ### Online Medical Knowledge Bases Data SourcesSome existing knowledge bases explicitly store well-organized knowledge, from which we extract medical QA pairs. We collect data from the following three medical knowledge bases: **CPubMed-KG**5 is a knowledge graph for Chinese medical literature, which is based on the large-scale medical literature data from the Chinese Medical Association; **39Health-KG6** and **Xyyy-KG7** are two open source knowledge graphs. See basic information is shown in Tab.2. 
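A minimal sketch of the template-based construction used for the encyclopedia articles in Section 2.2 (the knowledge-base triples of Section 2.3 are handled analogously) is shown below. The two templates and the demo record are invented for illustration; the released dataset uses manually designed templates for the 733 retained paragraph titles.

```python
# Sketch of template-based QA construction: a paragraph title (or KG relation)
# is mapped to a question template with the disease/drug name as a placeholder.
# Templates and records below are illustrative, not the released ones.
TEMPLATES = {
    "治疗方法": "{entity}有哪些治疗方法？",    # "What are the treatments for {entity}?"
    "临床表现": "{entity}的临床表现是什么？",  # "What are the clinical features of {entity}?"
}

def build_qa_pairs(records):
    """records: iterable of (entity, section_title, paragraph_text) tuples."""
    pairs = []
    for entity, title, paragraph in records:
        template = TEMPLATES.get(title)
        if template is None or not paragraph.strip():
            continue   # in the paper, titles occurring fewer than five times are dropped
        pairs.append({"question": template.format(entity=entity),
                      "answer": paragraph.strip()})
    return pairs

demo = [("高血压", "治疗方法", "在医生指导下规律服用降压药物，并注意低盐饮食。")]
print(build_qa_pairs(demo))
```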
Footnote 5: [https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph](https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph) Footnote 6: [https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph](https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph) Data ProcessingWe clean the three knowledge graphs by removing invalid characters and then merge entities and relationships among entities among these three knowledge graphs, resulting in 43 categories. Each category is associated with either a relationship between entities or an attribute of entities. Subsequently, we manually design templates to convert each category to a _question_. The _question_ is either 1) querying the object entity based on the subject entity or 2) querying an attribute of an entity. The object entity will be the _answer_ w.r.t the _question_ in both cases. Finally, we obtained 798,444 QA pairs by constructing questions and answers with corresponding templates. See the appendix D for details. ## 3 Data Statistics and Analysis The basic statistics of Huatuo-26M are shown in Table 3, most of the QA pairs are from online consultation records. The average length of the dataset questions is 44.6 and the average length of the answers is 120.7. Questions could be long (e.g. in consultant records) or short (in encyclopedias and knowledge bases). There exists both long answers (e.g., Encyclopedia) and short answers (e.g. consultant records and knowledge bases). We randomly take 1% QA pairs as the test set while others form the training set. **Questions are colloquial while answers are professional** Huatuo-26M consists of a large number of colloquial QA pairs, which are closer to the offline medical diagnosis and contain a lot of medical knowledge. As shown in the sample from online medical consultation in Table 9, the patient's question contains patient characteristics and daily symptoms accompanied by life-like scenes, while the doctor's answers are targeted and with contextual semantic continuity. **Questions are diverse** To better understand the characteristics of the data set, we perform heuristic analysis on questions, counting from the first \begin{table} \begin{tabular}{l r r r} \hline \hline & \# entity type & \#relation & \#entity & \#triplets \\ \hline CPubMed-KG & - & 40 & 1.7M & 4.4M \\ 39Health-KG & 7 & 6 & 36.8K & 210.0K \\ Xyyy-KG & 7 & 10 & 44.1K & 294.1K \\ \hline \hline \end{tabular} \end{table} Table 2: Basic statistics of the three knowledge bases. \begin{table} \begin{tabular}{l r r r} \hline \hline Composition & \# Pairs & Len(Q) & Len(A) \\ \hline Huatuo-26M Train & 26,239,047 & 44.6 & 120.7 \\ Huatuo-26M Test & 265,041 & 44.6 & 120.6 \\ \hline \hline Data source: & & & \\ Consultant records & 25,341,578 & 46.0 & 117.3 \\ Encyclopedia & 364,066 & 11.5 & 540.4 \\ Knowledge bases & 798,444 & 15.8 & 35.9 \\ \hline All & 26,504,088 & 44.6 & 120.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Basic statistics of Huatuo-26M. meaningful phrase of the question. As shown in Figure 1, we get the relative distribution of recurring questions and their subsequent vocabulary distribution. A large part of the patient's question is about newborns, the pregnant, and children, which does make sense given the inexperienced parents and pre-established immunity of children who are prone to illness. 
At the same time, consultations performed on behalf of elderly parents also accounted for a considerable part, which shows that online consultation has helped solve the problem of China's aging population to a certain extent. **Significant topics in Huatuo-26M** In order to understand the data set in a more fine-grained manner, we also draw word clouds for different data sources in Appendix A. We found that for the QA pairs derived from online medical consultation records, the questions cover not only common diseases such as colds and coughs, but also chronic diseases such as hypertension and diabetes, and even a small number of complex diseases such as cysts and epilepsy. The answers focused on prescribing medicines, providing precautions for diet and daily life, and recommending further treatment at the hospital. For the QA pairs from Wikipedia, compared with the QA pairs from consultant records, the proportions of Andrology, Gynecology, and Infectious Diseases have increased. For the QA pairs from the knowledge base, the proportion of questions about intractable diseases such as lymphoma and leukemia increases, while the answers include more professional diagnosis and treatment measures that need to be performed in offline hospitals such as MRI, CT, and puncture. ## 4 Benchmarks We benchmarked some mainstream methods on Huatuo-26M dataset from both answer retrieval and generation. ### Retrieval based benchmark In this section, we will benchmark mainstream retrieval methods on Huatuo-26M dataset, including both sparse and dense retrieval methods. #### 4.1.1 Baselines and Experimental Settings For a given question, we rank the top 1000 relevant answers from the answer pool, which consists of answers from both training and test sets. For encyclopedias and knowledge bases, we use 90% questions for training and the rest for testing. For consultant records or all categories, we use 99% questions for training and the rest for testing, since testing with 1% questions is enough and could save more evaluation time than that with 10% questions. BM25BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document. Considering that the pre-training models for Chinese are all based on single characters, for the convenience of comparison, we use single characters as units to build indexes instead of words. We utilize the Lucene code base and set k1 to 1.2 and b to 0.9. **DeepCT** (Dai and Callan, 2020) uses BERT 8 to determine context-aware term weights. We trained the model for 3 epochs, with a learning rate of \(2\times 10^{-5}\) using Adam. The batch size is set to 72 and the max sequence length is set to 256. Footnote 8: We use the BERT checkpoint released by Google, which could be downloaded from [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese) DPR(Karpukhin et al., 2020) learns embeddings by a simple dual encoder framework. The DPR model used in our experiments was trained using the batch-negative setting with a batch size of 192 and additional BM25 negatives. We trained the question and passage encoders for 2 epochs, with a learning rate of \(10^{-5}\) using Adam, linear scheduling with warm-up and dropout rate 0.1. Figure 1: Distribution of patient-focused questions. We do the analysis heuristically, counting from the first meaningful phrase of the question. We present the relative distribution of these recurring problems and their subsequent distributions. 
**Evaluation Metrics** We use Recall@k and MRR@10 as evaluation indicators. Recall@k measures the percentage of queries for which the top k retrieved passages contain the answer. MRR@10 calculates the average of the inverse of the rank at which the first relevant document is retrieved. #### 4.1.2 Results The experimental results are shown in Table 4. Both DeepCT and DPR outperform BM25, evidencing the effectiveness of neural IR models. In most cases, DPR performs better than DeepCT; this is probably because dense IR models are generally more powerful than sparse neural IR models. Note that the recall performance is relatively low in experiments involving consultation records, since the pool of retrieval candidates (i.e., 26M) is too large to reliably recall the desired documents. Interestingly, we found that the top-ranked answers are still informative even if they do not recall the desired answer. For a specific sample analysis, please refer to App. E. These retrieval models generally do not perform well on QA extracted from knowledge bases, since questions in knowledge bases are concise and require models to deeply understand knowledge (e.g., medical entities and the relationships between them). Knowledge representation in pre-trained language models (e.g., in retrieval scenarios) is still challenging, and it becomes even more challenging in the medical domain since it is more knowledge-intensive. It is worth noting that retrieval-based solutions for medical QA assume that 1) there should be pre-defined answers for all medical questions; and 2) answers should be static for a given question and independent of the different backgrounds of patients. The two assumptions sometimes do not hold. First, there are always new emergent situations in the medical domain, e.g., COVID-19, about which people have little information when it first emerges. Second, the answers, e.g., suggestions and treatment, for a given medical question are dependent on the individual's situation, e.g., age and gender, symptoms and complications, and whether the symptoms are early or late. Therefore, a static answer might not be enough for medical consultation. ### Generation Based Benchmark We fine-tune generative language models (e.g., T5 and GPT2) using the training set of Huatuo-26M and evaluate them on the test set. #### 4.2.1 Baselines and Experimental Settings We report results for _raw_ T5 and GPT2 and the results after _fine-tuning_ on the Huatuo-26M training set. **T5** casts many text-based language tasks in a unified text-to-text framework. We continue training T5 for 1 epoch on the full training set using a batch size of 8, with a learning rate of \(10^{-4}\) using Adam and linear scheduling with a warm-up rate of 0.1. The Chinese T5 model has 12 layers 9. Footnote 9: [https://huggingface.co/imxly/t5-pegasus](https://huggingface.co/imxly/t5-pegasus) **GPT2** is a decoder-only generative language model. We fine-tune GPT2 for 1 epoch on the full training set with a batch size of 12, with a learning rate of \(10^{-4}\) using Adam and linear scheduling with a warm-up rate of 0.1. In both T5 and GPT2, the maximum lengths of questions and answers are set to 256 and 512, respectively. The Chinese GPT2 is the original 12-layer GPT2 10. 
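The generation baselines amount to standard causal-LM fine-tuning on concatenated question-answer pairs. The sketch below illustrates the GPT2 setup with the hyper-parameters quoted above (Adam, learning rate \(10^{-4}\), 10% linear warm-up, question/answer lengths 256/512); the checkpoint name and the one-pair in-memory dataset are placeholders, since the exact Chinese GPT2 checkpoint is not specified beyond being the original 12-layer model.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast, GPT2LMHeadModel, get_linear_schedule_with_warmup

# Sketch of the GPT2 generation baseline: question and answer are concatenated
# and the model is trained as a causal LM.  Checkpoint and data are placeholders.
name = "uer/gpt2-chinese-cluecorpussmall"          # assumed 12-layer Chinese GPT2
tok, model = BertTokenizerFast.from_pretrained(name), GPT2LMHeadModel.from_pretrained(name)

pairs = [("感冒了怎么办？", "注意休息，多饮水，必要时对症用药。")]   # stand-in for Huatuo-26M
def encode(q, a):
    q_ids = tok.encode(q, max_length=256, truncation=True)
    a_ids = tok.encode(a, max_length=512, truncation=True, add_special_tokens=False)
    return torch.tensor(q_ids + a_ids)

loader = DataLoader([encode(q, a) for q, a in pairs], batch_size=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = get_linear_schedule_with_warmup(opt, int(0.1 * len(loader)), len(loader))

model.train()
for ids in loader:                                 # one epoch over the training pairs
    loss = model(input_ids=ids, labels=ids).loss
    loss.backward(); opt.step(); sched.step(); opt.zero_grad()
print("training loss:", float(loss))
```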
\begin{table} \begin{tabular}{l l r r r r r} \hline \hline Data source & Model & Recall @5 & Recall @20 & Recall @100 & Recall @1000 & MRR @10 \\ \hline \multirow{3}{*}{Medical consultant records} & BM25 & 4.91 & 6.99 & 10.37 & 17.97 & 3.82 \\ & DeepCT & **7.60** & 10.28 & 14.28 & 22.85 & **6.06** \\ & DPR & 6.79 & **11.91** & **20.96** & **42.32** & 4.52 \\ \hline \multirow{3}{*}{Encyclopedias} & BM25 & 4.58 & 8.71 & 17.82 & 39.91 & 3.10 \\ & DeepCT & **20.33** & 26.92 & 36.61 & 53.41 & **16.25** \\ & DPR & 16.01 & **27.25** & **45.33** & **78.30** & 11.20 \\ \hline \multirow{3}{*}{Knowledge bases} & BM25 & 0.52 & 1.02 & 1.82 & 3.51 & 0.38 \\ & DeepCT & 1.05 & 1.46 & 2.10 & 3.29 & 0.71 \\ & DPR & **2.66** & **5.25** & **11.84** & **33.68** & **1.83** \\ \hline \multirow{3}{*}{ALL} & BM25 & 4.77 & 6.83 & 10.21 & 17.84 & 3.71 \\ & DeepCT & **7.58** & 10.24 & 14.22 & 22.68 & **6.04** \\ \cline{1-1} & DPR & 6.79 & **11.92** & **21.02** & **42.55** & 4.53 \\ \hline \hline \end{tabular} \end{table} Table 4: Retrieval-based benchmark the Huatuo-26M dataset. Results are separated for different data sources. Evaluation MetricsWe use BLEU, ROUGE, GLEU, and Distinct as evaluation indicators. **BLEU** evaluates the similarity of generated and reference sentences by computing the k-gram overlap between the generated utterance and the reference. **ROUGE-N** measures the N-gram overlap between the generated sentence and the reference, and ROUGE-L measures the longest sequence of word matches using the longest common subsequence. **GLEU** automatically evaluates sentence-level fluency by examining different parsers. **Distinct-1/2** is an auxiliary metric for evaluating the textual diversity of the generated response by calculating the number of distinct n-grams. #### 4.2.2 Results The results of the generation benchmark are summarized in Table 5. Obviously, the fine-tuned T5 and GPT2 models have improved significantly compared to the raw T5 and GPT2 models without fine-tuning, especially fine-tuned T5 has achieved the best results in all evaluation indicators. Note the performance of the generation method seems relatively weak (with relatively low scores in these generation metrics), this is probably because the expected answers are typically long and it is more difficult to generate exactly-same long answer than a short answer (like entities in some general QA tasks, e.g. Natural Questions Kwiatkowski et al. (2019)). We warn that generation-based medical QA is risky. Since it is difficult to verify the correctness of generated content; misleading information in the medical domain might lead to more severe ethic issues. We benchmark these generation methods because generation methods in QA are nowadays more promising than retrieval methods thanks to the success of ChatGPT. However, they are not ready to be deployed in the real world. ## 5 Applications This section will demonstrate the usefulness of the proposed dataset from many aspects: transfer for other QA datasets, as external knowledge, and as a pre-training corpus in Sec. 5.1, 5.2, and 5.3. ### Transfer for Other QA Dataset In this section, we will explain how Huatuo-26M is beneficial to the existing QA dataset. Problem SettingIn this section, we directly apply the model pre-trained on the Huatuo-26M dataset and evaluate it on other answer generation datasets. A similar configuration could be found in T5-CBQA Roberts et al. (2020). Experimental SettingsWe selected two existing Chinese medical QA datasets as examples, namely cMedQA2 Zhang et al. 
(2018) and webMedQA He et al. (2019). **cMedQA2** is a publicly available dataset based on Chinese medical questions and answers consisting of 108,000 questions and 203,569 answers. **webMedQA** is a real-world Chinese medical QA dataset collected from online health consultancy websites consisting of 63,284 questions. We select the correct QA pairs from these two datasets to train our generation model. The model settings of T5 and GPT2 follow Sec. 4.2.1. ResultsAs shown in Table 6, the performance of the model pre-trained on the Huatuo-26M dataset is much higher than the raw models. Especially, additionally training on Huatuo-26M improves the raw T5 models with 25.42 absolute points in cMedQA2 and 22.73 absolute points in webMedQA. Moreover, in cMedQA2 dataset, T5 trained in Huatuo-26M which never sees neither the training set nor test of cMedQA2, outperforms T5 trained by cMedQA2 in terms of BLEU-1. This evidences that Huatuo-26M includes a wide range of medical knowledge, which is beneficial for downstream medical tasks. Moreover, using Huatuo-26M as a training set achieves better performance on cMedQA2 than using its own training set, this is probably due to the large scale of Huatuo-26M that might have related information in cMedQA2. This shows a great potential of Huatuo-26M for transfer \begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & Distinct-1 & Distinct-2 \\ \hline T5 & 0.33 & 0.18 & 0.12 & 0.07 & 0.10 & 0.67 & 0.19 & 0.63 & 0.01 & 0.02 \\ T5 (fine-tuned) & **26.63** & **16.74** & **11.77** & **8.46** & **11.38** & **33.21** & **13.26** & **24.85** & **0.51** & **0.68** \\ GPT2 & 10.04 & 4.60 & 2.67 & 1.62 & 3.34 & 14.26 & 3.42 & 12.07 & 0.17 & 0.22 \\ GPT2 (fine-tuned) & 23.42 & 14.00 & 9.35 & 6.33 & 9.47 & 30.48 & 11.36 & 23.15 & 0.43 & 0.58 \\ \hline \hline \end{tabular} \end{table} Table 5: Generation based benchmark on Huatuo-26M. learning in Chinese medicine. ### As an External Knowledge Problem SettingRAG Lewis et al. (2020) combines pre-trained parametric and non-parametric memory (i.e., external knowledge) for generation, by doing which memorization can be decoupled from generalization. Here we use the Huatuo-26M as the external knowledge resource in RAG. For a given question \(q\), we use trained DPR as a retrieval model to get the top-ranked QA pair (\(q_{\text{aug}},q_{\text{aug}}\)) from the QA dataset as an additional input. Experimental SettingConsidering that T5 performs better in zero-shot scenarios than GPT2, we use T5 instead of GPT2 to generate the answer conditioning on a concatenated text \((q_{\text{aug}},q_{\text{aug}},q)\). Since RAG models rely a retrieval model, we first train a Chinese DPR model using our dataset. Then we use the document encoder to compute an embedding for each document, and build a single MIPS index using FAISS Johnson et al. (2017) for fast retrieval. In RAG training, we retrieve the closest QA pair for each question and split it into \((q_{\text{aug}},q_{\text{aug}},q)\) format. We define the maximum text length after splicing as 400, train for 10 epochs with batch size 24 and learning rate 3e-05. The difference between **T5** and **T5** (**Huatuo-26M**) is that the latter was first trained in Huatuo-26M dataset before training in the target dataset (i.e., cMedQA2 or webMedQA). ResultsAs shown in Table 7, we find that the RAG strategy improves the quality of text generation to a certain extent. 
Particularly, on cMedQA2, the model can consistently benefit from the RAG strategy with and without pre-training on the Huatuo-26M dataset. For RAG, we could additionally train backbone models in Huatuo-26M before fine-tuning, as introduced in Sec. 5.1; the improvement of theditional pre-training could be found in cMedQA2 (3 absolute point improvement over purely RAG) but not in webMedQA (nearly 6 absolute point decrease); this might depend on the characteristics of target datasets. ### As a Pre-trained Corpus Problem SettingWe use Huatuo-26M as a pre-trained corpus to continue training existing pre-trained language models like BERT and RoBERTa. #### 5.3.1 Experimental Settings BERTBERT Devlin et al. (2018) is a transformer-based language representation model. **BERT-base** is the original 12-layer BERT and the Chinese BERT is downloaded from [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese). **BERT-base (Huatuo-26M)** is the model initialized by **BERT-base** and continuously trained by the Huatuo-26M dataset using masked language model. We trained the model for 10 epochs with a learning rate \(5^{-5}\) with batch size 64. Questions and answers are spliced together, and the maximum length is 256. RoBERTaRoBERTa Liu et al. (2019) is a better-optimized BERT model. The Chinese Roberta is downloaded from [https://huggingface.co/hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). **RoBERTa-base** is with 12 layers and **Roberta-large** is with 24 layers. **RoBERTa-base (Huatuo-26M)** is the model initialized by **RoBERTa-base** and continuously trained by the Huatuo-26M dataset using masked language model.We trained the model for 10 epochs with a learning rate \(5^{-5}\) with a batch size 64. Questions and answers are spliced together, and the maximum length is 256. ZenDiao et al. (2019) a BERT-based Chinese text encoder augmented by N-gram representations \begin{table} \begin{tabular}{l l r r r r r r r r r} \hline \hline Dataset & Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & Distinct-1 & Distinct-2 \\ \hline \multirow{3}{*}{**cMedQA2**} & GPT2 (raw) & 9.96 & 4.30 & 2.33 & 1.33 & 3.18 & 13.85 & 3.07 & 11.60 & 0.175 & 0.218 \\ & 0.23 & 0.12 & 0.07 & 0.04 & 0.07 & 0.53 & 0.13 & 0.50 & 0.014 & 0.015 \\ & GPT2 (fine-tuned by Huatuo-26M) & 23.34 & 13.27 & 8.49 & 5.55 & 8.97 & 29.10 & 9.81 & 21.27 & 0.462 & 0.611 \\ & T5 (fine-tuned by Huatuo-26M) & **25.65** & **14.94** & **9.79** & **6.64** & **10.03** & **30.64** & **10.49** & **21.48** & **0.543** & **0.727** \\ \hline \multirow{3}{*}{**webMedQA**} & T5 (fine-tuned by cMedQA2) \({}^{\dagger}\) & 20.88 & 11.87 & 7.69 & 5.09 & 7.62 & 27.16 & 9.30 & 20.11 & 0.418 & 0.526 \\ \cline{1-1} \cline{2 that take different character combinations into account during training. ZEN thus combines comprehensive information about character sequences and the words or phrases they contain. MacBERTCui et al. (2020) reduces the gap between the pre-training and fine-tuning stages by covering words with a similar vocabulary to it, which is effective for downstream tasks. It replaces the original MLM task with the MLM for correction (Mac) task, and mitigates the difference between the pre-training and fine-tuning stages. Mc-BertZhang et al. 
(2020) study how the pre-trained language model BERT adapts to the Chinese biomedical corpus, and propose a new conceptual representation learning method that a coarse-to-fine cryptographic strategy is proposed to inject entity and linguistic domain knowledge into representation learning. #### 5.3.2 Experimental Data We evaluated BERT and RoBERTa trained on the Huatuo-26M dataset on the CBLUE Zhang et al. (2022). CBLUE is the first Chinese medical language understanding evaluation benchmark platform, including a collection of natural language understanding tasks such as named entity recognition, information extraction, and single sentence/sentence pair classification. #### 5.3.3 Results As shown in Table 8, BERT and RoBERTa trained on the Huatuo-26M dataset have significantly improved the performance of CBLUE. The trained 12-layer RoBERTaHuatuo-26M) model outperforms the 24-layer Roberta model in terms of average scores, demonstrating that the Huatuo-26M dataset is rich in medical information. The average score of the RoBERTa-base (Huatuo-26M) model is 0.8 percentage points higher than that of the RoBERTa-base model and 0.5 percentage points higher than that of the MC-BERT-base model. ## 6 Conclusion In this paper, we propose the largest Chinese medical QA dataset to date, consisting of **26 Million** medical QA pairs, expanding the size of existing datasets by more than 2 orders of magnitude. At the same time, we benchmark many existing works based on the data set and found that these methods still have a lot of room for improvement in medical QA scenarios. We also demonstrate the possible uses of the dataset in practice. The experimental results show that the dataset contains rich medical knowledge that can be very helpful to existing \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & Distinct-1 & Distinct-2 \\ \hline \hline **cMedQA2 Fine-tuned** & & & & & & & & & \\ T5 & 20.88 & 11.87 & 7.69 & 5.09 & 7.62 & 27.16 & 9.30 & 20.11 & 0.418 & 0.526 \\ T5-RAG & 25.86 & 18.48 & 15.26 & 13.02 & 14.27 & 34.24 & 17.69 & 27.54 & 0.395 & 0.516 \\ T5(Huatuo-26M) & 28.76 & 17.08 & 11.67 & 8.41 & 10.45 & 29.79 & 10.23 & 20.68 & **0.647** & **0.831** \\ T5(Huatuo-26M)-RAG & **31.85** & **22.77** & **18.70** & **15.96** & **17.08** & **37.01** & **19.23** & **28.72** & 0.573 & 0.760 \\ \hline **webMedQA Fine-tuned** & & & & & & & & & & \\ T5 & 21.42 & 13.79 & 10.06 & 7.38 & 8.94 & 31.00 & 13.85 & 25.78 & 0.377 & 0.469 \\ T5-RAG & 20.30 & 13.29 & 9.97 & 7.61 & 9.40 & 32.40 & 14.88 & 27.25 & 0.285 & 0.377 \\ T5(Huatuo-26M) & **31.47** & **20.74** & **15.35** & **11.60** & **12.96** & 34.38 & 15.18 & 26.72 & **0.651** & **0.832** \\ T5(Huatuo-26M)-RAG & 25.56 & 16.81 & 12.54 & 9.58 & 11.80 & **34.88** & **15.59** & **27.43** & 0.447 & 0.611 \\ \hline \hline \end{tabular} \end{table} Table 7: The comparison with or without using Huatuo-26M as an external RAG corpus. The difference with Tab. 6 is that here we finally fine-tune these models in the target datasets. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Model & CMedEE & CMedIE & CDN & CTC & STS & QIC & QTR & QQR & **Avg-ALL** \\ \hline BERT-base & **62.1** & **54.0** & 55.4 & 69.2 & 83.0 & 84.3 & 60.0 & **84.7** & 69.1 \\ **BERT-base (Huatuo-26M)** & 61.8 & 53.7 & **56.5** & **69.7** & **84.6** & **86.2** & **62.2** & **84.7** & **69.9** \\ \hline RoBERTa-base & 62.4 & 53.7 & 56.4 & 69.4 & 83.7 & 85.5 & 60.3 & 82.7 & 69.3 \\ RoBERTa-large & 61.8 & **55.9** & 55.7 & 69.0 & **85.2** & 85.3 & **62.8** & 84.4 & 70.0 \\ **RoBERTa-base (Huatuo-26M)** & **62.8** & 53.5 & **57.3** & **69.8** & 84.9 & **86.1** & 62.0 & **84.7** & **70.1** \\ \hline ZEN Diao et al. (2019) & 61.0 & 50.1 & 57.8 & 68.6 & 83.5 & 83.2 & 60.3 & 83.0 & 68.4 \\ MacBERT Cui et al. (2020) & 60.7 & 53.2 & 57.7 & 67.7 & 84.4 & 84.9 & 59.7 & 84.0 & 69.0 \\ MC-BERT Zhang et al. (2020) & 61.9 & 54.6 & 57.8 & 68.4 & 83.8 & 85.3 & 61.8 & 83.5 & 69.6 \\ \hline \hline \end{tabular} \end{table} Table 8: The performance on the test set of CBLUE evaluation. We use Huatuo-26M as a pre-trained corpus. The results including Zen, MacBERT, and MC-BERT are from the official website. datasets and tasks. We hope that the Huatuo-26M dataset can not only help promote the research of medical QA, but also practically help doctors and patients. ### Limitation The dataset might contain some wrong medical information since its scale is large with 26M QA pairs and manual checking by experts in nearly impossible in the current stage. To better maintain the dataset, we aim to build an online website where clinical doctors or experts could modify these QA pairs. This might be done by recruiting part-time doctors to first check these data and regularly update them. This dataset might be translated into other languages, especially low-resource languages. Note that the translation might introduce some additional errors. Moreover, one should also be noticed some basic differences between traditional Chinese medicine and western medicine. For medical consultation, the treatment/suggestions vary from person to person. In other words, it might be highly dependent on the individual's situation, e.g., age and gender, whether the main symptoms such as pain are accompanied by other symptoms, or whether the symptoms are early or late. The information might need to be confirmed in a multi-turn dialogue instead of single-turn QA. In the future, we would explore dialogue systems for medical QA. ### Ethics Statement As we mentioned in the limitation, the collected data might still have wrong medical information, which comes from two aspects: 1) doctors might make mistakes in online medical consultation, especially given the fact patience might expose incomplete information; and 2) the automatic extraction of QA pairs might also introduce some inaccurate information. Although the data scale is too large to manually check by medical experts, we have made some efforts to reduce its negative effects. We have highlighted these concerns in many parts of this paper and warned readers. ### Dataset Download All data are crawled from open-source resources. For these data resources where we extract question-answering pairs, namely online encyclopedias, and knowledge bases, we directly provide full-text question-answering pairs. 
For the raw data we crawled as question-answering pairs, like online consultation records, we provide two versions: a **URL version** that provides a URL website associated with a question-answering pair; and a **full-text version** that directly provides full texts for question-answering pairs. Huatuo-26 providing URL links for online consultation records is fully open-sourced 11. While Huatuo-26 provides full texts for all QA pairs is only open-sourced to research institutes or universities if they agree on a license to promise for the purpose of research only. Footnote 11: The temporary download link is in [https://drive.google.com/file/d/1SKsU8owLt3IWZPLInPytpCwm8-EH3iW6/view](https://drive.google.com/file/d/1SKsU8owLt3IWZPLInPytpCwm8-EH3iW6/view), QA pairs from encyclopedias and knowledge bases are full-text and complete, but one has to crawl QA pairs from online medical consultation records by itself. This is to avoid data misuse from some companies or individuals.
2308.08298
White Dwarf Mass Growth in Cataclysmic Variables: Roles of Dwarf Novae
The disc instability mechanism (DIM) is widely accepted to account for the transient behaviour of dwarf novae (DNe), which experience short outbursts separated by long quiescence. The duty cycle (the ratio between the outburst duration and the recurrence time) determines the amount of mass accreted by the white dwarf (WD) during outbursts, and thus plays an important role in the long-term binary evolution. Employing the Modules for Experiments in Stellar Astrophysics code, we systematically investigate the influence of the duty cycle on the evolution of DNe and the mass growth of accreting carbon-oxygen (CO) WDs. Our calculations show that, while the DIM can considerably influence the accretion process, efficient WD-mass growth requires a particular range of the duty cycle. For WDs with initial masses of 0.6, 0.7 and 1.1 $M_\odot$, these duty cycles are $0.006 \leq d \leq 0.007$, $d = 0.005$ and $d = 0.003$, and the accumulated mass of the WDs can reach 0.1, 0.13 and 0.21 $M_\odot$, respectively. In all of our simulations, no CO WDs can grow their masses to the Type Ia supernova explosion mass of about $1.38~M_\odot$. Because of the much shorter timescale of the outburst state, the final donor-star masses and orbital periods are insensitive to the duty cycle. Therefore, we propose that the DIM in DNe could alleviate the WD mass problem to some extent.
Wei-Min Liu, Long Jiang, Wen-Cong Chen, Xiang-Dong Li
2023-08-16T11:59:21Z
http://arxiv.org/abs/2308.08298v1
# White Dwarf Mass Growth in Cataclysmic Variables: Roles of Dwarf Novae ###### Abstract The disc instability mechanism (DIM) is widely accepted to account for the transient behaviour of dwarf novae (DNe), which experience short outbursts separated by long quiescence. The duty cycle (the ratio between the outburst duration and the recurrence time) determines the amount of accreted mass by the white dwarf (WDs) during outbursts, thus playing an important role in the long-term binary evolution. Employing the code of Modules for Experiments in Stellar Astrophysics, we systemically investigate the influence of the duty cycles on the evolution of DNe and the mass growth of accreting carbon-oxygen (CO) WDs. Our calculations show that, while the DIM can considerably influence the accretion process, efficient WD-mass growth requires a particular range of the duty cycle. For WDs with the initial masses of 0.6, 0.7 and 1.1 \(M_{\odot}\), these duty cycles are \(0.006\leq\)\(d\leq\)0.007, \(d=0.005\) and \(d=0.003\), and the accumulated mass of the WDs can reach 0.1, 0.13 and 0.21 \(M_{\odot}\), respectively. In all of our simulations, no CO WDs can grow their masses to the explosion mass of Type Ia supernovae of about 1.38 \(M_{\odot}\). Because of a much short timescale of the outburst state, the final donor-star masses and orbital periods are insensitive to the duty cycles. Therefore, we propose that the DIM in DNe could alleviate the WD mass problem to some extent. keywords: stars: dwarf novae - novae, cataclysmic variables - stars: white dwarfs -stars: evolution ## 1 Introduction Cataclysmic variables (CVs) are short-period interacting binaries in which a low-mass donor star is transferring materials to a carbon-oxygen (CO) white dwarf (WDs, see Warner, 1995; Ritter, 2010; Knigge et al., 2011, for reviews). Angular momentum loss (AML) plays a crucial role in the secular evolution of CVs. It is traditionally thought that, above the period gap (orbital periods between 2 and 3 hours), magnetic braking (MB, Verbunt and Zwaan, 1981; Rappaport et al., 1983) dominates the evolution of CVs, while gravitational radiation (GR, Landau and Lifshitz, 1975) is solely responsible for AML below the period gap because MB ceases a fully convective donor star (Para et al., 2017). However, it still remains controversial whether MB really completely disappears in this situation (Patterson, 1998; Knigge et al., 2011; Sarkar and Tout, 2022). Dwarf novae (DNe) are a subtype of CVs which switch between long-term quiescence and short-term outbursts (Osaki, 1974). According to the thermal-viscous disc instability mechanism (DIM, Osaki, 1996; Hellier, 2001), the mass-transfer rate from the donor star during quiescence is much greater than that from the disc onto the WD. Therefore, the transferred material piles up in the disc and the temperature of the disc rises owing to viscous heating. When the disc material becomes fully ionised, a sudden mass transfer onto the WD occurs, leading to an outburst. Considering the influence of DIM on the mass-transfer process, King et al. (2003) presented an alternative evolutionary channel for long-period DNe, and showed that a WD with initial mass \(~{}0.7M_{\odot}\) could grow to the explosion mass of Type Ia supernovae (SNe Ia, about 1.38 \(M_{\odot}\)) if the system experiences thermal-timescale mass transfer in the early evolutionary stages. 
Adopting the similar idea, Xu and Li (2009) investigated accretion-induced collapse of WDs in long-period DNe, and suggested that the evolution can well account for the formation of the bursting pulsar GRO J1744-28 if the duty cycle (i.e., the ratio between the outburst duration and the recurrence time) is \(d=0.003\). Wang et al. (2010) explored in detail the formation of long delay-time SNe Ia taking into account DIM (with \(d=0.01\)). They showed that it is possible for a 0.6 \(M_{\odot}\) WD to grow to the Chandrasekhar limit with an initial donor star of mass less than 1.7 \(M_{\odot}\). Subsequent studies on the evolution of WD binaries (Chen et al., 2014; Toonen, Voss, and Knigge, 2014; Liu et al., 2019), neutron star X-ray binaries (Liu and Chen, 2011; Jia and Li, 2016; Gao, Li, and Shao, 2022), and black hole X-ray binaries (Shao and Li, 2020) also adopted constant value of \(d\). However, the derived \(d\) from observations of DNe range from a few \(10^{-3}\) to around 0.5, which might be related to the orbital properties of the binaries (Britt et al., 2015). Since the accretion behaviour of the WD during outbursts sensitively depend on the magnitude of \(d\), using single value of \(d\) is obviously a too simplified assumption. Our purpose is to systematically model the evolutionary processes of DNe that experience the accretion-disc instability, and investigate the influence of the initial parameters including the initial WD mass, initial donor-star mass, initial orbital period, and duty cycle on the evolution of DNe. Meanwhile, we hope to contribute an evolutionary channel to solve the WD mass problem. In observations, the average mass of isolated WDs is about 0.6 \(M_{\odot}\)(Kepler et al., 2007), while the mean mass of WDs in CVs are obviously more massive (about 0.8 \(M_{\odot}\)) (Zorotovic et al., 2011). Schreiber et al. (2016) argued that this mass problem can be solved by consequential AML, but the subsequent study indicated that the proposed physical interpretation of this empirical consequential AML still need to be improved (Liu & Li, 2019). As mentioned above, both King et al. (2003) and Xu & Li (2009) present an evolutionary example with a low duty cycle (\(d\approx 0.003\)). An observational study of dwarf nova AX J1549.8-5416 implied that a duty cycle of 0.5 (Zhang et al., 2017). Therefore, the duty cycles might have a wide distribution, \(d=0.01\) used in the investigation on the progenitors of SNe Ia is probably implausible for the evolution of DNe. A thorough research for the influence of duty cycles on the evolution of DNe is required. In Section 2, we describe the input physics and binary evolution code. The numerically calculated results are presented in Section 3. Sections 4, and 5 give a brief discussion and conclusion, respectively. ## 2 Input physics and binary evolution code In this section, we firstly introduce the parameterised condition for DIMs of DNe, and a critical mass-transfer rate under which DIMs occur. Subsequently, the mass-growth law of WDs during the hydrogen and helium burning, and detailed settings in the binary evolutionary code are presented. ### Mass Transfer and Accretion Disc Instability By parameterising the properties of DNe, Smak (1983) found that the accretion in WD binaries is unstable if the mass-transfer rate (\(\dot{M}_{\rm d}\)) from the donor star is less than a critical mass-transfer rate \(\dot{M}_{\rm d,cr}\), which is related with the orbital period. 
Actually, this critical mass-transfer rate depends on the effective temperature at the outer edge of the accretion disc. Because of the thermal and viscous instability of the accretion disc, the accretion flow is unstable if the effective temperature is below a critical temperature as \[T_{\rm cr}\approx 7943(R_{\rm disc}/10^{10}\ {\rm cm})^{-0.1}\ {\rm K}, \tag{1}\] where \(R_{\rm disc}\) is the outer edge radius of the accretion disc (Smak, 1983). This critical temperature is consistent with the hydrogen ionisation temperature of about 6500 K (King et al., 1997). Corresponding to the critical temperature, the critical mass-transfer rate can be written as (Van Paradijs, 1996) \[\dot{M}_{\rm d,cr}=4.3\times 10^{-9}(P_{\rm orb}/4\,{\rm hr})^{1.7}\ M_{ \odot}\,{\rm yr}^{-1}, \tag{2}\] where \(P_{\rm orb}\) is the orbital period. If the mass-transfer rate from the donor star is less than \(\dot{M}_{\rm d,cr}\), the accretion disc would suffer thermal and viscous instability. Once the accretion disc is unstable, the mass accretion on to the WD would transit between the short outburst state and the long quiescent state. Given a recurrence time \(t_{\rm rec}\), the duty cycle is defined as \(d=t_{\rm out}/t_{\rm rec}\), where \(t_{\rm out}\) is the duration of the outburst state. The mass accumulated in the accretion disc within the quiescent time (which is close to the recurrence time when \(d\ll 1\)) is assumed to be accreted by the WD in the outburst stage, i.e. \(|\dot{M}_{\rm d}|\times t_{\rm rec}=\dot{M}_{\rm acc}\times t_{\rm out}\). Due to the disc instability, the accreting WD has a chance to experience a steady mass accumulation, resulting in an efficient mass growth of the accreting WD (Kahabka & van den Heuvel, 1997). Therefore, we take an accretion rate of the WD as follows \[\dot{M}_{\rm acc}=\left\{\begin{array}{ll}|\dot{M}_{\rm d}|/d,&|\dot{M}_{\rm d }|\leq\dot{M}_{\rm d,cr}\\ |\dot{M}_{\rm d}|,&|\dot{M}_{\rm d}|\geq\dot{M}_{\rm d,cr}\end{array}\right. \tag{3}\] It is noteworthy that the WD only accretes mass from the accretion disc at a rate of \(\dot{M}_{\rm acc}\) during the outburst state once the disc instability occurs. ### Mass Growth of WDs During the accretion of the WD, its mass growth depends on the mass accumulation efficiencies during the hydrogen and helium burning stages. Similar to Liu & Li (2019), the mass-growth rate of a WD is given by \[\dot{M}_{\rm WD}=\eta_{\rm H}\eta_{\rm He}\dot{M}_{\rm acc}, \tag{4}\] where \(\eta_{\rm H}\) and \(\eta_{\rm He}\) are the mass accumulation efficiencies for hydrogen and helium burning, respectively. We follow Hillman et al. (2016) and Kato & Hachisu (2004) for the description of \(\eta_{\rm H}\) and \(\eta_{\rm He}\), which are determined by the WD mass and the accretion rate (see also Liu & Li, 2016, for more details). There exists an accretion-rate range for the steady burning of the accreting hydrogen. The upper critical accretion rate for steady hydrogen burning can be written as (Han & Podsiadlowski, 2004) \[\dot{M}_{\rm H,up}=5.3\times 10^{-7}\left(\frac{1.7-X}{X}\right)(M_{\rm WD }/M_{\odot}-0.4)\ M_{\odot}\,{\rm yr}^{-1}, \tag{5}\] where \(X\) is the hydrogen abundance of the donor star. When \(\dot{M}_{\rm acc}\) is greater than \(\dot{M}_{\rm H,up}\), the accreted Hydrogen stably burns into He at a rate of \(\dot{M}_{\rm H,up}\), and the excess material is blown away in the form of the optically thick winds (Kato & Hachisu, 1994). 
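As a concrete illustration of the accretion prescription in Eqs. (2)–(5), the short sketch below evaluates the duty-cycle-dependent accretion rate and the resulting WD growth rate for given inputs. It is a simplified stand-in, not the actual MESA implementation: the retention efficiencies \(\eta_{\rm H}\) and \(\eta_{\rm He}\) are left as unit placeholders for the tabulated prescriptions of Hillman et al. (2016) and Kato & Hachisu (2004), and the numbers in the example are illustrative.

```python
# Units: masses in solar masses, rates in Msun/yr, orbital periods in hours.

def mdot_crit(p_orb_hr):
    """Critical mass-transfer rate below which the disc is unstable (Eq. 2)."""
    return 4.3e-9 * (p_orb_hr / 4.0) ** 1.7

def mdot_acc(mdot_d, p_orb_hr, duty_cycle):
    """Accretion rate onto the WD during outbursts (Eq. 3)."""
    if abs(mdot_d) <= mdot_crit(p_orb_hr):   # disc instability: outburst accretion
        return abs(mdot_d) / duty_cycle
    return abs(mdot_d)                       # stable disc: continuous accretion

def mdot_h_up(m_wd, x_h=0.70):
    """Upper critical rate for steady hydrogen burning (Eq. 5, Han & Podsiadlowski 2004)."""
    return 5.3e-7 * ((1.7 - x_h) / x_h) * (m_wd - 0.4)

def mdot_wd(mdot_d, p_orb_hr, duty_cycle, m_wd, eta_h=1.0, eta_he=1.0):
    """WD growth rate (Eq. 4). eta_h/eta_he are placeholders for the tabulated
    retention efficiencies; accretion above mdot_h_up burns at mdot_h_up and the
    excess is lost in optically thick winds."""
    rate = min(mdot_acc(mdot_d, p_orb_hr, duty_cycle), mdot_h_up(m_wd))
    return eta_h * eta_he * rate

# Example: a 0.7 Msun WD fed at 1e-9 Msun/yr in a 4-hour orbit with d = 0.005
print(mdot_wd(1e-9, 4.0, 0.005, 0.7))   # ~2e-7 Msun/yr during outbursts
```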
Furthermore, if the mass-accretion rate for the hydrogen material is lower than \(\dot{M}_{\rm H,low}=3\times 10^{-8}\ M_{\odot}\,{\rm yr}^{-1}\), strong hydrogen-shell flashes take place, and all accreted material is ejected from the surface of the WD (Kovetz & Prialnik, 1994). During the hydrogen and helium burning, the excess material in unit time (\(\dot{M}_{\rm acc}-\dot{M}_{\rm WD}\)) is thought to carry away the specific AM of the WD. ### Binary Evolutionary Code We carried out binary evolutionary calculations of DNe using the Modules for Experiments in Stellar Astrophysics (MESA, Paxton et al., 2015). At the beginning of the model, a binary system containing a WD and an MS companion star is assumed to exist in a circular and synchronised orbit. In the calculations, the WD is thought to be a point mass. For the chemical abundance of the donor star, we take a solar composition (\(X=0.70\), \(Y=0.28\) and \(Z=0.02\)). Our simulations depend on four initial input parameters including the initial donor-star mass \(M_{\rm d,i}\), initial WD mass \(M_{\rm WD,i}\), initial orbital period \(P_{\rm orb,i}\), and duty cycle \(d\). Except for AML by the mass loss mentioned in Section 2.2, we consider systematic AML including the MB above the period gap and GR (Landau & Lifshitz, 1975). For MB mechanism, we adopt the standard MB description given by Rappaport et al. (1983), and take the MB index as \(\gamma=4\)(Verbunt & Zwaan, 1981). ## 3 Calculated results We take the initial WD masses to be \(M_{\rm WD,i}=0.6\), 0.7 and 1.1 \(M_{\odot}\), and the corresponding initial donor star masses to be \(M_{\rm d,i}=0.5\) 0.6 and 0.7 \(M_{\odot}\)), (0.7, 0.8 and 0.9 \(M_{\odot}\)), and (0.8, 0.9 and 1.0 \(M_{\odot}\)), respectively. The initial orbital periods are taken to be three typical values in the range from 0.5 to 1.58 d. We model the evolution of 27 WD binaries to investigate the influence of duty cycles on the mass growth of CO WDs in DNe. King et al. (2003) derived a duty cycle range from a few times \(10^{-3}\) to 0.1 based on the observations (Warner, 1995). Therefore, we set \(d=0.002,0.005,0.01,0.05\), and 0.1, which cover the plausible range of duty cycles. Tables 1, 2, and 3 summarise the initial parameters and the final WD masses for \(M_{\rm WD,i}=0.6\), 0.7, and 1.1 \(M_{\odot}\), respectively. For the same \(P_{\rm orb,i}\) and a fixed \(d\), a high mass ratio \(q\) (mass ratio between the donor-star mass to the WD mass) tends to produce a high mass growth of WDs. Similarly, a long initial orbital period \(P_{\rm orb,i}\) also give rise to a massive WD for a fixed donor-star mass and \(d\) because the donor stars with long \(P_{\rm orb,i}\) experience a deeply nuclear evolution before they fill Roche lobes, resulting in a relatively high mass-transfer rate. Our simulations indicate that duty cycles play vital roles in influencing the mass growth of WDs. A small \(d\) tends to produce a high mass growth of WDs. The initial parameters of the WD masses are \(M_{\rm WD,i}=0.7\) and \(M_{\odot}\), respectively. For the same \(P_{\rm orb,i}\) and a fixed \(d\), a high mass ratio \(q\) (mass ratio between the donor-star mass to the WD mass) tends to produce a high mass growth of WDs. Similarly, a long initial orbital period \(P_{\rm orb,i}\) also give rise to a massive WD for a fixed donor-star mass and \(d\) because the donor stars with long \(P_{\rm orb,i}\) experience a deeply nuclear evolution before they fill Roche lobes, resulting in a relatively high mass-transfer rate. 
Our simulations indicate that duty cycles play vital roles in influencing the mass growth of WDs. A small \(d\) tends to produce a high mass growth of WDs. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(M_{\rm d,i}/M_{\odot}\) & \(M_{\rm WD,i}/M_{\odot}\) & \(M_{\rm WD,f}/M_{\odot}\) & \(P_{\rm orb,i}/d\) & duty cycle \\ \hline \hline [MISSING_PAGE_POST] \hline \hline \end{tabular} \end{table} Table 1: Initial parameters and the final WD masses for \(M_{\rm WD,i}=0.6\)\(M_{\odot}\) Figure 1: Evolution of a dwarf nova with \(M_{\rm d,i}=0.6~{}M_{\odot}\), \(M_{\rm WD,i}=0.6~{}M_{\odot}\) and \(P_{\rm orb,i}=0.5\) d for \(d=0.005\) (upper panels), and 0.05 (bottom panels). In the left panels, the orange, red, black, and blue curves represent the evolutionary tracks of the accretion rate \(\dot{M}_{\rm acc}\) onto the WD during outburst states, the critical mass-transfer rate \(\dot{M}_{\rm d,cr}\) below which DIMs occur, the mass-transfer rate \(|M_{\rm d}|\) of the donor star, and the WD mass, respectively. The horizontal dashed lines indicate \(\dot{M}_{\rm H,low}\) below which strong H-shell flash occurs. In the right panels, the black, and blue curves denote the evolutionary tracks of the donor-star mass, and the orbital period, respectively. Figure 4: Same as Figure 1, but with \(M_{\rm d,i}\) = 0.9 \(M_{\odot}\), \(M_{\rm WD,i}\) = 0.7 \(M_{\odot}\) and \(P_{\rm orb,i}\) = 0.5 d massive WD when \(d=0.005-0.1\). However, the final WD masses \(M_{\rm WD,\,f}\) with \(d=0.002\) are less than those with \(d=0.005\). A detailed discussion on this issue see also Section 4. In particular, in Table 3 the WDs with \(d=0.05\), and 0.1 cannot accumulate any matter because these systems experience helium novae stages. Furthermore, almost all WDs cannot increase their masses over \(0.01M_{\odot}\) if \(d=0.1\). Although the actual \(d\) may exceed 0.1 observationally (Zhang et al., 2017). Therefore, the influence of a duty cycle greater than 0.1 on the mass growth of WDs is negligible. To show more evolutionary details, Figures 1 to 5 present five examples for different initial parameters. Figure 1 plots the evolution of a WD binary with \(M_{\rm d,i}=0.6\)\(M_{\odot}\), \(M_{\rm WD,i}=0.6\)\(M_{\odot}\), and \(P_{\rm orb,i}=0.5\) d. At the age of 0.32 Gyr, the donor star fills its Roche lobe, and transfers the starby hydrogen-rich material onto the WD at a rate of about \(10^{-9}-10^{-8}\)\(M_{\odot}\,{\rm yr}^{-1}\). Once \(\dot{M}_{\rm d}\leq\dot{M}_{\rm d,cr}\), the accretion-disk instability occurs. The evolutionary tracks of the accretion rates of the WD with different \(d\) emerge similar profiles (Note that the accretion-rate tracks in all figures only represent evolutionary tendency, and the accreted mass of WDs in a recurrence time is \(\Delta M=\dot{M}_{\rm acc}t_{\rm feed}=\dot{M}_{\rm acc}t_{\rm out}\)). This phenomenon originates from that two mass-transfer rates with different duty cycles are approximate same, while the accretion rates during the outburst state emerge a similar evolutionary law due to \(\dot{M}_{\rm acc}=-\dot{M}_{\rm d}/d\). When \(d=0.005\), the accretion rate of the WD is about \(2\times 10^{-7}-2\times 10^{-6}\)\(M_{\odot}\)\({\rm yr}^{-1}\), which is in the range between \(\dot{M}_{\rm H,low}\) and \(\dot{M}_{\rm H,up}\), thus the stable hydrogen burning cause the WD mass to increase to be \(0.686\)\(M_{\odot}\). 
For \(d=0.05\), once a relatively low accretion rate of about \(2\times 10^{-8}-2\times 10^{-7}\)\(M_{\odot}\,{\rm yr}^{-1}\) decreases to be less than \(\dot{M}_{\rm H,low}\), strong hydrogen-shell flash will block the mass growth of the WD. The mass transfer from the donor star divides into two stages: high mass-transfer rate stage, and low mass-transfer rate stage. In the first stage, the mass transfer is driven by the MB. Subsequently, the donor star evolves into a fully convective star (with a mass of about 0.3 \(M_{\odot}\)), thus the magnetic field lines cannot be locked to the core and magnetic braking cuts off (Grossman et al., 1974). As a result, the mass transfer ceases in the period gap of 2-3 hours (Paczynski & Sienkiewicz, 1983; Spruit & Ritter, 1983). Under the period gap, the GR drives the mass transfer again at a low rate of about \(10^{-10}\)\(M_{\odot}\,{\rm yr}^{-1}\). Even if \(d=0.005\), the accretion rate is only about \(2\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\), which is still less than \(\dot{M}_{\rm H,low}\). Therefore, the WD masses keep a constant in the second mass-transfer stage. The right panels showing the evolution of the donor-star mass and the orbital period. It seems that the duty cycles are difficult to influence the evolution of the donor stars and the orbital periods. The initial WD mass and the donor-star mass in Figure 2 are same as Figure 1 but for \(P_{\rm orb,i}=1.0\) d. Because of a long initial orbital period, the mass transfer starts at the age of 3 Gyr. Due to a deeply nuclear evolution of the donor star, a slightly high accretion cause the WD mass to be 0.692 \(M_{\odot}\) and 0.616 \(M_{\odot}\) for \(d=0.005\), and 0.05, respectively. These two WD masses are slightly greater than those (0.686 \(M_{\odot}\) and 0.614 \(M_{\odot}\), see also Table 1) with \(P_{\rm orb,i}=0.5\) d. Figure 3 illustrates the evolution of a WD binary with \(M_{\rm d,i}=0.7\)\(M_{\odot}\), \(M_{\rm WD,i}=0.7\)\(M_{\odot}\), and \(P_{\rm orb,i}=0.5\) d. The evolutionary trends are very similar to those of Figure 1 for a same \(d\). A high \(M_{\rm WD,i}\) leads to a high \(M_{\rm H,up}\), which benefits the mass growth of WDs. Therefore, the mass growth of the WD can reach 0.118 \(M_{\odot}\) and 0.026 \(M_{\odot}\) for \(d=0.005\), and 0.05, respectively. To study the influence of the donor-star masses, in Figure 4 we adopt \(M_{\rm WD,i}\), and \(P_{\rm orb,i}\) same as Figure 3 but for \(M_{\rm d,i}=0.9\)\(M_{\odot}\). Since the donor-star mass is higher than the WD mass, the mass is transferred to the accretion disk at a rate of \(\ga 10^{-8}\)\(M_{\odot}\,{\rm yr}^{-1}\) in the early stage of the first mass-transfer stage, which is greater than \(\dot{M}_{\rm d,cr}\). Without the disc instability, such a mass-transfer rate (is also the accretion rate) is smaller than \(\dot{M}_{\rm H,low}\), hence the WD mass cannot increase in the early stage. However, because of a relatively long mass-transfer timescale the WD masses still increase to be 0.819, and 0.727 \(M_{\odot}\) for \(d=0.005\), and 0.05, respectively. The corresponding mass growths are approximately equal to those in Figure 3. In Figure 5, the initial parameters are \(M_{\rm d,i}=1.0\)\(M_{\odot}\), \(M_{\rm WD,i}=1.1\)\(M_{\odot}\), and \(P_{\rm orb,i}=0.5\) d. When \(d=0.005\), the disc instability can cause the accretion rate of the WD to remain in the hydrogen-steady-burning region (i.e. 
\(\dot{M}_{\rm H,low}\leq\dot{M}_{\rm acc}\leq\dot{M}_{\rm H,up}\)) except for the early mass-transfer stage. Therefore, the accreting material can efficiently accumulate onto the surface of the WD, and the final WD mass is close to 1.30 \(M_{\odot}\). On the other hand, when \(d=0.05\), no accreted matter can be retained on the WD surface. The reason is that the low mass accumulating rate for hydrogen burning cannot supply a high accretion rate for stable helium burning, thus helium novae burst. In the early mass-transfer stage, the mass-transfer rate with \(d=0.005\) temporarily exceeds \(\dot{M}_{\rm d,cr}\) due to AML by the mass loss, resulting in a sudden dip of the accretion rate. However, this phenomenon is absent for \(d=0.05\). ## 4 Discussion In principle, a small duty cycle would produce a high accretion rate during the outburst states. However, a sufficient small duty cycle cannot naturally result in an efficient mass growth of WDs due to the limit of the upper critical mass-accretion rate \(\dot{M}_{\rm H,up}\). Figure 6 shows the influence of duty cycles on the accretion rate and mass growth of the WD for a binary with \(M_{\rm d,i}=0.7\)\(M_{\odot}\), \(M_{\rm WD,i}=0.7\)\(M_{\odot}\), and \(P_{\rm orb,i}=0.5\) d. It is clear that \(\dot{M}_{\rm acc}\) with \(d=0.05\) exceeds the low critical accretion rate (over which the steady H burning is possible) for a short timescale, resulting a tiny mass growth of the WD. Subsequently, strong H-shell flash causes the WD not to accumulate any material because \(\dot{M}_{\rm acc}<\dot{M}_{\rm H,low}\). When \(d=0.005\), \(\dot{M}_{\rm acc}\) (about \(10^{-6}-10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\)) is always greater than \(\dot{M}_{\rm H,low}\) in the first mass-transfer stage, resulting in an efficient mass growth of the WD. For \(d=0.002\), \(\dot{M}_{\rm acc}\) exceeds \(\dot{M}_{\rm H,up}\) in the first mass-transfer stage, thus most of the accreted material is lost from the surface of the WD in the form of the optically thick winds. In the second mass-transfer stage, \(\dot{M}_{\rm acc}\) is still greater than \(\dot{M}_{\rm H,low}\) for a short timescale, while the total mass growth of the WD in two stages is still less than that in \(d=0.005\). An efficient mass accumulation on the surface of WDs depends on whether the accretion rates are in the range from \(\dot{M}_{\rm H,low}\) to \(\dot{M}_{\rm H,up}\), i.e. \(\dot{M}_{\rm H,low}\leq\dot{M}_{\rm acc}\leq\dot{M}_{\rm H,up}\). Therefore, the efficient duty cycle \(d\) is related to the mass-transfer rate, and the WD mass. It requires generous numerical calculations to find an efficient \(d\), which results in a maximum mass growth of the WD. Table 4 summarises the efficient duty cycles and the maxima of \(M_{\rm WD,f}\) for different initial parameters. For a specific WD binary, we adopt different \(d\) in steps of 0.001 to find a maximum \(M_{\rm WD,f}\). Similar to Figure 6, a small \(d\) cannot result in the production of a more massive WD. 
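The existence of an efficient \(d\) can also be seen directly from the steady-burning window: since \(\dot{M}_{\rm acc}=|\dot{M}_{\rm d}|/d\) during outbursts, the condition \(\dot{M}_{\rm H,low}\leq\dot{M}_{\rm acc}\leq\dot{M}_{\rm H,up}\) bounds the duty cycle as \[\frac{|\dot{M}_{\rm d}|}{\dot{M}_{\rm H,up}}\;\leq\;d\;\leq\;\frac{|\dot{M}_{\rm d}|}{\dot{M}_{\rm H,low}}.\] As an illustrative estimate (not a fitted result), for \(|\dot{M}_{\rm d}|\sim 10^{-9}\,M_{\odot}\,{\rm yr}^{-1}\) and \(M_{\rm WD}=0.7\,M_{\odot}\) (for which Eq. 5 gives \(\dot{M}_{\rm H,up}\approx 2.3\times 10^{-7}\,M_{\odot}\,{\rm yr}^{-1}\)), this window spans roughly \(0.004\lesssim d\lesssim 0.03\), consistent with the efficient duty cycles obtained numerically below.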
For initial WD masses of 0.6, 0.7 and 1.1 \(M_{\odot}\) \begin{table} \begin{tabular}{c c c c c} \hline \hline \(M_{\rm d,i}/M_{\odot}\) & \(M_{\rm WD,i}/M_{\odot}\) & \(M_{\rm WD,f}/M_{\odot}\) & \(P_{\rm orb,i}/d\) & duty cycle \\ \hline \hline 0.6 & 0.6 & 0.687 & 0.5 & 0.006 \\ 0.6 & 0.6 & 0.693 & 1.0 & 0.006 \\ 0.7 & 0.6 & 0.692 & 0.5 & 0.007 \\ 0.7 & 0.6 & 0.696 & 1.0 & 0.007 \\ \hline 0.7 & 0.7 & 0.818 & 0.5 & 0.005 \\ 0.7 & 0.7 & 0.831 & 1.0 & 0.005 \\ 0.8 & 0.7 & 0.819 & 0.5 & 0.005 \\ 0.8 & 0.7 & 0.822 & 1.0 & 0.005 \\ \hline 0.9 & 1.1 & 1.290 & 0.5 & 0.003 \\ 0.9 & 1.1 & 1.292 & 1.0 & 0.003 \\ 1.0 & 1.1 & 1.312 & 0.5 & 0.003 \\ 1.0 & 1.1 & 1.311 & 1.0 & 0.003 \\ \hline \hline \end{tabular} \end{table} Table 4: The maximum \(M_{\rm WD,f}\) and the corresponding duty cycles the efficient \(d\) are 0.006 to 0.007, 0.005, and 0.003. It seems that an initial massive WD corresponds to a small efficient \(d\). The reason for this tendency is that the accretion rate of the WD still has chances to be less than the \(\dot{M}_{\rm H,up}\) of a massive WD for a small \(d\). In our simulated WD binaries, all WDs cannot increase their masses to the explosion mass of SNe Ia of about 1.38 \(M_{\odot}\). However, considering the effect of the disc instability Wang et al. (2010) obtained the initial parameter space of the progenitors of SNe Ia in the orbital period-donor star mass plane. Especially, several WD binaries with \(M_{\rm d,i}=1.6\) to 1.8 \(M_{\odot}\), \(M_{\rm WD,i}=0.61\)\(M_{\odot}\), and \(P_{\rm orb,i}=1.2\) to 1.6 d are potential progenitor of SNe Ia. The discrepancy between our results and those by Wang et al. (2010) should arise from different initial parameters including donor-star masses and orbital periods. To achieve an efficient mass growth of the surface of WDs, the initial donor-star masses of the progenitors of SNe Ia must have masses greater than 1.1 \(M_{\odot}\) (see also Wang et al. 2010), which ensures a thermal-timescale mass transfer for a long time. However, our work aims at the mass growth of WDs in DNe, thus the maximum donor-star mass is 1.0 \(M_{\odot}\). Furthermore, Wang et al. (2010) took a constant duty cycle as \(d=0.01\) in their simulations. However, in this work we study the influence of different duty cycles on the mass growth of WDs. Our calculations indicate that the growth of a WD mass can reach a maximum of around 0.1 to 0.15 \(M_{\odot}\) for \(M_{\rm WD,i}=0.6\) to 0.7 \(M_{\odot}\) during the disc instability. The extent of this increase, for low \(M_{\rm WD,i}\) might alleviate the mass discrepancy between single WDs and WDs in CVs (Zorotovic et al., 2011; Liu & Li, 2016). However duty cycles of DNe remains uncertain while the birthrate of DNe amongst all CVs is also poorly known. So it is hard to draw a robust conclusion as to whether DNe can solve this WD mass problem. We also find that the evolution of orbital periods and donor-star mass is hardly affected by duty cycles. This is consistent with a known relation between the donor-star mass and the orbital period. The main reason is that the timescale of the outburst states is much shorter than the recurrence time, and the secular AML due to mass loss is smaller than the contribution of MB. Therefore, DIMs and different duty cycles can mainly influence the WD masses. ## 5 Conclusion In this paper, we explore the influence of the disc instability on the WD-mass growth in DNe. 
Employing the stellar evolution code MESA, we model the evolution of dozens of WD binaries, each consisting of a WD and an MS companion, for different duty cycles. Our main results are summarised as follows. (1) The DIM plays an important role in influencing the mass growth of WDs in DNe. For \(M_{\rm WD,i}=0.6\), 0.7 and 1.1 \(M_{\odot}\), the maximum mass growths are 0.1, 0.13 and 0.21 \(M_{\odot}\), respectively. Therefore, the DIM in DNe could alleviate the WD mass problem to some extent. (2) Duty cycles are key factors in determining the mass growth of WDs. For a specific WD binary, there exists an efficient \(d\) at which the mass growth of the WD reaches a maximum. This efficient \(d\) is related to the initial parameters, including \(M_{\rm WD,i}\), \(M_{\rm d,i}\), and \(P_{\rm orb,i}\). In our simulated parameter space, for \(M_{\rm WD,i}=0.6\), 0.7, and 1.1 \(M_{\odot}\) the efficient duty cycles lie at 0.006 \(\leq\)\(d\)\(\leq\)0.007, \(d=0.005\) and \(d=0.003\), respectively. (3) For the same \(d\), a long initial orbital period or a high donor-star mass leads to a large mass growth of WDs. This is why King et al. (2003) favoured long-period DNe as an alternative channel for SNe Ia. Within our simulated parameter space, no WDs can grow their masses to the SNe Ia explosion mass of 1.38 \(M_{\odot}\), thus no SNe Ia occur. (4) The final donor-star masses and orbital periods are hardly affected by \(d\). The main reason is that the timescale of the outburst state is much shorter than the recurrence time, so the AML due to mass loss during unstable hydrogen and helium burning is much smaller than the contribution from MB. (5) The canonical duty cycle of \(d=0.01\) used in previous studies of the progenitors of SNe Ia (see also Wang et al., 2010) may overestimate or underestimate the effect of DIMs, and the latter seems more likely. A relatively small duty cycle of \(d=0.003-0.007\) is required in order to alleviate the WD mass problem. In the future, we plan to pursue observational studies of some DNe (e.g. EY Cyg 1954+3221 and RU Peg 2214+1242, Ritter & Kolb, 2003). These DNe possess relatively high WD masses; however, their outburst details are not well understood. We expect to constrain the duty cycles and some key parameters for these sources, and to reproduce their evolutionary history. ## Acknowledgements We are grateful to the referee, Professor Christopher Tout, for the valuable comments which helped improve this manuscript. This work was supported by the Natural Science Foundation of China under grant Nos. U2031116, 12273014, 12041301, 12121003, and U1838201. ## Data Availability All data underlying this article will be shared on reasonable request to the corresponding author.
2310.04830
Extract-Transform-Load for Video Streams
Social media, self-driving cars, and traffic cameras produce video streams at large scales and cheap cost. However, storing and querying video at such scales is prohibitively expensive. We propose to treat large-scale video analytics as a data warehousing problem: Video is a format that is easy to produce but needs to be transformed into an application-specific format that is easy to query. Analogously, we define the problem of Video Extract-Transform-Load (V-ETL). V-ETL systems need to reduce the cost of running a user-defined V-ETL job while also giving throughput guarantees to keep up with the rate at which data is produced. We find that no current system sufficiently fulfills both needs and therefore propose Skyscraper, a system tailored to V-ETL. Skyscraper can execute arbitrary video ingestion pipelines and adaptively tunes them to reduce cost at minimal or no quality degradation, e.g., by adjusting sampling rates and resolutions to the ingested content. Skyscraper can hereby be provisioned with cheap on-premises compute and uses a combination of buffering and cloud bursting to deal with peaks in workload caused by expensive processing configurations. In our experiments, we find that Skyscraper significantly reduces the cost of V-ETL ingestion compared to adaptions of current SOTA systems, while at the same time giving robustness guarantees that these systems are lacking.
Ferdinand Kossmann, Ziniu Wu, Eugenie Lai, Nesime Tatbul, Lei Cao, Tim Kraska, Samuel Madden
2023-10-07T14:38:43Z
http://arxiv.org/abs/2310.04830v1
# Extract-Transform-Load for Video Streams ###### Abstract. Social media, self-driving cars, and traffic cameras produce video streams at large scales and cheap cost. However, storing and querying video at such scales is prohibitively expensive. We propose to treat large-scale video analytics as a data warehousing problem: Video is a format that is easy to produce but needs to be transformed into an application-specific format that is easy to query. Analogously, we define the problem of Video Extract-Transform-Load (_V-ETL_). _V-ETL_ systems need to reduce the cost of running a user-defined _V-ETL_ job while also giving throughput guarantees to keep up with the rate at which data is produced. We find that no current system sufficiently fulfills both needs and therefore propose _Skyscraper_, a system tailored to _V-ETL_. _Skyscraper_ can execute arbitrary video ingestion pipelines and adaptively tunes them to reduce cost at minimal or no quality degradation, e.g., by adjusting sampling rates and resolutions to the ingested content. _Skyscraper_ can hereby be provisioned with cheap on-premises compute and uses a combination of buffering and cloud bursting to deal with peaks in workload caused by expensive processing configurations. In our experiments, we find that _Skyscraper_ significantly reduces the cost of _V-ETL_ ingestion compared to adaptions of current SOTA systems, while at the same time giving robustness guarantees that these systems are lacking. **PVLDB Artifact Availability:** The source code, data, and/or other artifacts have been made available at [https://github.com/ferdiko/vetl](https://github.com/ferdiko/vetl). ## 1. Introduction Every day, millions of video streams are produced by smartphones, TV stations, self-driving cars, dashcams, and CCTV cameras deployed in cities and office buildings. These video streams can offer great insights and enormous value in fields such as city planning, marketing, advertisement, smart retail, or autonomous driving. For example, city planners around Vancouver are currently facing the challenge of deciding where to place electric vehicle (EV) chargers. For that, they want to obtain data that tells them which points in the city are most commonly traversed by EVs. Most cities like Vancouver already installed hundreds to thousands of traffic cameras, which could be used to obtain such EV counts. The naive way of counting how many EVs pass by each camera is to store the video from all cameras and then run an object detection algorithm1 on the recorded video at query time. However, this approach has major disadvantages. First, storing the video requires outrageously large storage volumes. For example, one thousand traffic cameras roughly produce 230 TB of data every month.2 Storing one month's data on Amazon S3 would therefore cost $60,000 per year. Second, querying for trends or averages usually requires analyzing months to years of data, which leads to large query latencies. Even on modern GPUs, state-of-the-art computer vision (CV) models can only process a few frames per second. For example, processing one year of video with the YOLO object detector [62] takes six months on an AWS p3.2xlarge instance (with an NVIDIA Tesla V100 GPU). Third, naively applying CV techniques at such scales is prohibitively expensive for many applications. For example, naively running the YOLO object detector [62] to analyze a month of traffic data from 100 cameras costs $110,000 on AWS3. 
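A quick back-of-envelope check of the scale estimates above: the per-camera data rate and the six-months-per-year processing figure are taken from the text, while the S3 and p3.2xlarge prices are assumed list prices rather than figures from this paper.

```python
# Back-of-envelope check of the ingestion-scale estimates above.
gb_per_camera_day = 7.8            # per-camera data rate (Footnote 2)
cameras = 1000
tb_per_month = gb_per_camera_day * 30 * cameras / 1000
print(f"~{tb_per_month:.0f} TB of video per month")                        # ~230 TB

s3_dollar_gb_month = 0.023         # assumed S3 standard list price ($/GB-month)
print(f"~${tb_per_month * 1000 * s3_dollar_gb_month * 12:,.0f} per year "
      f"to store one month of video")                                      # ~$60k ballpark

# One year of video takes six months to process -> YOLO runs at ~2x real time here.
video_hours = 100 * 30 * 24        # a month of video from 100 cameras
gpu_hours = video_hours / 2.0
p3_2xlarge_dollar_hr = 3.06        # assumed AWS on-demand list price
print(f"~${gpu_hours * p3_2xlarge_dollar_hr:,.0f} to analyze it with YOLO")  # ~$110,000
```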
Footnote 1: In Canada (as in many other countries), EVs are especially easy to distinguish from other cars since they have green license plates. Footnote 2: One traffic camera feed in our experiments produces 7.8GB of data per day. To address the limitations of the naive approach, we propose to manage live video streams like in a data warehouse. Video is a format that is easy to produce but hard to query. A _video warehouse_ allows for efficient querying by converting incoming video into an intermediate format that is easy to query. This intermediate format is application-specific and contains the extracted entities of interest. In the EV example, it would contain car counts and types. Analogous to traditional data warehouses, we refer to the process of preparing the data for querying as Video Extract-Transform-Load (_V-ETL_). Video is _extracted_ from the cameras, _transformed_ into the intermediate format using CV, and _loaded_ into a query engine like a relational database system. This lets the user issue queries in SQL against tables with the extracted entities (e.g., obtaining the EV counts is a simple count query on a Detections table, where the detected car is an EV, grouped by the camera id). Video warehouses eliminate the storage problem since users may throw video away after extracting all entities of interest during ingestion. They also solve the query latency issue, since users can issue queries against the intermediate format and no expensive CV algorithm needs to be run at query time. However, video warehouses do not magically solve the cost problem, as the video still needs to be processed during the _V-ETL_ Transform step. Furthermore, video processing must happen at the rate at which the video is produced in order to achieve continuous ingestion. To address the challenges imposed by _V-ETL_, we built _Skyscraper_ which allows for cheap video ingestion while also adhering to throughput requirements. _Skyscraper_'s goal is to make the _V-ETL transform_ step more practical. It allows users to provision hardware resources according to their monetary budget and optimizes the quality of the extracted video entities on the given resources. Depending on the provisioned hardware, _Skyscraper_ reduces the work imposed by the _V-ETL_ job while degrading the result quality as little as possible. _Skyscraper_ does this by dynamically configuring knobs that are inherent to CV workloads. Examples of such knobs include the frame rate or the image resolution at which the video is processed, as well as further, application-specific knobs. Each of these knob represents a trade-off between work and result quality: Expensive knob configurations can reliably deliver good results, even for difficult inputs (e.g., many object occlusions); cheap configurations, on the other hand, only deliver good results on easy inputs (e.g. few occlusions, good lighting conditions etc.) but are prone to mistakes on difficult inputs. The content of real-world video streams is highly variable with frequent changes in how difficult it is to analyze the content (i.e., changes every few 10s of seconds). _Skyscraper_ saves work by using expensive knob configurations on difficult video segments and cheap configurations otherwise. Since _Skyscraper_ needs to process data on constrained hardware at a required throughput, _Skyscraper_ must configure the knobs not only based on the video content but also on the available hardware resources. 
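To make the notion of knobs and knob configurations concrete, the toy sketch below enumerates a small configuration space and a naive content-adaptive rule. The knob names, domains, and selection logic are purely illustrative; they are not Skyscraper's interface, whose tuning additionally accounts for the compute budget and a workload forecast as described in Sections 3 and 4.

```python
from itertools import product

# Illustrative knob domains: the frame rate and resolution at which the
# V-ETL job processes video.
knob_domains = {
    "fps":        [5, 15, 30],
    "resolution": [(640, 360), (1280, 720), (1920, 1080)],
}

# A knob configuration instantiates every knob with one value from its domain.
configs = [dict(zip(knob_domains, values))
           for values in product(*knob_domains.values())]

def relative_cost(cfg):
    """Rough relative work of a configuration: scales with pixels and frame rate."""
    w, h = cfg["resolution"]
    return cfg["fps"] * w * h / (30 * 1920 * 1080)

# Toy content-adaptive rule: difficult segments (e.g., many occlusions) get an
# expensive configuration, easy segments get the cheapest one.
def choose_config(segment_is_difficult):
    ranked = sorted(configs, key=relative_cost)
    return ranked[-1] if segment_is_difficult else ranked[0]

print(choose_config(True), choose_config(False))
```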
Industrial deployments for live video processing are typically provisioned with three types of resources [35]: a local compute cluster with a high-bandwidth connection to the video source, a video buffer, and cloud resources that may be used to rent on-demand cloud compute (to limit cloud costs, users typically want to set a cloud budget.) _Skyscraper_ leverages all three of these resource types: _Skyscraper_ itself runs on the local cluster and uses it to process video. To keep costs low, the local cluster is typically not provisioned to process the most expensive knob configurations in real-time. When it falls behind, _Skyscraper_ sets video in the buffer and, as the buffer starts to fill, offloads work to on-demand cloud workers. _Skyscraper_ must avoid prematurely using up buffer space and cloud credits in order to not run out of them when expensive knob configurations would have the greatest impact. _Skyscraper_ therefore forecasts the workload and rations compute resources with regard to future demand. To still be robust to unavoidable inaccuracies in the forecast, we propose to combine a predictive planning component with a reactive execution component, which lets _Skyscraper_ make tuning decisions while considering both, the future demand and the content that is actually streamed in the moment. Despite the need for predictive knob tuning, _Skyscraper_'s knob tuning decisions must impose a low overhead -- this is especially important in low-budget regimes, where large decision overheads would consume a significant portion of the compute resources. While prior content-adaptive knob tuners run additional CV operators to make tuning decisions [16; 40], _Skyscraper_ adapts to the video content only based on a user-defined quality metric (e.g., certainties commonly reported by CV models) that are extracted anyways when running the _V-ETL_ job. This allows _Skyscraper_ to make tuning decisions in under 0.5 ms on a single CPU core. Figure 1 shows an overview of how _Skyscraper_ processes the EV example workload. The user specifies user-defined functions (UDFs) that transform the video into the application-specific query format. In Figure 1, the user only defines two UDFs. The object detector UDF is responsible for detecting new cars, while the object tracker UDF is responsible for tracking cars as they move across the frame to avoid double counting them. Finally, the user registers the workload's tunable knobs. In the simple example, the user only defines a single knob that controls how frequently the object detector should be run. _Skyscraper_ optimizes the costly Transform step while the user code performs the Extract and Load steps. _Prior work._ While _Skyscraper_ is the first system to specifically address the challenge of _V-ETL_, there are several lines of work that are relevant to _Skyscraper_. We briefly highlight two of them here and refer to Section 6 for a more detailed discussion on related work. Table 1 shows the differences between _Skyscraper_ and these works. First, there is prior work on content-adaptive knob tuning, such as Chameleon [40] and Zeus [16]. These systems are designed to reduce the average processing time per frame while assuming that the provisioned hardware can always ingest video in real-time (even during peak workload). However, when ingesting video on cheaper machines that are not peak-provisioned, prior systems do not provide throughput guarantees and are therefore impractical for _V-ETL_. 
Adapting these systems to fulfill throughput requirements on cheap hardware is challenging, since they are agnostic to lag and the hardware resources they run on. We discuss this further in Section 6. Second, there is prior work on systems that use knob tuning to adapt to the current query load. VideoStorm [81] and VideoEdge [35] are designed for scenarios where users run a dynamic set of queries over video streams, which causes dynamic changes to the type and number of queries running. At times when many queries are running concurrently, not all queries may be able to run at maximum quality and in real time. VideoStorm and VideoEdge tune the queries' knobs such that the queries fulfill their quality and latency goals as well as possible. However, VideoStorm and VideoEdge only adapt to the query load (i.e., the queries present in the system) and are agnostic to the streamed content. This brings no benefit in scenarios where the query load is static. While we envision most _V-ETL_ applications to ingest video using a static set of processing jobs, VideoStorm might still be used if users dynamically redefine how to ingest video. In summary, our contributions are as follows: \(\bullet\) We define the problem of Video Extract-Transform-Load (_V-ETL_) and identify its importance. \(\bullet\) To make _V-ETL_ more practical, we propose _Skyscraper_, the first content-adaptive knob tuning system with throughput guarantees. _Skyscraper_ lets users provision compute resources according to their budget and optimizes the result quality on the given resources. \(\bullet\) To effectively ration compute resources over time, we propose a combination of predictive planning and reactive execution. \(\bullet\) We propose a tuning method that only relies on a user-defined quality metric which is extracted anyways when running the _V-ETL_ job. We find that this method allows for negligible tuning overheads. \(\bullet\) We conduct experiments on several real-world and synthetic workloads and find that _Skyscraper_ can achieve cost reductions up to 8.7\(\times\) over baselines on various workloads. \begin{table} \begin{tabular}{|p{56.9pt}|p{142.3pt}|p{142.3pt}|} \hline & **Adapt to video content** & **Adapt to query load** \\ \hline **Throughput guarantees** & _Skyscraper_ & VideoStorm, VideoEdge \\ \hline **No throughput guarantees** & Chameleon, Zeus & \\ \hline \end{tabular} \end{table} Table 1. _Skyscraper_ compared to other video knob tuning systems Figure 1. _Skyscraper_ optimizing the expensive _V-ETL_ Transform step of the EV counting example job. The blue components are provided by _Skyscraper_, the red ones are provided by the user. ## 2. Problem Definition and System Overview ### Problem definition Video Extract-Transform-Load (_V-ETL_) refers to extracting entities of interest from a video stream by processing it according to a user-defined specification and adhering to two constraints. First, _V-ETL_ systems must process video at the rate at which it arrives. A _V-ETL_ system may lag behind on processing but may only do so by a constant amount. In practice, this means that _V-ETL_ systems may use a fixed-size storage medium (i.e., buffer) to set video aside for later processing. Equation 1 states that the size of the buffered frames may not exceed the size of the buffer. 
\[out(t)\subseteq in(t)\ \wedge\ \sum_{F\in in(t)\ out(t)}size(F)\leq B\qquad\qquad \forall t \tag{1}\] where \(t\) is a timestamp, \(in(t)\) is the set of frames that the video source has produced at time \(t\), \(out(t)\) is the set of frames that the _V-ETL_ system has processed at time \(t\), \(size(F)\) is the size of frame \(F\) in bytes and \(B\) is the buffer size in bytes. Second, _V-ETL_ systems must process video at a budget that is defined by the user. This budget is provided as a dollar cost \(budget_{T}\) that may be spent over a given time interval \(T\). The processing cost over interval \(T\) encompasses all costs including average wear of hardware, cloud costs, etc. The summed cost of processing all frames in \(T\) must be below \(budget_{T}\): \(\sum_{F\in T}cost(F)\leq budget_{T}\). The combination of processing video at a required throughput while being constrained on computing resources makes for exciting optimization problems. _Skyscraper_ aims to maximize the overall result _quality_ by tuning workload-specific _knobs_ that are inherent to computer vision workloads (e.g., the frame rate or image resolution). In _Skyscraper_, the quality is user-defined and is measured and returned by the user code -- this lets _Skyscraper_ generalize to different workloads with different notions of quality. Users may further register arbitrary knobs together with a corresponding _knob domain_. The knob domain is a user-defined set of values that the knob may take (e.g. the knob domain for the frame rate knob might be {15 FPS, 30 FPS}). _Skyscraper_ dynamically configures registered knobs based on the streamed video content and maximizes the quality (e.g. accuracy) of the extracted entities while adhering to the _V-ETL_ requirements. Formally, a knob configuration \(k\) refers to an instantiation of each knob to a value in its domain. Some knob configurations induce more work than others. Similarly, some produce more qualitative results than others. However, the result quality of a knob configuration depends not only on the configuration but also on the video content. While a high image resolution may reliably produce good results, it may not always be needed as some content can also be accurately processed at a lower resolution. Let a _video segment_ denote a sequence of successive frames of the video (e.g., 2 seconds of video). We denote the quality that a knob configuration \(k\) achieves on a video segment \(s\) as \(qual(k,s)\). The optimization goal of _Skyscraper_ is to maximize the overall quality \(qual(u)\) of entities extracted from video \(u\), which is given by \(qual(v)=\sum_{s\in v}qual(k_{s},s)\)where \(k_{s}\) is the configuration used to process segment \(s\). ### System overview The following subsection gives a high-level overview of _Skyscraper_. Section 3 and Section 4 then provide a more detailed discussion of _Skyscraper_'s design. We focus on how _Skyscraper_ ingests a single video stream and show in Appendix D, how this approach can easily be generalized to multiple streams. _Design challenges._ To explain why _Skyscraper_ works the way it does, we present a simplistic, idealized approach to content-adaptive knob tuning with throughput guarantees, and show where this approach fails in practice. We then present the ideas that _Skyscraper_ uses to overcome the issues of the idealized approach. For now, we do not consider buffering or the scheduling of computation between on-premise resources and the cloud. 
Instead, we simply consider a computation budget \(budget_{T}\) on the number of arithmetic operations that we may use to ingest video produced during time period \(T\). We are further given a small set of knob configurations \(\mathcal{K}\) which allows us to process different segments of the video at different costs and qualities (see Section 2.1). We observe that the knob tuning system must speculate about the future content of the video in order to effectively ration \(budget_{T}\) over time. Otherwise, the system can not assess whether it is sensible to process content with an expensive knob configuration now or to save the budget for the future when expensive knob configurations might have a larger impact. Furthermore, we find that the effectiveness of different knob configurations often changes within seconds -- a content-adaptive knob tuning system should therefore reassess which configuration to use every couple of seconds. Now, suppose we have a forecasting function that can perfectly predict what quality each knob configuration achieves at any given time in the future. In this idealized world, we can easily build a system that achieves optimal performance: Our optimal system would slice time interval \(T\) into segments \(t_{i}\) of equal length, where each segment \(t_{i}\) is a few seconds long. The system then forecasts the quality that each knob configuration achieves on each segment \(t_{i}\in T\). Finally, given the forecasted qualities, optimizing the assignment of knob configurations to segments is an instance of the 0-1 knapsack problem, where the overall quality must be maximized under the given budget \(budget_{T}\) (more details in Appendix B). Unfortunately, we find that achieving good accuracy on this forecasting task is infeasible in the real world. To forecast the knob configurations' qualities for each \(t_{i}\in T\), our forecasting function needs to predict what happens at each second in the video, hours into the future. This is impossible since the precise timing of events is subject to substantial randomness. For example, it is impossible to predict the exact moment in which a large group of pedestrians will pass by a camera, hours into the future. To make our system work in the real world, we need to design a more practical forecasting task. We rely on two insights that guide the design of this new forecasting task. First, we observe that there are a few types of video content that characterize any of the videos seen throughout the live stream (e.g., rush hour traffic, normal traffic, low traffic). For the content of the same kind, each knob configuration produces results of similar quality. For example, for content with many occlusions (e.g., rush hour), knob configurations that cannot handle occlusions will always produce low-quality results. Second, we observe that, while it is impossible to predict _when_ certain content appears, it is possible to predict _how often_ it appears, _assuming the future video is distributed roughly as a recent historical video has been_. For example, while it is impossible to predict the _precise moments_ (i.e. the \(t_{i}\)'s) at which groups of pedestrian pass by the camera, it is possible to estimate _how often_ groups of pedestrians pass by the camera. We can now design a forecasting task where accurate predictions are feasible in practice. 
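To make the idealized baseline concrete before moving on, it could be written as the small routine below. This is our own illustration, not part of _Skyscraper_: the per-segment quality forecasts it takes as input are exactly what we just argued cannot be obtained in practice, and the greedy ratio heuristic merely stands in for an exact 0-1 knapsack solution; all names are assumptions.

```python
def idealized_plan(qualities, costs, budget):
    """Idealized per-segment assignment (NOT what Skyscraper does in practice).
    qualities[i][k] is a hypothetical forecast of the quality configuration k
    would achieve on segment t_i, costs[k] is the cost of running k on one
    segment, and budget caps the total cost over all segments."""
    n, m = len(qualities), len(costs)
    cheapest = min(range(m), key=lambda k: costs[k])
    choice = [cheapest] * n              # start with the cheapest config everywhere
    spent = costs[cheapest] * n          # (if even this exceeds budget, nothing is upgraded)
    while True:
        best_ratio, best = 0.0, None
        for i in range(n):               # greedy stand-in for the exact 0-1 knapsack
            for k in range(m):
                extra = costs[k] - costs[choice[i]]
                gain = qualities[i][k] - qualities[i][choice[i]]
                if extra > 0 and gain > 0 and spent + extra <= budget:
                    if gain / extra > best_ratio:
                        best_ratio, best = gain / extra, (i, k)
        if best is None:                 # no affordable upgrade improves quality
            return choice
        i, k = best
        spent += costs[k] - costs[choice[i]]
        choice[i] = k
```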
Based on the first insight (content falls into a few categories), we use a simple clustering mechanism to compute _content categories_ such that all streamed content falls into one of these categories. We construct them such that all knob configurations achieve a similar quality on the content of the same category (more details in Section 3). Then, based on the second insight (content distribution is predictable), we simply predict how often each content category appears within a time interval \(T\). For example, if our forecasting model thinks that 10% of the video in \(T\) shows rush-hour traffic, it would forecast 10% for the rush-hour category. In practice, we can achieve high forecasting accuracy on real-world workloads. Finally, we need to re-think how to use the forecast for knob tuning. Since we no longer forecast the qualities of individual segments \(t_{i}\), we cannot assign knob configurations the same way as in our idealized system. Instead, we can only assign knob configurations to content categories. Knowing how often each content category appears allows us precisely determine the overall cost of using a knob configuration to process the content of that category. In Section 4, we describe how this allows us to find the optimal assignment of knob configurations to content categories under a given budget and for a given forecast. Given this assignment, we then need to reactively determine what category the current content belongs to. Once we determine the category, we can simply look up and use the knob configuration we assigned to this category. Section 4.2 describes a simple method for determining the current content category, which runs fast and determines the correct category with high accuracy. In summary, we took a simplistic, idealized system and made it practical by re-designing the forecasting task. We then built an efficient system around it that can leverage this forecast for predictive knob tuning. _Skyscraper_ takes these ideas and implements them for real hardware provisionings. _Skyscraper walk-through._ Given these challenges imposed by content-adaptive knob tuning with throughput guarantees, we now give an overview on how _Skyscraper_ uses these ideas when provisioned with real hardware (i.e., with a local compute cluster, video buffer and cloud credits). _Skyscraper_ is split into an _offline learning phase_ and an _online ingestion phase_ as shown in Figure 2. Section 3 gives a detailed description of the offline phase and Section 4 gives a detailed description of the online phase. The offline phase is used to pre-compute invariant properties of the _V-ETL_ workload, which allow online ingestion at negligible overheads. To compute these properties, the user provides _Skyscraper_ with a small set (e.g. 5 minutes) of labeled data and a larger set (e.g. two weeks) of unlabeled data from the ingested video source. _Skyscraper_ uses this data to prepare online ingestion in three steps. First, _Skyscraper_ profiles different knob configurations on the provisioned on-premise hardware and cloud hardware. Each knob configuration corresponds to a directed acyclic graph (DAG) of UDFs. _Skyscraper_ profiles the cloud cost and runtime of different UDF placements -- executing some UDFs on the cloud may reduce the execution time (due to added parallelism) but increases the cloud cost. _Skyscraper_ filters out placements that do not lie on the cost-runtime Pareto frontier. 
Similarly, _Skyscraper_ filters out knob configurations that do not lie on the runtime-quality Pareto frontier. Appendix F discusses how _Skyscraper_ handles data movement between UDFs and between on premises and the cloud. Second, _Skyscraper_ uses the unlabeled data to construct the content categories as discussed under _Design challenges_. The content categories are constructed solely based on a quality metric that is measured and returned by the user code (e.g. certainty or errors commonly reported by CV models). By construction, the content categories discriminate between any content characteristic that affects the quality of at least one knob configuration. Constructing the content categories solely based on a user-defined quality metric lets _Skyscraper_ generalize across workloads since _Skyscraper_ doesn't need to understand the precise workings of the UDFs and how their performance is affected by pixel-level changes. Furthermore, dealing with low-dimensional quality vectors (e.g., 5-dimensional) allows _Skyscraper_ to run fast, which is almost impossible when dealing with high-dimensional image data (e.g., 750,000-dimensional). Third, _Skyscraper_ uses the unlabeled data to train the forecasting model. As in under _Design challenges_, the forecasting model forecasts how often each content category appears within a defined future time interval. This forecast is based on how frequently the content categories have appeared in the recent past. After the offline phase, each knob configuration is characterized by the quality it achieves on different content categories as well as the profiled runtimes and cloud costs when executing the knob configuration using different task placements. When optimizing video ingestion, _Skyscraper_ only considers the runtime of knob configurations together with the quality the knob configuration achieves on the current content category. This is sufficient to maximize the quality under throughput constraints and lets _Skyscraper_ agnostic to the UDFs. _Skyscraper_ periodically performs predictive _knob planning_ (e.g. every 2 days) and reactive _knob switching_ (e.g. every 2 seconds): Knob planning involves forecasting how often each content category Figure 2. Overview over all processing steps of _Skyscraper_. appears in the future (e.g. within the next 2 days) and assigning knob configurations to the content categories based on the forecast. Knob switching involves determining the content category of the current video content and looking up what knob configuration the planning phase assigned to that category. Based on the assigned knob configuration, the available buffer space, and the profiled runtimes, _Skyscraper_ then picks a knob configuration and task placement and uses it to process the next segment of video. _Processing example_ Figure 3 shows how the knob planner and knob switcher use the provisioned resources to achieve high-quality results when running the EV example workload on 24 hours of a traffic camera stream. The uppermost plot in Figure 3 shows how three different knob configurations (expensive, medium, cheap) achieve different result qualities. For the EV workload, the result quality is mainly affected by object occlusions (i.e., one car overlaps with another car). We observe that expensive configuration reliably produces high-quality results while the cheap one only produces high-quality results at night, when there is little traffic and few occlusions. 
The second plot in Figure 3 shows how _Skyscraper_'s dynamic knob switching changes the workload (in TFLOP per second). We can see that the workload is low during the night, when _Skyscraper_ frequently uses the cheap configurations, but high during the day, when _Skyscraper_ uses the expensive configurations. The data in Figure 3 is smoothed and hides that _Skyscraper_ switched 4500 times between knob configurations over the course of the plotted time period. If we instead always used the most expensive configuration, the workload would be constant at 5.2 TFLOP/s. The third plot in Figure 3 shows how _Skyscraper_ sets video aside in the buffer during the day when frequently running the expensive knob configuration. We can also see how _Skyscraper_ catches up on processing the buffered video at 5 PM, when the workload decreases. The buffer has a size of 4GB and is full at around 2 PM. When it is full, _Skyscraper_ decides to offload some work to the cloud, which is reflected by the rising amount of cloud credits spent in the bottom figure (note that the Y axis shows the percentage of the daily cloud budget that has been spent). We can see that _Skyscraper_'s spending comes close to what it planned for that day. ## 3. Offline Preparation Phase In the offline preparation phase, _Skyscraper_ is fitted on the historical video data recorded from the same source that will be ingested in the online phase. _Skyscraper_ needs a small set of labeled data (i.e., 20 minutes) and a larger set of unlabeled data (e.g., 2 weeks). Based on this data, _Skyscraper_ first leverages prior work (Beng et al., 2016; Kwan et al., 2017) to create a filtered set of knob configurations and a set of good task placements for them. Then, _Skyscraper_ clusters video content into categories, allowing _Skyscraper_ to reason about video content in the online phase. Furthermore, _Skyscraper_ trains a forecasting model to predict the frequency with which each content category appears in the near future. We describe these procedures in more detail as follows. ### Filter knob configurations and task placements In order to optimize video processing while incurring little decision overhead during online ingestion, _Skyscraper_ needs to decide which knob configuration \(k\) to use for processing the streamed content and which placement \(TP_{k}\) of its task graph \(G_{k}\) to use. Recall that the placement of \(G_{k}\) specifies which of its computation components to run on the cloud and which ones to run on-premises when using knob configuration \(k\). The number of all knob configurations is exponential in the number of user-registered knobs. Similarly, the number of all possible placements for a task graph is exponential in the number of tasks. _Skyscraper_ leverages prior work (Beng et al., 2016; Kwan et al., 2017) to filter the set of knob configurations and task placements down to a smaller set. Thereafter, _Skyscraper_ only needs to consider promising candidates in the online phase, reducing the size of the decision problem and therefore online overheads. We leverage the greedy hill climbing algorithm (Zhu et al., 2017) proposed in VideoStorm (Kwan et al., 2017) to filter the knob configurations. We use PlaceTo (Beng et al., 2016) to filter the task placements (details in Appendix A). ### Categorize video dynamics _Skyscraper_ discretizes video content into _content categories_ with the property that knob configurations achieve similar result quality for all video segments belonging to the same content category.
In this section, we describe how to identify these content categories and will discuss how to forecast them in Section 3.3 and how the categories allow for efficient video ingestion in Section 4. _Skyscraper_ categorizes video content using unlabeled training data. _Skyscraper_ first samples a set of video segments \(\mathcal{S}^{\prime}\) from the unlabeled data. _Skyscraper_ then processes each segment \(s\in\mathcal{S}^{\prime}\) with all configurations \(k\in\mathcal{K}\) and records the result quality that each \(k\) achieves on the segment \(s\) as \(\mathit{qual}_{s}(k)\). The result quality measurement is defined by the user and will be further discussed in Section 4. We group the qualities of all configurations \(k\) on a segment \(s\) into a \(|\mathcal{K}|\)-dimensional _quality vector_ \(\mathit{qual}_{s}=[\mathit{qual}_{s}(k_{1}),...,\mathit{qual}_{s}(k_{|\mathcal{K}|})]^{T}\). We gather the \(\mathit{qual}_{s}\) for all segments \(s\in\mathcal{S}^{\prime}\) to form a set of quality vectors \(\mathcal{Q}=\{\mathit{qual}_{s}\,|\,s\in\mathcal{S}^{\prime}\}\). Then, _Skyscraper_ decides the content categories \(\mathcal{C}\) by running KMeans (Kwan et al., 2017) on \(\mathcal{Q}\). Thereafter, the content is clustered according to the quality that the knob configurations achieve on it, ensuring by the property of KMeans that all knob configurations achieve similar result quality for the content of the same category. A content category \(c\in\mathcal{C}\) is therefore characterized by a \(|\mathcal{K}|\)-dimensional cluster center, which denotes the average quality that the knob configurations will achieve on content belonging to \(c\).

Figure 3. Running the EV workload over a traffic camera.

We denote the cluster center as \([\widehat{qual}(k_{1},c),...,\widehat{qual}(k_{|\mathcal{K}|},c)]\), where \(\widehat{qual}(k,c)\) is the average quality that \(k\) will achieve on videos categorized as \(c\). We evaluate choices for the number of categories (\(k\) of KMeans) in Appendix I.1 and find that _Skyscraper_ is not very sensitive to \(k\) as long as it is not too small (e.g. \(\geq 3\)). Furthermore, it is easy to tune such hyperparameters during the offline phase. ### Train the forecasting model _Skyscraper_ trains a forecasting model \(\mathcal{F}\) to predict how frequently each content category \(c\in C\) appears over a near-future time interval, given the categories' frequencies in the most recent history. \(\mathcal{F}\) allows _Skyscraper_ to effectively ration computational resources and optimally allocate them for the different video content categories to come. We denote the forecasted time interval as the _planned interval_. _Skyscraper_ uses a simple feed-forward neural network as forecasting model \(\mathcal{F}\). We find this to be sufficient and describe its architecture in Appendix K. Let \(r^{(T)}\) be the \(|C|\)-dimensional histogram representing the frequency with which each category \(c\in C\) appears over time interval \(T\). The output of \(\mathcal{F}\) is thus \(r^{(PI)}\), where \(PI\) is the planned interval. The input to \(\mathcal{F}\) is the content histograms of the most recently ingested data. We split the most recent time interval \(T_{input}\) into \(n\) equally-sized intervals \(T_{input}=[T_{1},..,T_{n}]\) and provide their category frequency histograms \([r^{(T_{1})},..,r^{(T_{n})}]\) as a time series of inputs to \(\mathcal{F}\).
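As a rough, self-contained sketch of these two offline steps (our own illustration: the KMeans clustering itself follows Section 3.2, but the function names, the use of scikit-learn, and the default of five categories are assumptions rather than details taken from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_categories(quality_vectors, n_categories=5):
    """Cluster the |K|-dimensional quality vectors of the sampled segments
    (Section 3.2). The fitted cluster centers hold, per category c, the
    average quality of every knob configuration k."""
    km = KMeans(n_clusters=n_categories, n_init=10, random_state=0)
    km.fit(np.asarray(quality_vectors))
    return km          # km.cluster_centers_[c, k] plays the role of the average qual(k, c)

def content_histograms(km, quality_vectors_per_interval):
    """Build the |C|-dimensional frequency histograms r^(T_1),...,r^(T_n)
    that are fed to the forecasting model F (Section 3.3)."""
    hists = []
    for q in quality_vectors_per_interval:        # one array of quality vectors per interval T_i
        labels = km.predict(np.asarray(q))        # content category of each segment in T_i
        hist = np.bincount(labels, minlength=km.n_clusters).astype(float)
        hists.append(hist / hist.sum())           # relative frequency of each category
    return np.stack(hists)                        # shape (n, |C|): the time-series input to F
```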
We evaluate choices of \(T_{input}\) and \(n\) in Appendix I.3 and find that _Skyscraper_ is not very sensitive to them as long as both \(T_{input}\) and \(n\) are reasonably large (i.e. \(T_{input}\) is a couple of days and is split into intervals of a couple of hours). _Skyscraper_ pre-trains \(\mathcal{F}\) in the offline phase using the unlabeled data, which we describe in detail in Appendix H. Furthermore, \(\mathcal{F}\) can be fine-tuned in the online phase using the recently ingested data to provide more accurate forecasting. ## 4. Online video ingestion After completing the offline learning phase, _Skyscraper_ is ready to ingest live video streams. During live ingestion, _Skyscraper_ uses both a predictive component (_knob planner_) and a reactive component (_knob switcher_) to make knob tuning decisions. The predictive knob planner periodically forecasts trends in the video content and lets _Skyscraper_ make knob tuning decisions with the future workload in mind. This allows _Skyscraper_ to put the provisioned compute resources to optimal use and prevents premature use of buffer space and cloud credits, making use of expensive knob configurations when they have the greatest impact. However, while it is possible to forecast long-term trends in the content, the exact short-term occurrence of content is subject to substantial noise. Thus, _Skyscraper_ also uses a reactive knob switcher that switches between knob configurations based on the current content. The knob switcher presents a way to leverage the forecasted workload trends while being robust to short-term noise. In the following section, we describe the algorithms used for both the knob planner and the knob switcher. ### Knob planner The knob planner computes a _knob plan_ that specifies which knob configurations \(k\in\mathcal{K}\) to use for each content category \(c\in\mathcal{C}\) to maximize the overall result quality given the available compute resources. This assignment of knob configurations to categories is based on the forecasted _content distribution_, which specifies how frequently each content category will appear over the forecasted interval. Recall from Section 3.3 that we refer to this interval as the _planned interval_. We find that accurate forecasts can be achieved a couple of days into the future and consequently re-compute the knob plan every couple of days using a fresh forecast. Formally, the knob plan generates a histogram \(\alpha_{c}\) over knob configurations \(\mathcal{K}\) for each content category \(c\in C\). \(\alpha_{c}\) determines how often a knob configuration \(k\in\mathcal{K}\) should be used for processing content of category \(c\), i.e., there is one bucket in the histogram for each knob configuration, indicating the relative frequency with which that configuration should be chosen for the content category. Let \(\alpha_{k,c}\) denote the frequency that histogram \(\alpha_{c}\) assigns to knob \(k\in\mathcal{K}\) (i.e., how often knob \(k\) should be used to process the content of category \(c\)). A knob plan \(\mathcal{P}\) is thus defined as the set containing the histograms for all content categories: \(\mathcal{P}=\{\alpha_{c}\mid c\in C\}\). Finding a knob plan that maximizes the result quality under the compute budget involves jointly optimizing the histograms for all content categories. Each category's histogram determines the total resource consumption for processing content of the category, which in turn determines how many resources are available for the remaining categories.
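This joint optimization is small enough to hand to an off-the-shelf LP solver. As a rough illustration (our own sketch, anticipating Equations 2-4 below; the SciPy-based formulation and all names are assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

def plan_knobs(avg_qual, cost, r, budget):
    """Compute the knob plan alpha[k, c]. avg_qual is a |K| x |C| matrix of
    per-category average qualities (the KMeans cluster centers), cost is a
    |K| vector of per-use costs, r is the forecasted |C| vector of category
    frequencies, and budget is the compute budget for the planned interval."""
    K, C = avg_qual.shape
    # Objective (Eq. 2): maximize sum_{k,c} alpha[k,c] * r[c] * avg_qual[k,c];
    # linprog minimizes, so negate the coefficients. Variables are flattened
    # row-major, i.e. alpha[k, c] sits at index k * C + c.
    obj = -(avg_qual * r[None, :]).ravel()
    # Budget constraint (Eq. 3): sum_{k,c} alpha[k,c] * r[c] * cost[k] <= budget.
    A_ub = (cost[:, None] * r[None, :]).ravel()[None, :]
    b_ub = np.array([budget])
    # Normalization (Eq. 4): for every category c, sum_k alpha[k, c] = 1.
    A_eq = np.zeros((C, K * C))
    for c in range(C):
        A_eq[c, c::C] = 1.0
    b_eq = np.ones(C)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise ValueError("budget too small for even the cheapest plan")
    return res.x.reshape(K, C)
```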
_Skyscraper_ creates a knob plan in two steps. First, the knob planner uses the pre-trained model \(\mathcal{F}\) from the offline phase to forecast how often each content category will appear over the planned interval (the ratio \(r_{c}\) described in Section 3). Second, using the forecasted content ratios \(r_{c}\), _Skyscraper_ formulates the assignment of knobs to content categories as a linear program. This allows _Skyscraper_ to find the globally optimal knob plan \(\mathcal{P}\). _Skyscraper_ maximizes the expected overall result quality using the content category cluster centers computed in the offline phase. As described in Section 3, each content category \(c\in C\) is defined by a KMeans cluster center, which is a vector whose \(i\)-th element denotes the average quality \(\widehat{qual}(k_{i},c)\) that knob configuration \(k_{i}\) achieves on the content of category \(c\). Given the average quality of each knob configuration for each content category, the solution of the linear program maximizes the overall expected quality while being constrained by the compute budget _budget_. Footnote 4: The unit of the compute budget is given in \(core*s\) using the on-premise server cores. _Skyscraper_ internally takes care of converting the user-defined cloud credits budget. maximize \[\sum_{k,c}\alpha_{k,c}*r_{c}*\widehat{qual}(k,c) \tag{2}\] subject to \[\sum_{k,c}\alpha_{k,c}*r_{c}*cost(k)\leq budget \tag{3}\] \[\sum_{k}\alpha_{k,c}=1,\quad\alpha_{k,c}\geq 0\qquad\forall c \tag{4}\] The decision variables of the linear program are \(\alpha_{k,c}\), which determine how often the content of category \(c\) should be processed by configuration \(k\) and thereby make up the knob plan. The goal of the knob plan is to maximize the overall result quality, which is denoted by Line 2. Line 3 denotes that the total cost must stay below the user-specified budget. Finally, Line 4 enforces that the assigned ratios \(\alpha_{k,c}\) add up to 1 for each content category (this is merely for normalization). We use an off-the-shelf solver (Kosner et al., 2017) which is able to find the solution to this linear program in less than a second for the problem sizes encountered by _Skyscraper_. After finding the optimal value for the decision variables \(\alpha_{k,c}\), we have the knob plan \(\mathcal{P}\), which tells us how often to use each knob \(k\) to process the content of category \(c\) in order to achieve maximum quality given the constrained computing resources. In Section 4.2, we show how \(\mathcal{P}\) can be leveraged to efficiently switch between knob configurations. ### Knob switcher Based on the current video content, the knob switcher reactively determines which knob configuration \(k_{next}\in\mathcal{K}\) to use and which tasks of \(k_{next}\)'s task graph \(G_{k_{next}}\) to execute on the cloud and which tasks to execute on-premises. The knob switcher is designed to be lightweight and doesn't induce significant decision overheads, even when run frequently. It decides on the next knob configuration \(k_{next}\) and task placement \(p_{next}\) in three simple steps: First, it determines the category \(c_{curr}\in\mathcal{C}\) that the current content belongs to. Second, it looks up content category \(c_{curr}\) in the knob plan to obtain the configuration histogram \(\alpha_{c_{curr}}\) that the knob plan assigns to \(c_{curr}\).
Third, the knob switcher picks knob configuration \(k_{next}\) based on \(\alpha_{c_{curr}}\) along with a task placement \(p_{next}\) -- the knob switcher hereby guarantees to never overflow the buffer. In the following, we describe how the knob switcher performs each of these steps in more detail. In the first step, the knob switcher determines the category \(c_{curr}\) of the current content merely using the reported quality \(qual^{*}(k_{curr})\) of the current knob configuration \(k_{curr}\). This allows the knob switcher to select a category in a low-overhead way, rather than running an expensive processing step on the video directly. Specifically, given \(qual^{*}(k_{curr})\), the knob switcher selects the current content category \(c_{curr}\) as the one whose average quality for \(k_{curr}\) (\(\widehat{qual}(k_{curr},c_{curr})\)) matches the currently reported quality (\(qual^{*}(k_{curr})\)) the closest. The average quality \(\widehat{qual}(k_{curr},c)\) of \(k_{curr}\) for a category \(c\in\mathcal{C}\) is given by \(c\)'s cluster center (see Section 3.2). This is denoted by Equation 5. \[c_{curr}=\operatorname*{argmin}_{c\in\mathcal{C}}\big|\widehat{qual}(k_{curr},c)-qual^{*}(k_{curr})\big| \tag{5}\] Note that the knob switcher's content classification is analogous to traditional classification with KMeans but only uses one vector dimension, since the other dimensions are unavailable at runtime. This works well in _Skyscraper_'s case because the content of different categories will induce different result qualities for all knob configurations. As a result, the quality of one knob configuration is sufficient to discriminate between content categories. We experimentally verify this in Section 5.6. In the second step, the knob switcher then looks up the derived content category \(c_{curr}\) in the knob plan \(\mathcal{P}\). This yields a histogram \(\alpha_{c_{curr}}\) dictating how often each knob configuration \(k\in\mathcal{K}\) should be used to process the content of the current category \(c_{curr}\). In the third step, the knob switcher determines the knob configuration \(k_{next}\) that will be used for processing the newly arriving content, together with a task placement \(p_{next}\) that determines which tasks of \(k_{next}\)'s task graph to execute on the cloud and which ones to execute on-premises. The knob switcher tries to adhere as closely to the planned histogram \(\alpha_{c_{curr}}\) as possible and therefore keeps a histogram \(\widehat{\alpha}_{c}\) for each \(c\in\mathcal{C}\), which denotes how frequently each knob configuration has actually been used to process the content of category \(c\). To adhere as closely to the knob plan as possible, the knob switcher picks the knob configuration \(k_{next}\) that currently lags furthest behind its planned frequency, thereby keeping \(\widehat{\alpha}_{c_{curr}}\) close to \(\alpha_{c_{curr}}\). This is denoted by Equation 6. Finally, the knob switcher picks a placement \(p_{next}\) for \(k_{next}\). _Skyscraper_ picks the cheapest placement of \(G_{k_{next}}\) that does not overflow the buffer. \[k_{next}=k_{i}\;\;\text{with}\;\;i=\operatorname*{argmax}_{1\leq i\leq|\mathcal{K}|}\left(\alpha_{c_{curr}}\left[i\right]-\widehat{\alpha}_{c_{curr}}\left[i\right]\right) \tag{6}\] It is worth noting that there is an edge case when picking the task placement \(p_{next}\): Some knob configurations do not possess task placements that run in real-time, even when heavily adding cloud compute.
Reasons for this include limited bandwidth to the cloud, high round trip times to the cloud, and limited opportunities for adding parallelism to the DAG execution. If all placements of \(k_{next}\) would make _Skyscraper_'s buffer overflow, the knob switcher will choose a different configuration \(k^{\prime}_{next}\) to be the next one. This knob configuration \(k^{\prime}_{next}\) is the next less qualitative one compared to \(k_{next}\). Like for \(k_{next}\), the knob switcher will pick the cheapest placement of \(k^{\prime}_{next}\) that does not overflow the buffer. If all placements of \(k^{\prime}_{next}\) would overflow the buffer, the knob switcher will recursively apply this procedure of picking the next less qualitative knob configuration until it finds a configuration and task placement that do not overflow the buffer. In summary, the knob switcher uses three steps to find a knob configuration \(k_{next}\in\mathcal{K}\) along with a task placement \(p_{next}\) while adding little runtime overheads to the ingestion process. The knob switcher tries to adhere as closely to the knob plan \(\mathcal{P}\) as possible, only deviating from the knob plan when this is required to avoid a buffer overflow. This ensures that the knob switcher maximizes the result quality with the given resources. ## 5. Evaluation We evaluate _Skyscraper_ on several real-world applications, covering public health monitoring, traffic planning, and social media analysis. We describe these workloads in subsection 5.2. Then, we evaluate _Skyscraper_ on the following aspects: 1. What cost savings does _Skyscraper_ achieve versus using a static knob configuration? 2. How much do cloud bursting and buffering individually contribute to cost savings in different quality regimes? When do they perform well and when don't they? 3. How much decision overhead does _Skyscraper_ impose at different scales? 4. How accurate are knob planner and knob switcher, and what effect do inaccuracies have on _Skyscraper_'s end-to-end performance? We further evaluate different hyperparameter choices of _Skyscraper_ in Appendix I (e.g., number of content categories (KMeans clusters), periodicity of running the knob switcher, and more). We hereby find that _Skyscraper_'s end-to-end performance is insensitive to many of the hyperparameters as long as they are within a reasonable range. ### Implementation We implement _Skyscraper_ in Python on top of Ray (Ray, 2017). We instantiate several Ray actors for both the on-premise and the cloud version of each UDF. The number of duplicate actors is based on the number of logical cores of the machine. We only map UDFs to Ray actors; all of _Skyscraper_'s components run in the parent process and synchronize the calls to the actors. We discuss implementation choices in more detail in Appendix N. We use AWS Lambda [69] to run UDFs in the cloud and provision 3GB of memory for each cloud function. To simulate incoming video streams in real time, we read video frames from the disk and pause appropriately between frames to guarantee 30 fps streaming rate. All workloads are compute-bound and we find in Appendix, that in our experiments decode only amounted to 5% of the overall runtime. The streamed video is encoded in H.264 [64] and has a resolution of 1280 x 720 (HD). In our experiments, each frame is decoded when arriving in the system (as part of the user code). When sending full or partial frames to the cloud, we compress them to JPEG-1 format [14]. 
We then serialize the JPEG using Base64 [38] and send the string as part of an HTTPS request. The overhead for encoding and decoding is negligible compared to the transfer time saved through compression. ### Workloads We evaluate _Skyscraper_ using three workloads on public health monitoring, traffic planning, and social media analysis. They cover a diverse set of computer vision primitives including object detectors, trackers, and classifiers, as described below. _COVID-19 safety measures (COVID)_ During the coronavirus pandemic, decision-makers have executed several safety measures to slow down the spread of the virus. Such measures include wearing facial masks and social distancing. Measuring where and how strictly people adhere to these measures can be used for decision-making and informing people at risk. The COVID workload consists of a YOLOv5 object detector [62] to detect pedestrians and a KCF tracker [31] to track the detected pedestrians ("detect-to-track"). After the detection, for each detected pedestrian, the workload employs homography [18] to measure the pedestrian's distance from others. This workload contains the following knobs: 1) _frame rate_ at which video is processed ({30FPS, 15FPS, 10FPS, 5FPS, 1FPS}), 2) _object detection rate_ to run object detector (every {1, 5, 30, 60} frames) and 3) _tiling for object detection_ that slices the frames into ({1x1, 2x2}) tiles. The detailed semantics of these knobs are provided in Appendix J. The workload is executed on an 8-day video stream of a busy shopping street in Tokyo.5We measure quality in terms of the number of people detected and tracked over time as YOLO has a low false positive rate and KCF trackers reliably report tracking errors. Footnote 5: The Koen-Dovi street in the Shibuya district. [https://youtu.be/aLQR-nsEME](https://youtu.be/aLQR-nsEME) _Multi-object tracking (MOT)_ Multi-object tracking (MOT) is a key primitive in many video analytical pipelines. In this workload, we adopt the recent state-of-the-art TransMOT [15] tracker on MOT benchmark [22] and introduce four tunable knobs: 1) _frame rate_ (every {1, 5, 30, 60} frames), 2) _number of tiles_ ({1x1, 2x2} tiling), 3) _length of history_ denoting the number of historical frames ({1, 2, 3, 5}) as the TransMOT input, and 4) _model size_ ({small, medium, large}) that specifies different parameter sizes of the pre-trained TransMOT. The details of TransMOT and its tunable knobs are provided in Appendix J. We run MOT on a stream of a traffic intersection, Shibuya in Tokyo to track pedestrians for 8 days. MOT's processing quality is defined as the sum of tracked pedestrians weighted by the model's reported certainty. With this quality metric, we want to evaluate how _Skyscraper_ maximizes model certainty as a proxy for accuracy as proposed in prior work [55, 63]. _Multi-modal opinion sentiment and emotion intensity (MOSEI)_ This workload is synthetic and simulates a video stream analysis application on Twitch. The number of incoming streams varies over time and mimics the number of live Twitch streams over two days.6 We further introduce two types of spikes to evaluate _Skyscraper_ under difficult conditions: Footnote 6: As recorded by Twitch Tracker at [https://twitchtracker.com/statistics/active-streamers](https://twitchtracker.com/statistics/active-streamers) \(\bullet\)_MOSEI-HIGH_: We introduce high but short peaks in workload, consisting of 62 concurrently incoming video streams. This makes cloud bursting difficult due to bandwidth limitations. 
\(\bullet\)_MOSEI-LONG_: We introduce a long peak of continuous workload. In this case, the buffer alone cannot handle all the extra work. We use the CMU-MOSEI [7] dataset to simulate incoming video streams, as it has ground truth labels that allow us to train the models used in the workload. It contains various talking head videos from YouTube. The task of the MOSEI workload is to classify the opinion sentiment of the speaker using both the audio and the visual content. CMU-MOSEI provides extracted features from the video with ground-truth labels. We trained a neural network on CMU-MOSEI's training set and used its test set to evaluate _Skyscraper_. The MOSEI workload contains four knobs: 1) _frame rate_, 2) _frequency of sentiment analysis_, i.e., the sentiment analysis model may run once every {1, 2, 3, 4, 5, 6, 7} sentences of the spoken audio and video, 3) _model size_ of the sentiment analysis model, and 4) the _number of streams_ to analyze. Further details about this dataset, the entity extraction DAG, and the tunable knobs are presented in Appendix J. We evaluate the processing quality as the sum over the ingested streams, weighted by the model's reported certainty. Due to space limitations, we describe the hyperparameters of _Skyscraper_ for all four workloads in Appendix I. ### Cost efficiency In this section, we evaluate the end-to-end cost savings that _Skyscraper_ achieves on these workloads. We hereby compare _Skyscraper_ to two baselines. The Static baseline processes the video streams statically using the same knob configuration throughout the stream. The Chameleon* baseline refers to an adapted version of Chameleon [40]. We equip Chameleon with a buffer and adapt it to set video aside when the provisioned hardware cannot process it in real-time. This allows Chameleon to achieve cost savings, since it doesn't need to be provisioned to handle peak workload. However, Chameleon* is not practical and may easily crash, as its lack of throughput guarantees may lead to buffer overflows. We benchmarked Chameleon* on several hardware setups and only report the numbers where it didn't crash during the benchmark. For each system, we report the overall result quality that the system achieves on different hardware setups. Since we do not have access to a wide range of compute servers, we use Google Cloud VM instances as the provisioned, always-on hardware ("on-premise servers"). In the case of _Skyscraper_, which additionally uses AWS Lambda, we have verified that the bandwidth and latencies from the Google Cloud VMs to AWS Lambda realistically reflect the ones of commodity on-premise setups. In our experiments, we consider the following Google Cloud machines: * e2-standard-4: 4 vCPUs, 16 GB memory, 0.14 USD/h * e2-standard-8: 8 vCPUs, 32 GB memory, 0.27 USD/h * e2-standard-16: 16 vCPUs, 64 GB memory, 0.54 USD/h * e2-standard-32: 32 vCPUs, 128 GB memory, 1.07 USD/h * c2-standard-60: 60 vCPUs, 240 GB memory, 2.51 USD/h While these instance types do not possess hardware accelerators (e.g., GPUs), we note that there is nothing fundamental about _Skyscraper_ that would prevent users from using hardware other than CPUs. If a user provisions _Skyscraper_ with a server that contains hardware accelerators, the application's UDFs would need to make sure that the hardware accelerators are used when executing the UDF. In the offline phase, _Skyscraper_ will then just measure the UDF's runtime and work normally without any modifications.
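For reference, the frame hand-off to the cloud workers described in Section 5.1 boils down to a compress-encode-post sequence per frame. The sketch below is our own illustration: the endpoint URL, the payload keys, and the use of OpenCV and `requests` are assumptions, not details from the paper.

```python
import base64
import cv2        # OpenCV, used here for JPEG encoding (assumption)
import requests   # plain HTTPS client (assumption)

LAMBDA_URL = "https://example.execute-api.us-east-1.amazonaws.com/infer"  # hypothetical endpoint

def offload_frame(frame, udf_name):
    """Compress a decoded frame to JPEG, Base64-encode it, and post it to a
    cloud function over HTTPS; returns the cloud UDF's JSON output."""
    ok, jpeg = cv2.imencode(".jpg", frame)    # lossy compression cuts transfer time
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    payload = {
        "udf": udf_name,                                            # which UDF to run (our convention)
        "frame": base64.b64encode(jpeg.tobytes()).decode("ascii"),  # Base64-serialized JPEG
    }
    resp = requests.post(LAMBDA_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```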
Figure 4 visualizes the cost of processing the workloads from Section 5.2 with each system. On average, content category changes occurred every 42s for COVID, every 43s for MOT, every 30s for MOSEI-HIGH, and every 24s for MOSEI-LONG. However, all workloads had some periods with very frequent category changes and others with few category changes. Table 2 in Appendix C further provides the numeric measurements depicted in Figure 4. The total cost of each system is derived from the cost of renting the cloud hardware. In Appendix L, we estimate that the same amount of computing costs 1.8\(\times\) more when using a Google Cloud VM than when using a provisioned on-premise server (this estimate is high and in favor of the baselines). Thus, the total cost of all systems is given by the cost of renting the Google Cloud VMs divided by 1.8 plus the cost of the AWS Lambda workers. _Summary._ Overall, _Skyscraper_ offers significantly better cost-quality trade-offs than current approaches. _Skyscraper_'s performance benefits are especially large on the MOT workload: _Skyscraper_ is 8.7\(\times\) cheaper than the static baseline at a comparable quality. Furthermore, _Skyscraper_ is 3.7\(\times\) cheaper than Chameleon* at a better quality. Chameleon* suffered from large profiling overheads. For the COVID and MOT workloads, our results are comparable to what the authors report in the Chameleon paper (2-3\(\times\) speedup over the static baseline at the highest quality level). For the MOSEI workloads, the profiling overheads were especially large since the expensive knob configurations cause large amounts of work. ### Ablation study To evaluate how much buffering and cloud bursting individually contribute to the cost savings, we run an ablation study where we independently disable them. Running this ablation study on real (unsimulated) hardware is prohibitively expensive (i.e., we would need to conduct dozens of measurements like the one in Figure 4), so we instead analyze simulated results. We use a simple but accurate simulator, which we describe in Appendix M.1. We evaluate the accuracy of the simulator on the benchmarked workloads in Appendix M.2 and find that it is reasonably accurate. We use two metrics to evaluate the performance of _Skyscraper_: (1) **The monetary cost** of processing the workload. We hereby also evaluate _Skyscraper_ for different cost ratios between the on-premise and the cloud computing. In Appendix L, we estimate that a ratio of 1:1.8 between on-premises and AWS Lambda is realistic at the current market prices. When evaluating the monetary cost, we evaluate four variations of _Skyscraper_: (1a) _No buffering, no cloud:_ We disable both buffering and cloud bursting. Effectively, this corresponds to not switching knob configurations and only using the highest-quality knob configuration that runs in real time on the given on-premise server. (1b) _Only buffering: Skyscraper_ may only use placements that place every task on-premise and cannot use the cloud. (1c) _Only cloud: Skyscraper_ may use the cloud but not the buffer. (1d) _Buffering & cloud:_ This corresponds to standard _Skyscraper_. (2) **The amount of work** measured in \(\mathit{core}*\mathit{seconds}\) used in the processing. This is independent of whether the computation is buffered or executed on the cloud or on premises. When evaluating the amount of work, we compare _Skyscraper_ to two baselines: (2a) _Static:_ This baseline corresponds to statically using the same knob configuration.
It is similar to baseline (1a), where _Skyscraper_ also statically uses the same configuration. (2b) _Skyscraper_: We measure the amount of work that _Skyscraper_ performs for processing the workload. (2c) _Optimum:_ The optimum baseline fully leverages the ground truth to always choose the optimal knob configuration. Specifically, given the performance of each knob configuration beforehand, it uses the greedy 0-1 knapsack approximation to choose knob configurations that maximize quality under a given budget. Figures 6, 8, 10, 12 show the cost-quality trade-off curves for the COVID, MOT, MOSEI-HIGH, and MOSEI-LONG workloads. Figures 7, 9, 11, 13 show the work-quality trade-off curves. For the COVID and MOT workloads, _Only cloud_ and _Only buffering_ alone can achieve significant speed-ups over the baseline. For both workloads, when combining the two (_Buffering & cloud_), peak quality can be roughly reached at 1.5\(\times\) less cost than when only buffering or only using the cloud for a cost ratio of 1.8:1.

Figure 4. Cost-quality trade-off of _Skyscraper_, Chameleon*, and statically using the same knob throughout ingestion.

For a 5:2 cost ratio, _Only cloud_ performs significantly worse, because offloading work to the cloud incurs a very high cost. For a 1:1 cost ratio, _Only cloud_ matches the performance of _Buffering & cloud_, as using cloud resources has the same cost as the on-premises computations. For the MOSEI workloads, we can see how _Only buffering_ and _Only cloud_ struggle to deliver good performance for MOSEI-HIGH and MOSEI-LONG, respectively. However, we observe that _Buffering & cloud_ delivers good performance on both. The reason for the bad performance of _Only cloud_ on MOSEI-HIGH is bandwidth limitations that limit the number of social media streams that can be offloaded to the cloud. The reason for the bad performance of _Only buffering_ on MOSEI-LONG is that the buffer gets filled early on, which prevents _Skyscraper_ from using expensive knob configurations for the remaining duration of the long workload peak. Finally, the work-quality curves (Figures 7, 9, 11, and 13) show that _Skyscraper_'s work reduction method performs astonishingly close to the optimum. Only for the MOSEI-LONG workload (Figure 13) does _Skyscraper_ leave large room for improvement. _Summary._ To a certain extent, the buffering and cloud bursting optimizations are complementary to each other. Specifically, the gain from adding the second optimization on top of the first is smaller than the gain from adding the first one alone; cloud bursting therefore lessens the need for buffering and vice versa. However, _Skyscraper_ can still achieve 1.5\(\times\) cost savings in the COVID and MOT workloads over only one of the two methods. Furthermore, the MOSEI workloads show that buffering and cloud bursting struggle for different kinds of workload spikes. By combining the two, _Skyscraper_ can achieve good performance for both kinds of spikes. ### Runtime overheads Appendix E evaluates the runtime of the offline phase. For the COVID workload, the overall runtime was 1.6 hours on two c2-standard-60 machines. 83% of the time was spent creating the training data for the forecasting model, which is embarrassingly parallel and can be sped up when using more machines. _Skyscraper_'s _knob planner_ and _knob switcher_ add overheads to the online execution time. In this section, we evaluate their runtimes for different numbers of placements, content categories, and knob configurations.
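To see why these overheads are small, recall that one knob-switcher decision (Section 4.2) reduces to a nearest-centroid lookup and two small histogram operations. The sketch below is our own illustration of Equations 5 and 6; all names are assumptions, tracking actual usage with raw counts is a simplification, and the placement choice and buffer-overflow check are omitted.

```python
import numpy as np

def switch_knob(reported_qual, k_curr, centers, plan, counts):
    """One knob-switcher decision. centers[c, k] is the average quality of
    configuration k on category c (the KMeans cluster centers), plan[c] is
    the planned histogram alpha_c, and counts[c, k] records how often k has
    actually been used on content of category c so far."""
    # Step 1 (Eq. 5): the current category is the one whose average quality
    # for the running configuration is closest to the quality just reported.
    c = int(np.argmin(np.abs(centers[:, k_curr] - reported_qual)))
    # Step 2: look up the planned histogram for that category.
    alpha = plan[c]
    # Step 3 (Eq. 6): pick the configuration lagging furthest behind its plan.
    total = counts[c].sum()
    used = counts[c] / total if total > 0 else np.zeros_like(alpha)
    k_next = int(np.argmax(alpha - used))
    counts[c, k_next] += 1        # record the choice for future decisions
    return c, k_next
```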
All runtime measurements are performed on a single core of the Intel Xeon Gold 6130 CPU running at 2.10GHz. The worst-case runtime of the _knob switcher_ is linear in the total number of placements (for all knob configurations). This worst case is achieved when the knob switcher needs to iterate through all configuration-placement pairs until it finds one that does not overflow the buffer (see Section 4). The left plot in Figure 12 shows the worst-case runtime as the dashed line and the average runtimes of the _knob switcher_ for the COVID, MOT, and MOSEI experiments. The _knob planner_ conducts an inference pass through a small neural network and solves a linear program. For the linear program, the number of variables is \(|\mathcal{C}|*|\mathcal{K}|\) and the number of constraints is \(1+2*|\mathcal{C}|\), where \(\mathcal{C}\) denotes the number of content categories and \(\mathcal{K}\) is the number of knob configurations. The right image in Figure 13 uses the heat map to visualize the overheads caused by the _knob planner_ for different amounts of content categories and knob configurations. This image also shows the actual runtime of _knob planner_ on the three workloads. _Summary._ For common problem such as the COVID, MOT, and MOSEI workloads, the overheads of both the _knob switcher_ and _knob planner_ are negligible. While the _knob switcher_ runs every few seconds, its runtime is typically below a millisecond. Similarly, the _knob planner_ typically runs every few days but with a runtime below a second. We also show that the runtime overhead of our optimization is reasonable for more complicated workloads. ### Microbenchmarks This subsection evaluates how accurately _Skyscraper_'s forecasting model \(\mathcal{F}\) can predict the future content distribution and how sensitive _Skyscraper_'s performance is to inaccuracies in the forecast. Similarly, the subsection evaluates the accuracy at which the knob switcher classifies the video content into a content category \(c\in C\) and how sensitive _Skyscraper_'s performance is to misclassifications. In our evaluation, we focus on the real-world workloads COVID and MOT. The MOSEI workloads are synthetically created by inducing workload spiking patterns as described in Section 5.2. While these workloads present especially difficult spiking patterns for buffering and cloud bursting, the forecasting model achieves 100% accuracy and the knob switcher particularly high performance due to the regularity and smoothness of their workload peaks. We therefore do not evaluate them in terms of accuracy in this subsection. _Forecasting model_ We evaluate the forecasting model on 8 days of test data after training it on 16 days of unlabeled training data. We train and evaluate the forecasting model on four different lengths of the planned interval: {1, 2, 4, 8} days. As described in Section 4.1, the length of the planned interval determines the frequency of running knob planner and how long \(\mathcal{F}\) needs to forecast into the future. We find that for both workloads, _Skyscraper_'s forecasting method achieves a low Mean Absolute Error (MAE) when forecasting 1 to 4 days into the future. We denote the MAE values in Appendix L3. For both workloads, the lowest MAE was achieved when forecasting 2 days into the future, while the largest MAE was incurred when doing so for 8 days. There is a sweet spot on how far to forecast into the future but this sweet spot is unrelated to the frequency of content category changes. 
The high MAE when forecasting 8 days into the future shows that forecasting over very long time intervals is hard because events far in the future become increasingly uncorrelated with the current events, which the forecast is based on. On the other hand, forecasting over too short time periods also leads to higher MAEs: The streamed video content is always subject to a certain amount of randomness (e.g. a large group of people randomly walking past a camera). Over large enough time intervals, this randomness is smoothed out, which makes the forecast more precise; when this smoothing effect is not achieved, errors due to unforeseen randomness become noticeable in the MAE of the predictions. Figure 14 shows the impact of the prediction errors in terms of end-to-end performance. For comparison, we additionally run _Skyscraper_ using the ground truth content distributions instead of forecasting. For planned interval lengths between 1 and 4 days, _Skyscraper_'s performance is very close to the optimal performance using the ground truth predictions. However, for both workloads _Skyscraper_ performs significantly worse for a planned interval length of 8 days. _Knob switcher_ As described in Section 4.2, it is possible that the knob switcher misclassifies video content into the wrong content category. We identify two reasons for such misclassifications. First, the knob switcher classifies content based on the quality of one knob configuration. This corresponds to KMeans classification, where a vector is classified using only one dimension instead of all. We denote misclassifications that occur because of this as _Type-A errors_. Second, the knob switcher determines the current content category based on the past couple of seconds of the video. It will then switch to a knob configuration that is used for processing the next couple of seconds of video, which creates a time mismatch. The last couple of seconds might belong to a different content category than the next couple of seconds. We denote errors caused by this timing mismatch as _Type-B errors_. Distinguishing between these two errors lets us gain insight into where performance losses come from, which could be used for further enhancements of _Skyscraper_. In Figure 15, we denote the standard knob switcher as described in Section 4.2 as _Standard_ and compare it against two baselines: _Ground truth_, which denotes _Skyscraper_ using the ground-truth content categories, and _No Type-B errors_, which denotes a baseline that partially uses the ground truth to eliminate errors of Type-B. Specifically, it determines the content category using _Skyscraper_'s standard approach but on the data of a future couple of seconds (i.e., it knows how the current knob configuration would perform in the next couple of seconds without executing it). Like this, only errors of Type-A impede the performance of the _No Type-B errors_ baseline, which shows their impact on _Skyscraper_'s end-to-end performance.

Figure 14. The effect of different planned interval lengths on _Skyscraper_'s end-to-end performance.

Figure 13. Overheads: knob switcher (<1ms) and planner (<1s).
Figure 15 shows that the knob switcher's misclassifications have a negative impact on _Skyscraper_'s end-to-end performance when using the _Standard_ knob switcher. The misclassification rate of _Standard_ is 2.1% on COVID and 6.6% on the MOT workload. However, the performance of the _No Type-B errors_ baseline almost matches the _optimum_. This suggests that the remaining Type-A errors barely impede the overall performance. These errors constitute 0.5% of the knob switcher's error rate on COVID and 3.7% on the MOT workload. _Summary._ The microbenchmarks provide two insights. First, when forecasting between 1 and 4 days into the future, _Skyscraper_'s forecasting method is accurate and does not significantly harm end-to-end performance when compared to using the ground truth as forecast. However, when forecasting further into the future (e.g., 8 days), the forecasts become less accurate, which has a noticeable effect on _Skyscraper_'s end-to-end performance. Second, misclassifications of the knob switcher negatively impact _Skyscraper_'s performance. We hereby identify the timing mismatch as the sole driver of the performance losses. This timing mismatch occurs because the knob configuration used to process the next couple of seconds is chosen based on the content of the last couple of seconds. ## 6. Related Work The cost problem of video processing has previously been recognized [65, 66]. While we are not aware of past research which manages video streams like in a data warehouse, several systems propose end-to-end solutions for managing archived collections of video like in a relational database system [9, 20, 28, 41, 42, 47, 53, 78, 79]. Likewise, we are not aware of past work that directly addresses the _V-ETL_ problem, but there are several lines of work on efficient video processing that are relevant to _Skyscraper_. _Content-adaptive knob tuning systems._ Content-adaptive knob tuning systems aim at saving computational work by dynamically adjusting knobs that are inherent to CV workloads to the video stream's content. Chameleon performs content-adaptive knob tuning for general CV workloads [40]. However, Chameleon assumes that each knob configuration can be run in real-time on the provisioned hardware resources ("peak provisioning"). Chameleon then minimizes the average processing time per frame. As discussed in Section 1, such systems cannot deliver cost savings while also adhering to throughput guarantees, which is required in the _V-ETL_ problem. Zeus is another content-adaptive knob tuning system [16], but cannot be used for general-purpose _V-ETL_, as it is specific to action detection (e.g., detect someone crossing the street). _Query-load-adaptive knob tuning systems._ Instead of adapting to the streamed content, some systems tune the knobs of a CV workload solely based on the concurrently running queries (while being agnostic to the streamed content). These systems are useful in scenarios where users issue dynamic queries over video streams, which requires the system to dynamically multiplex compute resources among the queries. VideoStorm [81] and VideoEdge [35] go beyond dynamic resource allocation and also tune the queries' knobs based on the other queries that are concurrently running. However, in scenarios where the query load remains static, there is no benefit in dynamically adapting to the query load.
In _V-ETL_, a constant set of jobs is used to ingest the video streams. In contrast to VideoStorm and VideoEdge, _Skyscraper_ therefore dynamically adapts to changes in the video content instead of the query load. _Streaming ETL._ Treating data warehouse ingestion as a stateful stream processing problem is an established approach [25], which is successfully used in many big data applications [56]. Like _Skyscraper_, traditional streaming ETL is also concerned with maintaining data quality while handling fluctuating workloads without peak provisioning. This is typically achieved through methods like back pressure or load shedding, which mitigate workload peaks arising from fluctuating volumes of arriving data [72]. However, in _V-ETL_, data often arrives at constant volume, and only the content of the data changes. In contrast to traditional streaming systems, _Skyscraper_'s optimizations therefore focus on adapting to the content of the streamed data and not to its volume. _General-purpose cloud offloading._ Several works have previously explored the idea of offloading work from an on-premise server to on-demand cloud workers [1, 17, 19, 26, 37, 48, 50, 83]. These works assume that jobs occasionally arrive and these jobs may be executed locally or offloaded to the cloud. However, these works only optimize the placement of work and do not reduce work by means like knob tuning, which is _Skyscraper_'s main optimization. _Task-specific computer vision optimizations._ Several works optimize the application of CV for specific tasks and queries. While these methods cannot be used to optimize arbitrary _V-ETL_ jobs, they can be used inside _Skyscraper_'s UDFs to further reduce cost. General methods to improve the efficiency of neural networks include model compression [27, 49], compact neural architectures, [36, 51, 62], and knowledge distillation [6, 32, 44, 74]. Further works propose efficient CV primitives that are query-aware or content-adaptive [5, 10, 11, 13, 39, 44, 76]. Finally, some works reduce processing costs of certain video queries by intelligently skipping frames [8, 30, 33, 43, 45, 46, 54, 58, 80]. ## 7. Conclusion In this paper, we defined the problem of _V-ETL_ for transforming video streams to a queryable format through expensive ML-based video processing DAGs. In response, we introduced _Skyscraper_, which uses content-adaptive knob tuning to reduce the cost of the _V-ETL_ Transform step while adhering to _V-ETL_'s throughput requirements on constrained hardware resources. _Skyscraper_ supports conversions to arbitrary query formats. ## Acknowledgments We thank the Data Systems and Artificial Intelligence Lab (DSAIL) for supporting this work. Figure 15. End-to-end performance of knob switcher against baselines that leverage ground truth for content classification
2303.04908
Goal-oriented Policies for Cost of Actuation Error Minimization in Wireless Autonomous Systems
We consider the minimization of the cost of actuation error under resource constraints for real-time tracking in wireless autonomous systems. A transmitter monitors the state of a discrete random process and sends updates to a receiver over an unreliable wireless channel. The receiver takes actions according to the estimated state of the source. For each discrepancy between the real state of the source and the estimated one, we consider a different cost of actuation error. This models the case where some states, and consequently the corresponding actions to be taken, are more important than others. We provide two algorithms: one reaching an optimal solution but of high complexity, and one providing a suboptimal solution but with low complexity. The performance of the two algorithms is quite close, as shown by simulations.
Emmanouil Fountoulakis, Nikolaos Pappas, Marios Kountouris
2023-03-08T21:54:02Z
http://arxiv.org/abs/2303.04908v1
# Goal-oriented Policies for Cost of Actuation Error Minimization in Wireless Autonomous Systems ###### Abstract We consider the minimization of the cost of actuation error under resource constraints for real-time tracking in wireless autonomous systems. A transmitter monitors the state of a discrete random process and sends updates to a receiver over an unreliable wireless channel. The receiver then takes actions according to the estimated state of the source. For each discrepancy between the real state of the source and the estimated one, we consider a different cost of actuation error. This models the case where some states, and consequently the corresponding actions to be taken, are more important than others. We provide two algorithms, a first one reaching an optimal solution but of high complexity, and a second low-complexity one that provides a suboptimal solution. Our simulation results evince that the performance of the two algorithms are quite close. ## I Introduction Emerging cyber-physical and real-time autonomous systems are envisioned to introduce various applications and services, in which information distilled from measurements or observations is valuable when it is fresh, accurate, and useful to the specific goal of the data exchange. In this context, a relevant yet challenging problem is that of remote real-time tracking and actuation driven by sampled and potentially delayed measurements transmitted over a wireless channel using limited resources. Conventional communication system design has mainly remained agnostic to the _significance_ of transmitted messages, in particular at the physical and medium access layers. The optimization of system performance has been dominated by metrics such as throughput, delay, and packet drop rate. Although these performance metrics have turned out to be instrumental for enabling reliable and efficient communication, they fall short of differentiating the packets according to their information content and its value. A recently developed metric, named Age of Information (AoI), has been proposed to measure the freshness and the timeliness of information [1, 2, 3]. However, baseline AoI-based metrics do not take into account the source evolution and the significance of the generated information with respect to the communication task/goal and the context. Several variants of AoI have been proposed for tackling the problem of remote estimation in status update systems [4, 5, 6, 7]. Nevertheless, the aforementioned works do not consider the _cost of actuation error_, as they mainly focus on the discrepancy between the source and the estimated value of the process at the destination. A recently proposed approach, which is also adopted in this paper, takes into account the semantics of the information, i.e., significance, goal-oriented usefulness, and contextual importance of information as a means to leverage the synergy between data generation and processing, information transmission, and signal reconstruction [8, 9, 10, 11]. In this work, we consider the problem of real-time tracking and estimation of an information source from a remote actuator. A transmitter samples and sends information about the state of a source in the form of status update packets to a remote actuator (receiver) over an unreliable wireless channel. The actuator takes actions depending on the estimated state of the remote source. We also consider that the transmitter has limited resources, which prevents it from sampling and transmitting updates continuously. 
This paper extends the results of [12, 13], where the problem of remote monitoring of a discrete Markov source is considered and semantics-empowered policies are proposed to significantly reduce both the real-time reconstruction and the cost of actuation errors, as well as the amount of ineffective updates. Specifically, we consider a more general discrete stochastic source process and resource constraints, which make the solution essentially different. The problem is formulated as a Constrained Markov Decision Process (CMDP), and two _goal-oriented semantic-aware_ policies are proposed. A key takeaway is that it is optimal for the transmitter to remain silent even if there is a discrepancy between the actual state of the source and its estimate at the receiver, due to the delay induced by the wireless channel, the high transition probability, and the large actuation error. ## II System Model We consider a time slotted communication system in which a transmitter monitors a discrete random process and sends status updates to a receiver over an error-prone wireless channel. Let \(t\in\mathbb{Z}_{>0}\) denote the \(t\)-th slot. The receiver operates as a remote actuator and performs actions according to the estimated state of the source. The state of the process is modeled by a Discrete-Time Markov Chain (DTMC) \(\left\{X_{t}\right\}_{t\in Z_{>0}}\) and is assumed to be ergodic. The state of the source takes values from the set \(\{0,1,\ldots,N\}\), where \(N\in\mathbb{Z}_{>0}\). Each state corresponds to a specific action that has to be performed by the actuator. The channel realization is denoted by \(h_{t}\), and is equal to \(1\) if a packet is successfully received at time slot \(t\) and \(0\) otherwise. The success probability is defined as \(p_{s}=\Pr(h_{t}=1)\), and the failure probability as \(p_{f}=\Pr(h_{t}=0)=1-p_{s}\). For every successful transmission, the receiver updates its information regarding the state of the source with a new estimate denoted by \(\hat{X}_{t}\). The receiver sends an acknowledgement (ACK)/negative ACK for successful/failed transmissions. We assume that ACK/NACK information is sent and received instantaneously and error free. If the receiver does not successfully receive an update, it uses its previous estimate as the current one, i.e., \(\hat{X}_{t+1}=\hat{X}_{t}\). We consider that the sampling and transmission processes take a time slot to be performed. Therefore, the receiver receives an update from the transmitter with one slot delay, if a transmission is successful at time slot \(t\), and the actuator updates its state at slot \(t+1\). The transmitter generates a status update \(X_{t}\) by sampling the source _at will_. The decision to sample and transmit at time slot \(t\) is denoted by \(\alpha_{t}\), where \[\alpha_{t}=\begin{cases}1,\text{ if the source is sampled and its state transmitted,}\\ 0,\text{ otherwise.}\end{cases} \tag{1}\] ### _Performance metrics_ We consider that the actuator (receiver) takes actions according to the estimated state of the source. If the estimated state is different from the real state of the source, an actuation error occurs depending on a pre-defined function. The cost of actuation error captures the _significance_ (semantics) of the error at the point of actuation. Note that some errors may have a larger or a more critical impact than others. 
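To make the system model of Section II concrete, the following minimal sketch (not part of the original paper) simulates the sampling/transmission loop: a DTMC source, a decision \(\alpha_t\), an erasure channel with success probability \(p_s\), the one-slot update delay, and a per-slot actuation cost drawn from the cost matrix \(C_{i,j}\) that is formalised just below. All numeric values and the example policy are placeholder assumptions.

```python
import numpy as np

def simulate(P, C, policy, p_s=0.7, c=1.0, T=100_000, seed=0):
    """Simulate the remote tracking loop of Section II.

    P      : (N+1)x(N+1) transition matrix of the DTMC source {X_t}.
    C      : (N+1)x(N+1) actuation-error costs, C[i, j] = cost when X_t = i and Xhat_t = j.
    policy : callable (X_t, Xhat_t) -> alpha_t in {0, 1}.
    p_s    : probability Pr(h_t = 1) that a transmitted update is received.
    c      : cost per sampling-and-transmission action.
    Returns (time-averaged actuation cost, time-averaged transmission cost).
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    X, Xhat = 0, 0                       # source state and receiver-side estimate
    sum_C, sum_c = 0.0, 0.0
    for _ in range(T):
        sum_C += C[X, Xhat]              # cost of actuation error in this slot
        alpha = policy(X, Xhat)          # sampling/transmission decision alpha_t
        sum_c += alpha * c
        delivered = alpha == 1 and rng.random() < p_s
        next_Xhat = X if delivered else Xhat   # a successful update takes effect at t+1
        X = rng.choice(n, p=P[X])        # source evolves to X_{t+1}
        Xhat = next_Xhat
    return sum_C / T, sum_c / T

# Tiny illustrative instance (all numbers are placeholders, not from the paper).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
C = np.array([[0.0, 5.0],
              [1.0, 0.0]])
baseline = lambda X, Xhat: 1 if X != Xhat else 0   # transmit on any discrepancy
print(simulate(P, C, baseline))
```

The helper accepts any decision rule, so the same loop can later be driven by the policies derived in Section IV.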
Let \(C_{i,j}\) denote the cost of being in state \(i\) at the source, and in state \(j\), estimated at the receiver, at time slot \(t\), i.e., \(X_{t}=i\) and \(\hat{X}_{t}=j\). We assume that the costs \(C_{i,j}\) are given and remain the same over the time horizon. Furthermore, for every sampling and transmission actions, we consider a cost \(c\). This cost can represent, for instance, the power consumption for both sampling and transmission procedures. ## III Problem Formulation The objective of this work is to minimize the average total cost of actuation error under average resource constraints. The expected time averages of the transmission and actuation costs are defined as \[\bar{c}\triangleq\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E }\left\{\alpha_{t}c\right\}\text{, }\bar{C}\triangleq\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\left\{C_{i,j}^{t}\right\}\text{,} \tag{2}\] respectively. To this end, we formulate our stochastic optimization problem as \[\min_{\pi}\quad\bar{C}^{\pi}\text{, s t, }\text{, }\bar{c}^{\pi}\leq c_{\text{ max}}\text{,} \tag{3}\] where \(\pi\) is the policy that decides the rule of selecting the right value \(\alpha_{t}\) at every time slot \(t\), and \(c>0\) is the time-averaged cost constraint. The problem in (3) is a CMDP. The system state is described by tuple \(S_{t}=(X_{t},\hat{X}_{t},C_{t})\), actions \(\alpha_{t}\in\mathcal{A}\), where \(\mathcal{A}=\{0,1\}\), and the transition matrix is described by \(P_{i,j}=\Pr\left\{X_{t+1}=j|X_{t}=i\right\}\). We assume that the transmitter has knowledge of the channel and source statistics. ## IV Proposed Algorithms In this section, we provide two optimization algorithms for solving problem (3) optimally and suboptimally. ### _Optimal Solution_ The problem in (3) is a CMDP, which is, in general, difficult to be solved [14]. In order to solve the constrained problem, we relax the constraints in (3) by utilizing Lagrangian multipliers. We show that this approach can provide the optimal solution. We define the _Lagrangian_ function as \[\mathcal{L}(\pi,\lambda)=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}_{\pi}\left\{C_{t}+\lambda\alpha_{t}c\right\}-\lambda c\text{,} \tag{4}\] where the immediate cost is \(f(S_{t})=C_{t}+\lambda\alpha_{t}c\). In order to proceed with the solution in (3), we consider the following optimization problem \[\min_{\pi\in\Pi}\mathcal{L}(\pi,\lambda)\text{,} \tag{5}\] for any given \(\lambda\geq 0\)1. Since \(\lambda c\) is independent of the chosen policy \(\pi\), the problem in (5) is equivalent to the following optimization problem Footnote 1: For \(\lambda=0\), one may expect that the optimal policy is to always transmit because the sampling and the transmission processes are costless. However, our simulation results show that this is not always optimal even for cost-free transmissions. \[\min_{\pi\in\Pi}h(\lambda,\pi)=\min_{\pi\in\Pi}\lim_{T\rightarrow\infty}\frac {1}{T}\mathbb{E}^{\pi}\left(\sum_{t=0}^{T-1}C_{t}+\lambda\alpha_{t}c\right) \text{.} \tag{6}\] A policy that achieves \(\mathcal{L}^{*}(\lambda)\) is called \(\lambda\)-optimal, denoted by \(\pi_{\lambda}^{*}\), and is a solution to the following optimization problem \(\min_{\pi_{\lambda}}\mathcal{L}(\pi,\lambda)\). Since the dimension of the state space \(\mathcal{S}\) is finite, the _growth condition_[14, Eq. 11.21] is satisfied. In addition, the immediate cost function is bounded below (\(\geq 0\)). 
Since these conditions are satisfied, the optimal value of the CMDP problem in (3), \(\bar{C}_{\pi}^{*}\), and the optimal value of the (5), \(\mathcal{L}^{*}(\lambda)\), ensure the following relation [14, Corollary 12.2] \[\bar{C}_{\pi^{*}}=\sup_{\lambda\geq 0}\mathcal{L}^{*}(\lambda)\text{.} \tag{7}\] **Theorem 1** (Mixture of two randomized policies).: _[_15_, Theorem 4.4]_ _The optimal policy \(\pi^{*}\) is a mixture of two deterministic policies \(\pi_{\lambda^{-}}^{*}\), \(\pi_{\lambda^{+}}^{*}\)._ The optimal policy is written symbolically as \(\pi^{*}=\eta\pi_{\lambda^{-}}^{*}+(1-\eta)\pi_{\lambda^{+}}^{*}\), where \(\eta\) is a probabilistic factor. We characterize \(\eta\), \(\lambda^{-}\), and \(\lambda^{+}\), later in this paper. We now proceed to find the solution to the problem (5). To obtain the optimal policy of an infinite horizon average cost Markov Decision Process (MDP), it is sufficient to solve the following Bellman equation [16] \[\theta_{\lambda}+V(S_{t})=\] \[\min_{\alpha_{t}\in\mathcal{A}}\left\{C_{a_{t}}+\lambda\alpha_{t} \mathcal{C}+\sum_{S_{t+1}\in\mathcal{S}}P_{S_{t},S_{t+1}}V(S_{t+1})\right\}, \tag{8}\] where \(\theta_{\lambda}\) is the optimal value of (6), for a given \(\lambda>0\), and \(V(S_{t+1})\) is the _cost-to-go_ or _value function_. This is known to be a challenging problem [16]. We apply the value iteration algorithm and the bisection method to solve the problem and to find the optimal Lagrange multiplier, respectively. The detailed steps are provided in Algorithm 12. Footnote 2: There is no closed form expression for \(\eta\)[17], thus we numerically search for \(\eta\in[0,1]\). ``` 1Initialization: \(\lambda\gets 0\), \(\lambda_{-}\gets 0\), \(\lambda_{+}\) large positive number, and \(\epsilon>0\) 2Run VI(\(\lambda\)) 3if\(\bar{c}\leq c_{\text{max}}\)then 4\(\pi^{*}\leftarrow\pi_{\lambda}^{*}\) 5else 6while\(|\lambda_{+}-\lambda_{-}|>\epsilon\)do 7Run VI(\(\frac{\lambda_{+}-\lambda_{-}}{2}\)) 8if\(\bar{c}\geq c_{\text{max}}\)then 9\(\lambda_{-}\leftarrow\lambda\) 10else 11\(\lambda_{+}\leftarrow\lambda\) 12\(\lambda^{*}\leftarrow\frac{\lambda_{+}+\lambda_{-}}{2}\), \(\lambda_{+}^{*}\leftarrow\lambda_{-}\), \(\lambda_{+}^{*}\leftarrow\lambda_{-}\) 13VI(\(\lambda^{*}\)) 14if\(\bar{c}=c_{\text{max}}\)then 15\(\pi^{*}=\pi_{\lambda}^{*}\) 16else 17\(\pi^{*}=\eta\pi_{\lambda_{-}^{*}}+(1-\eta)\pi_{\lambda_{+}^{*}}\) ``` **Algorithm 1**Value Iteration Algorithm ### _Suboptimal low-complexity algorithm_ Although the value iteration algorithm is proven to converge to the optimal solution, it suffers from high computational complexity, known as the _curse of dimensionality_[18]. Our goal is to provide a low-complexity algorithm that guarantees that the average cost constraints are satisfied and which provides a solution close to the optimal one. Using tools from Lyapunov optimization theory, we provide a real-time algorithm named Drift-Plus-Penalty (DPP). 
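For concreteness, a minimal implementation sketch of Algorithm 1 is given below before we detail the low-complexity alternative: relative value iteration on the Lagrangian MDP of (6) over states \((X_t,\hat{X}_t)\), wrapped in a bisection over \(\lambda\). It is only a sketch under our reading of the model (a successful update takes effect one slot later); the constraint is checked by Monte-Carlo simulation rather than exact policy evaluation, and the final mixture of the two \(\lambda\)-optimal deterministic policies from Theorem 1 is omitted.

```python
import numpy as np
from itertools import product

def lagrangian_vi(P, C, lam, p_s, c, iters=2000):
    """Relative value iteration for the unconstrained problem (6).

    States are pairs (i, j) = (source state, receiver estimate); actions are
    alpha in {0, 1}; the immediate cost is C[i, j] + lam * alpha * c.
    Returns a greedy deterministic policy, policy[i, j] in {0, 1}.
    """
    n = P.shape[0]
    V = np.zeros((n, n))
    Q = np.zeros((n, n, 2))
    for _ in range(iters):
        for i, j in product(range(n), range(n)):
            stay = P[i] @ V[:, j]                                   # alpha = 0
            move = P[i] @ ((1 - p_s) * V[:, j] + p_s * V[:, i])     # alpha = 1
            Q[i, j, 0] = C[i, j] + stay
            Q[i, j, 1] = C[i, j] + lam * c + move
        V = Q.min(axis=2)
        V = V - V[0, 0]            # relative VI: pin a reference state
    return Q.argmin(axis=2)

def avg_tx_cost(P, policy, p_s, c, T=200_000, seed=1):
    """Monte-Carlo estimate of the time-averaged sampling/transmission cost."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    X, Xhat, total = 0, 0, 0.0
    for _ in range(T):
        alpha = int(policy[X, Xhat])
        total += alpha * c
        Xhat_next = X if (alpha and rng.random() < p_s) else Xhat
        X = rng.choice(n, p=P[X])
        Xhat = Xhat_next
    return total / T

def algorithm1(P, C, p_s, c, c_max, lam_hi=100.0, eps=1e-2):
    """Bisection over the Lagrange multiplier, as in Algorithm 1 (without the
    final probabilistic mixing of the two lambda-optimal policies)."""
    policy = lagrangian_vi(P, C, 0.0, p_s, c)
    if avg_tx_cost(P, policy, p_s, c) <= c_max:
        return policy
    lam_lo = 0.0
    while lam_hi - lam_lo > eps:
        lam = (lam_lo + lam_hi) / 2
        policy = lagrangian_vi(P, C, lam, p_s, c)
        if avg_tx_cost(P, policy, p_s, c) > c_max:
            lam_lo = lam
        else:
            lam_hi = lam
    return lagrangian_vi(P, C, lam_hi, p_s, c)
```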
We reformulate the problem in (3), and we define the objective function \(g(t)\) as \[g(t)=\] \[\left\{\left(\sum\limits_{k=i}^{N}C_{k,j}P_{i,k}(1-p_{s})+\sum \limits_{k=i}^{N}C_{k,i}P_{i,k}p_{s}\right)\text{, if }\alpha_{t}=1\text{,}\right.\] \[\left.\left\{\left(\sum\limits_{k=i}^{N}C_{k,j}\mathcal{P}_{i,k} \right)\text{, otherwise.}\right.\right.\] The expected time average of the objective function is defined as \(\bar{g}\triangleq\lim\sup\limits_{T\rightarrow\infty}\frac{1}{T}\sum\limits_{ i=1}^{T}\mathbb{E}\left\{g(t)\right\}.\) The reformulated stochastic optimization problem is the following \[\min_{\alpha_{t}}\quad\bar{g},\quad\text{s. t., }\bar{c}\leq c_{\text{max}}. \tag{10}\] In order to satisfy the average cost constraints, we map the average cost constraint in eqrefoptproblemLyapunov into a virtual queue [19]. We show below that the time average cost problem is transformed into a queue stability problem. Let \(\{Z(t)\}\) be the virtual queue associated with constraint (10). The virtual queue is updated at every time slot \(t\) as \[Z(t+1)=\max[Z(t)-c,0]+\alpha_{t}c. \tag{11}\] Process \(\{Z(t)\}\) can be viewed as a virtual queue with arrivals \(\alpha_{t}\) and service rate \(c\). This idea is based on the fundamental _Lyapunov drift_ theorem [20]. With the above definitions in mind, we can now proceed to describe our proposed algorithm and provide performance guarantees regarding the average cost constraint. **Lemma 1**.: _If \(Z(t)\) is rate stable3, then the constraint in (10) is satisfied._ Footnote 3: A discrete time process \(Q(t)\) is _rate stable_ if \(\lim\limits_{t\rightarrow\infty}\frac{Q(t)}{t}=0\), with probability 1. Proof.: By using the basic sample property [19][Lemma 2.1, Chapter 2], we obtain: \[\frac{Z(t)}{t}-\frac{Z(0)}{t}\geq\frac{1}{t}\sum\limits_{\tau=0}^{t-1}c\alpha_{ t}-\frac{1}{t}\sum\limits_{\tau=0}^{t-1}c. \tag{12}\] If \(Z(t)\) is rate stable, then \(\lim\limits_{t\rightarrow\infty}\frac{Z(t)}{t}=0\). By taking the time average expectations in (12) on both sides, we obtain the result. In order to stabilize virtual queue \(Z(t)\), and therefore by Lemma 1 to satisfy the average cost constraints, we first define the Lyapunov function as \(L(Z(t))\triangleq\frac{1}{2}Z^{2}(t)\) and the Lyapunov drift as \[\Delta(Z(t))\triangleq\mathbb{E}\left\{L(Z(t+1))-L(Z(t))|Z(t)\right\}. \tag{13}\] The above conditional expectation is with respect to the random source state transitions, channel states, and transmission decisions. We apply the DPP algorithm to minimize the time average expected cost while stabilizing the virtual queues, \(Z(t)\). Specifically, this approach seeks to minimize an upper bound on the following expression \[\Delta(Z(t))+W\mathbb{E}\left\{g(t)\right\}, \tag{14}\] where \(W>0\) is an importance factor to scale the penalty. By utilizing \((\max\left[Q-b,0\right]+A)^{2}\leq Q^{2}+A^{2}+b^{2}+2Q(A-b)\), we get the following upper bound on the expression in (14) \[\Delta(Z(t))+W\mathbb{E}\left\{g(t)\right\}\] \[\leq B+W\mathbb{E}\left\{g(t)\right\}+\mathbb{E}\left\{Z(t)(c \alpha_{t}-c_{\text{max}})\right\}, \tag{15}\] where \(B<\infty\), and \(B\geq\frac{(\alpha_{t}c_{\text{max}})^{2}+c_{\text{max}}^{2}}{2}\). ### _Drift-Plus Penalty Algorithm_ At every time slot \(t\), the transmitter observes the state of the source \((X_{t})\) and the estimated state at the destination \((\bar{X}_{t})\), and it takes a decision according to the following optimization problem \[\min_{\alpha_{t}}\quad Wg(t)+Z(t)(c\alpha_{t}-c_{\text{max}}). 
\tag{16}\] **Lemma 2**.: _We consider a class of stationary policies, possibly randomized, denoted by \(\Omega\). A policy \(\omega(t)\) that belongs to the class \(\Omega\) is an i.i.d. process that takes probabilistic decisions independent of the state of the system, at every time slot \(t\). Let \(y(t)=c\alpha_{t}-c_{\text{max}}\), and \(c(t)=c\alpha_{t}\). Then, if the problem in (10) is strictly feasible, and the second moments of \(y(t)\) and \(g(t)\) are bounded, then there is \(\epsilon>0\) for which there is an \(\omega(t)\) policy such that the following holds_ \[\mathbb{E}\left\{y(t)\right\}\leq\epsilon,\,\mathbb{E}\left\{g^{*}(t)\right\} =g_{\omega}\leq g^{\text{opt}}+\epsilon,\] _where \(y^{*}(t)\) and \(g^{*}(t)\) are the resulting values of the \(\omega\) policy, and \(g^{\text{opt}}\) is the optimal value function in (10) achievable by any optimal stationary randomized policy._ Proof.: Since the cost for sampling and transmission is bounded, the second moment of \(c(t)\) is also bounded. Furthermore, since the values of the matrix \(C\) are bounded, the second moment of \(g(t)\) is also bounded. Therefore, we have \[\mathbb{E}\left\{c(t)^{2}\right\}\leq c^{2},\,\mathbb{E}\left\{g(t)^{2} \right\}\leq C_{\text{max}}^{2},\] where \(C_{\text{max}}\) is the maximum value of instantaneous actuation cost. Then, the boundedness assumptions in [19][Ch. 4.2.1] are satisfied. Therefore, from Theorem 4.5 in [19], we get the result. **Theorem 2**.: _The DPP algorithm satisfies any feasible set of average cost constraints._ Proof.: Since the DPP algorithm seeks to minimize the expression in (16), we obtain that \[\Delta(Z(t))+W\mathbb{E}\left\{g(t)|S_{t}\right\} \tag{17}\] \[\leq B+Z(t)\mathbb{E}\left\{y_{\text{DPP}}(t)\right\}+W\mathbb{E }\left\{g_{\text{DPP}}(t)\right\}\] (18) \[\leq B+Z(t)\mathbb{E}\left\{y^{*}(t)\right\}+W\mathbb{E}\left\{g^ {*}(t)\right\}, \tag{19}\] where \(y^{*}(t)\) and \(g^{*}(t)\) are the resulting values after applying policy \(\omega\). By considering the bound in Lemma 2, we get \[\Delta(Z(t))+W\mathbb{E}\left\{g(t)|S_{t}\right\}\leq B+\epsilon Z(t)+W(g^{ \text{opt}}+\epsilon), \tag{20}\] and taking \(\epsilon\to 0\), we have \[\Delta(Z(t))+W\mathbb{E}\left\{g(t)|S_{t}\right\}\leq B+Wg^{\text{opt}}. \tag{21}\] The above expression is in the exact form of the Lyapunov optimization theory [19][Theorem 4.2]. Therefore, the virtual queue is mean rate stable, and the average constraints are satisfied. ## V Simulation Results In this section, we compare the performance of the two proposed algorithms with the baseline algorithm proposed in [12] in terms of average real-time reconstruction error and cost of actuation error. The baseline policy decides on sampling and transmission whenever there is a discrepancy between the states at the source and at the destination, i.e., \(X_{t}\neq\bar{X}_{t}\). In our setup, the average cost constraint, \(c_{\text{max}}\), is set to \(0.2\), with a cost of sampling and transmission equal to \(1\). Therefore, a feasible policy decides \(20\%\) of the time for sampling and transmission. Note that the baseline algorithm does not take into account the sampling and transmission costs. We consider two cases for the source dynamics: a slowly varying source and a fast varying source. We consider that the Markov source has four states and is modeled as a birth-death process. The cost of the actuation error matrix remains fixed in both cases. 
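Before turning to the numbers, here is a compact sketch of the per-slot DPP rule (16) together with the virtual-queue update. It reflects our reading of (9) as the expected next-slot actuation cost and of (11) with service rate \(c_{\text{max}}\) (as in the definition of \(y(t)\)); the numeric parameters \(W\), \(p_s\) and the horizon are placeholder assumptions. The matrices used in Section V are given next.

```python
import numpy as np

def dpp_decision(X, Xhat, Z, P, C, p_s, c, c_max, W):
    """One slot of the Drift-Plus-Penalty rule: minimise W*g(alpha) + Z*(c*alpha - c_max),
    where g(alpha) is the expected actuation cost at the next slot, as in (9)."""
    g0 = P[X] @ C[:, Xhat]                                          # remain silent
    g1 = (1 - p_s) * (P[X] @ C[:, Xhat]) + p_s * (P[X] @ C[:, X])   # sample & transmit
    return 0 if W * g0 - Z * c_max <= W * g1 + Z * (c - c_max) else 1

def run_dpp(P, C, p_s=0.7, c=1.0, c_max=0.2, W=50.0, T=100_000, seed=2):
    """Simulate DPP; returns (avg actuation cost, avg transmission cost)."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    X, Xhat, Z = 0, 0, 0.0
    sum_C, sum_c = 0.0, 0.0
    for _ in range(T):
        alpha = dpp_decision(X, Xhat, Z, P, C, p_s, c, c_max, W)
        sum_C += C[X, Xhat]
        sum_c += alpha * c
        Z = max(Z - c_max, 0.0) + alpha * c        # virtual queue update, cf. (11)
        Xhat_next = X if (alpha and rng.random() < p_s) else Xhat
        X = rng.choice(n, p=P[X])
        Xhat = Xhat_next
    return sum_C / T, sum_c / T
```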
The values of the matrix are shown below \[C=\left(\begin{array}{c|cccc}&0&1&2&3\\ \hline 0&0&10&50&30\\ 1&10&0&40&20\\ 2&20&10&0&10\\ 3&30&20&40&0\end{array}\right), \tag{22}\] where element \(C_{i,j}\) is the cost of actuation error for the source being in state \(i\) while the estimated value, \(\hat{X}\), is \(j\). ### _Slowly varying source_ In Fig. 1, we compare the average reconstruction error and the average cost of actuation error resulting from the three algorithms for the case of a slowly varying source. The transition matrix of the Markov source is shown below: \[P=\begin{bmatrix}0.8&0.2&0&0\\ 0.1&0.8&0.1&0\\ 0&0.1&0.8&0.1\\ 0&0&0.2&0.8\end{bmatrix}. \tag{23}\] We consider that the source remains in the same state with a high probability \((0.8)\). In Fig. 1(a), we observe that the proposed DPP and VIA algorithms have very similar performance and provide a lower average reconstruction error than the baseline algorithm. Moreover, in Fig. 1(b), we see that the difference between the cost of actuation error of the baseline and that of the proposed algorithms increases. This is because the baseline algorithm decides on sampling and transmission whenever there is a discrepancy between the source and the destination, without explicitly taking into account the cost of actuation error. ### _Rapidly varying source_ In the case of a rapidly varying source, we consider that the Markov source remains in the same state with probability \(0.2\). The corresponding transition matrix is \[P=\begin{bmatrix}0.2&0.8&0&0\\ 0.4&0.2&0.4&0\\ 0&0.4&0.2&0.4\\ 0&0&0.8&0.2\end{bmatrix}. \tag{24}\] In Figs. 2(a) and 2(b), we provide results for the average reconstruction error and the average cost of actuation error, respectively. We observe that the performance of the baseline algorithm is better than that of VIA and DPP as far as the reconstruction error is concerned. However, the proposed algorithms still provide superior performance in terms of cost of actuation error, which is the metric of interest in this paper. The reason is that the algorithms proposed here take into account both the cost of actuation error and the statistics of the Markov source. Therefore, a main observation from the results is that _a low reconstruction error does not necessarily imply a low average cost of actuation error_. These are two different performance metrics, and in a remote monitoring system with delayed measurements, it is crucial to take into account the statistics of the source and how its states vary over the time horizon. ## VI Conclusion In this work, we studied the minimization of the cost of actuation error under resource constraints for real-time tracking of a remote source over a wireless channel. We provided an optimal solution to the optimization problem and a low-complexity algorithm that guarantees the satisfaction of the average cost constraints. Our simulation results show that the performance of the low-complexity algorithm is close to optimal. We observed that an optimal policy for this problem takes into account not only the discrepancy in state between the source and the destination, but also the cost of actuation error that results from this discrepancy, as well as the statistics of the source. Depending on the setup, it is sometimes beneficial to remain silent rather than to sample and transmit a status update to the destination, given that measurements reach the actuator with a delay.
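As a standalone numerical sanity check of the setup above (added here for illustration, not part of the paper), the snippet below evaluates the baseline policy of [12] exactly on the slowly varying source, by building the joint Markov chain over \((X_t,\hat{X}_t)\) induced by the matrices (22)–(23) and computing its stationary distribution. The channel success probability \(p_s=0.7\) is an assumed value, since it is not specified above, so the resulting numbers are only indicative; the point is that the baseline's average transmission cost is not tied to the budget \(c_{\text{max}}=0.2\).

```python
import numpy as np
from itertools import product

# Cost matrix (22) and slowly varying transition matrix (23).
C = np.array([[ 0, 10, 50, 30],
              [10,  0, 40, 20],
              [20, 10,  0, 10],
              [30, 20, 40,  0]], dtype=float)
P = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.2, 0.8]])
p_s, c = 0.7, 1.0            # assumed channel success probability, update cost
n = P.shape[0]

# Joint chain over (X, Xhat) under the baseline: transmit iff X != Xhat.
idx = {(i, j): i * n + j for i, j in product(range(n), range(n))}
T = np.zeros((n * n, n * n))
for (i, j), s in idx.items():
    alpha = int(i != j)
    for k in range(n):
        if alpha:
            T[s, idx[(k, i)]] += P[i, k] * p_s          # update delivered
            T[s, idx[(k, j)]] += P[i, k] * (1 - p_s)    # update lost
        else:
            T[s, idx[(k, j)]] += P[i, k]                # no transmission

# Stationary distribution pi: left eigenvector of T for eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

avg_actuation = sum(pi[s] * C[i, j] for (i, j), s in idx.items())
avg_transmit = sum(pi[s] * c * (i != j) for (i, j), s in idx.items())
print(f"baseline: actuation cost {avg_actuation:.2f}, transmission cost {avg_transmit:.2f}")
```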
2301.12913
Rational verification and checking for Nash and subgame-perfect equilibria in graph games
We study two natural problems about rational behaviors in multiplayer non-zero-sum sequential infinite duration games played on graphs: checking problems, that consist in deciding whether a strategy profile, defined by a Mealy machine, is rational; and rational verification, that consists in deciding whether all the rational answers to a given strategy satisfy some specification. We give the complexities of those problems for two major concepts of rationality: Nash equilibria and subgame-perfect equilibria, and for five major classes of payoff functions: parity, mean-payoff, quantitative reachability, energy, and discounted-sum.
Léonard Brice, Jean-François Raskin, Marie van den Bogaard
2023-01-30T14:14:50Z
http://arxiv.org/abs/2301.12913v2
# Rational verification and checking for Nash and subgame-perfect equilibria in graph games ###### Abstract We study two natural problems about rational behaviors in multiplayer non-zero-sum sequential infinite duration games played on graphs: checking problems, that consist in deciding whether a strategy profile, defined by a Mealy machine, is rational; and rational verification, that consists in deciding whether all the rational answers to a given strategy satisfy some specification. We give the complexities of those problems for two major concepts of rationality: Nash equilibria and subgame-perfect equilibria, and for five major classes of payoff functions: parity, mean-payoff, quantitative reachability, energy, and discounted-sum. ## 1 Introduction ### Motivations Formal methods are essential to guarantee the correctness of safety critical computer systems. Techniques like model-checking [15] or automated theorem proving [16] are now routinely used to develop systematically hardware pieces as well as embedded control systems. Nevertheless, there are contexts in which formal methods have not yet been applied successfully large-scale. That is the case for instance of multi-agent systems, which still represent a challenge for formal verification techniques, because they are usually composed of _heterogeneous_ components, ranging from traditional pieces of reactive code, to wholly autonomous robots or human users. Producing operational model abstractions for this diversity of sub-systems can be challenging, or even impossible. While it may be inconvenient, to say the least, to produce an operational model of the behavior of a human or a complex autonomous robot, it may be easier to identify the high level objectives of those components. And taking into account those objectives is often key for reasoning about the correctness of a system that interacts with those components. Indeed, a system is usually not supposed to be correct in all circumstances, but only when agents in its environment behave in a way that concurs with their own objectives. This is why we need to develop verification frameworks that allow us to reason on correctness in the presence of rational agents: agents whose behaviors are rational with regards to their high level objectives. In _rational verification_, a system needs to enforce a property \(\varphi\), not in all possible executions, but only in those executions in which agents of the environment behave rationally with regards to their own objectives. Rationality is the focus point of game theory and can be formalized in several ways. For instance, rational behavior for the agents can be modeled by the notion of _Nash equilibrium_ (NE) [29] in a multiplayer non-zero sum game graph [34]. NEs have been used in a few promising contributions, like in verification of non-repudiation and fair exchange protocols [26, 25, 14], or planning of self-driving cars interacting with human drivers [31], etc. Nevertheless, those works do not propose a general framework for rational verification and their contributions are rather specific to the particular application domains that they consider. There is thus a need for more systematic study of formal frameworks for rational verification. Such a study has been started recently. 
For instance, the authors of [22] study the automatic verification of an LTL specification in multi-agent systems that behave according to an NE, and in [13], the authors study a setting in which the environment has multiple objectives and only produces behaviors that are Pareto-optimal with regards to those objectives. This work contributes to that line of research by considering a notion of rationality formalized by _subgame-perfect equilibria_ (SPEs), a refinement of NEs that is better suited to formalize rationality in sequential games, since NEs suffer from non-credible threats in such contexts (see e.g. [30]). More precisely, we consider here two decision problems. First, in the _equilibrium checking problem_, the inputs are: (_i_) a multiplayer game graph, (_ii_) a finite state description of a (potentially infinite) set of strategy profiles for the players in the game, and (_iii_) a description of their objectives. The problem asks to check that all the strategy profiles in the set are NEs, or SPEs. This mathematical setting is well suited to formalize, for instance, that a high level description of a protocol, that contains nondeterminism, is such that all its implementations lead to rational behaviors of the entities participating to the protocol. This setting can be used to formalize elegantly the verification problems solved in [26, 14], for instance. Second, the _rational verification problem_ takes as inputs: (_i_) a multiplayer game graph with a designated player called _Leader_, (_ii_) a finite state description of a (potentially infinite) set of strategies for Leader, (_iii_) a description of the objective for Leader, and (_iv_) a description of the objectives of all the other players. It asks whether for all possible fixed strategies \(\sigma_{\mathbb{L}}\) of Leader (defined by the finite state description), for all possible rational responses of the other agents, the generated outcome satisfies Leader's objective. That problem is well-suited to formalize the verification of correctness of a controller interacting with an environment composed of rational agents, and intending to enforce a given property. To solve those problems, we first provide two _general constructions_ that reduce those problems to simpler ones. We show that they lead to algorithms that are computationally optimal for a large variety of classes of games with objectives ranging from Boolean \(\omega\)-regular objectives, like parity objectives, to quantitative ones, like mean-payoff objectives. Several lower complexity bounds require new constructions. We now detail our technical contributions before comparing our results with the existing results in the literature. ### Contributions We call the general construction that we introduce to solve checking problems the _deviation game_ (Definition 6): a game that simulates the parallel construction of a play compatible with the strategy profile, and of another play in which one player is deviating. Thus, the checking problems reduce to the simpler problem of deciding, given a game, whether there exists a play in which some player gets a better payoff than some other one (Corollary 1). To the best of our knowledge, there is no general polynomial-time reduction in the other direction, hence the latter problem may be strictly harder than the former, but it turns out to be sufficient to prove that all the checking problems in our five classes of games can be solved with simple polynomial time graph algorithms (Theorems 3, 5, 8, 11, and 15). 
Interestingly, there is one exception: in energy games, the checking problems are \(\mathsf{coNP}\)-complete (Theorem 12) -- and those problems are closely related to the succinct one counter automaton reachability problem. As for rational verification, our contribution also starts with a general construction, the _product game_ (Definition 7): we show that, given a game and a finite-state description of a set of Leader's strategies, one can incorporate the memory states of that finite-state description in the arena of the game in a way that Leader is implicitly forced to follow some strategy in the set. Then, we show that the rational verification problem reduces in polynomial time to the _universal threshold problem_, a problem that is easier to study algorithmically: given a game, does every equilibrium satisfy a given specification? Furthermore, for some of the game classes that we consider, that latter problem has already been tackled, with slightly different definitions, in the literature. Interestingly, we show that there is also a reduction in the reverse direction (Corollary 2). From that result, we build on various results of the literature to establish that rational verification problems are \(\mathsf{coNP}\)-complete in parity games and mean-payoff games (Theorems 4 and 6), and that in quantitative reachability game, Nash rational verification is \(\mathsf{coNP}\)-complete (Theorem 10), and subgame-perfect rational verification is \(\mathsf{PSPACE}\)-complete (Theorem 9). By a reduction from the halting problem of counter machines, we prove the undecidability of rational verification in energy games (Theorem 13 and 14), even when Leader plays against only two players, in the case of subgame-perfect rational verification. We show that Nash rational verification is co-recursively enumerable in those games, and leave that question open for subgame-perfect rational verification -- but we show that contrary to the Nash setting, SPEs may require infinite memory to enforce some payoffs (Proposition 2). In discounted-sum games, we show that Nash and subgame-perfect rational verification are at least as hard as the _target discounted-sum problem_ (Theorem 16), whose decidability is a long-standing open question. Nevertheless, we prove that those problems are recursively enumerable (Theorem 17). We also exhibit a subtlety in the definition of rational verification: if one wants to check that a strategy is such that _every_ rational response satisfies the specification, then when no such response exists, the strategy will be accepted. We show that in the case of mean-payoff games, the possible absence of subgame-perfect responses of the environment to a strategy of Leader may lead to results that can be considered as counter-intuitive. We therefore propose a stronger definition of the rational verification problem, called _achaotic rational verification_, to avoid that weakness: it consists in deciding whether a strategy satisfies the specification against every response of the environment that is _as rational as it can_, using the notions of \(\varepsilon\)-NE and \(\varepsilon\)-SPE, which are quantitative relaxations of NE and SPE. We show that such a problem is \(\mathsf{P^{NP}}\)-complete in mean-payoff games (Theorem 7), and that in every other setting (Nash rational verification, or subgame-perfect rational verification in the four other classes of games), it coincides with rational verification, since rational responses always exist in all those cases (Proposition 1). 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} & Nash-checking & SP-checking & Nash rat. verif. & Ach. Nash rat. verif. & SP rat. verif. & Ach. SP rat. verif. \\ \cline{2-9} & det. & non-det. & det. & non-det. & det. & non-det. & det. & non-det. \\ \hline Parity & \multicolumn{3}{c|}{poly.} & \multicolumn{3}{c|}{coNP-complete} & \multicolumn{3}{c|}{coNP-complete} & \multicolumn{3}{c}{coNP-complete} \\ \hline MP & \multicolumn{3}{c|}{poly.} & \multicolumn{3}{c|}{coNP-complete} & \multicolumn{3}{c|}{coNP-complete} & \multicolumn{3}{c}{\(\mathsf{P^{NP}}\)-complete (\(*\))} \\ \hline QR & \multicolumn{3}{c|}{poly.} & \multicolumn{3}{c|}{coNP-complete} & \multicolumn{3}{c}{\(\mathsf{PSPACE}\)-complete} \\ \hline Energy & poly. & \(\mathsf{coNP}\)-complete (\(*\)) & \multicolumn{3}{c|}{undecidable, co-RE (\(*\))} & \multicolumn{3}{c}{undecidable (\(*\))} \\ \hline DS & \multicolumn{3}{c|}{poly.} & \multicolumn{3}{c|}{TDS-hard, RE (\(*\))} & \multicolumn{3}{c}{TDS-hard, RE (\(*\))} \\ \hline \end{tabular} \end{table} Table 1: Synthesis of our results A synthesis of those results can be found in Table 1. Several of them are direct consequences of our general constructions, combined with results drawn from the literature; while some others required more effort and substantial original work. We highlighted the latter with a star (\(*\)) as they form the main technical contributions of our paper. ### Related works During the last decade, non-zero sum multiplayer games and their applications to reactive synthesis have raised a growing attention. For an overview of the area, the reader may refer to [4, 11, 12, 19, 27], and their references. The concept of _rational verification_ appears in [23], where Gutierrez, Najib, Perelli, and Wooldridge give the complexity of several related problems. They use a definition that is slightly different from ours: their problem consists in deciding, given a game and a specification, whether all NEs (or one of them) in that game satisfy the specification, without any player representing the system (Leader in our setting). Still, as we show with Corollary 2, that problem is strongly related to ours. In [32], they also study if \(\omega\)-regular properties are enforced by NEs induced by mean-payoff objectives. The objectives considered in those paper are only \(\omega\)-regular objectives while we consider a variety of objectives including quantitative objectives. Moreover, both in [23] and in [32] only NEs are considered while our main contributions are about SPEs, that are arguably better suited for reasoning about sequential games [30], but also require substantially more complex techniques. In [18], Filiot, Gentilini, and Raskin study _Stackelberg values_ of mean-payoff and discounted-sum two-player non-zero sum games, i.e. the payoff that Leader gets when the other player, _Follower_, plays the _best response_ that is available with regards to his own objective. Once Leader's strategy is defined, computing that payoff amounts to solve the rational verification problem, with one player in the environment. In [33], and later in [34], Ummels studies SPEs in parity games. He proves that they always exist, and that deciding whether there exists an SPE in a given parity game that gives each player a payoff between two given thresholds (the _constrained existence problem_, very close to the _universal threshold problem_ studied in this paper) is \(\mathsf{EXPTIME}\)-easy and \(\mathsf{NP}\)-hard. 
In [8], Brihaye, Bruyere, Goeminne, Raskin, and van den Bogaard, study the same problem in quantitative reachability games, and prove that it is \(\mathsf{PSPACE}\)-complete. In [20], Flesch and Predtetchinski give a general procedure to characterize SPEs. That procedure requires in general infinitely many iterations, where infinitely many two-player zero-sum of infinite size must be solved. In [5], Brice, Raskin, and van den Bogaard introduce the _negotiation function_, a tool that turns Flesch and Predtetchinski's procedure into effective algorithms for a large class of games. In [7], they use it to close the gap left by Ummels, proving that the constrained existence problem is \(\mathsf{NP}\)-complete in parity games, with methods that they use later in [6] to prove that the same problem is also \(\mathsf{NP}\)-complete in mean-payoff games. An alternative procedure to solve such SPE problems is proposed in [28], where Meunier constructs a two-player zero-sum game in which one player has a winning strategy if and only if there exists an SPE satisfying the desired constraint in the input game. This technique leads to effective algorithms only for games whose payoff functions have finite codomain. Thus it cannot be used for mean-payoff, discounted sum, and quantitative reachability games studied in this paper. For parity games, this construction does not lead to optimal algorithms. Energy objectives have also been widely studied, in connection with the study of vector additions systems with states and Petri nets. In [3], Bouyer, Fahrenberg, Larsen, Markey, and Srba show that solving two-player zero-sum energy games is in \(\mathsf{NP}\cap\mathsf{coNP}\). In [35], Velner, Chatterjee, Doyen, Henzinger, Rabinovitch, and Raskin generalize that result to energy games with multiple dimensions. In [27], Kupferman, Perelli, and Vardi study two-player zero-sum games where both players have their multi-dimensional energy objectives, and where the goal of one player is to keep her energy positive _or_ to make the other player run out of energy. As for discounted-sum objectives, they are defined for instance by Zwick and Paterson in [36], again in a two-player zero-sum setting. They are strongly related to the target discounted-sum problem, which is a long-standing open problem, as shown in [2] by Boker, Henzinger, and Otop. To the best of our knowledge, no algorithmic results are known for those classes of objectives in a multiplayer non-zero sum setting. ### Structure of the paper In Section 2, we introduce the necessary background. In Section 3, we present our two general tools, the deviation game and the product game. In Section 4, we exploit those tools to study parity games; in Section 5, mean-payoff games; in Section 6, quantitative reachability games; in Section 7, energy games; and in section 8, discounted-sum games. ## 2 Background ### Games We call _graph_ a finite directed graph, i.e. a pair \((V,E)\) where \(V\) is a finite set of _vertices_ and \(E\subseteq V\times V\) is a set of _edges_. For convenience, an edge \((u,v)\) is written \(uv\). It is called an _outgoing edge_ of \(u\), and an _ingoing edge_ of \(v\). A _path_ in \((V,E)\) is a finite or infinite sequence \(\alpha=\alpha_{0}\alpha_{1}\cdots\in V^{*}\cup V^{\alpha}\) such that for every index \(k\), we have \(\alpha_{k}\alpha_{k+1}\in E\). 
Given a path \(\alpha\), we write \(\operatorname{Occ}(\alpha)\) the set of vertices that occur in \(\alpha\), and \(\operatorname{Inf}(\alpha)\) the set of those that occur infinitely many times in \(\alpha\). For a given index \(k\), we write \(\alpha_{\leq k}\), or \(\alpha_{<k+1}\), the finite prefix \(\alpha_{0}\ldots\alpha_{k}\), and \(\alpha_{\geq k}\), or \(\alpha_{>k-1}\), the (finite or infinite) suffix \(\alpha_{k}\alpha_{k+1}\ldots\) A _cycle_ is a finite path \(c=c_{0}\ldots c_{n}\) with \(c_{n}c_{0}\in E\). A finite path \(\alpha\) is _simple_ if for every two indices \(k\neq\ell\), we have \(\alpha_{k}\neq\alpha_{\ell}\). We call _non-initialized game_ a tuple \(\mathcal{G}=(\Pi,V,(V_{i})_{i\in\Pi},E,\mu)\), where: * \(\Pi\) is a finite set of _players_; * \((V,E)\) is a graph, called the _arena_ of \(\mathcal{G}\), in which every vertex has at least one outgoing edge; * \((V_{i})_{i\in\Pi}\) is a partition of \(V\), in which \(V_{i}\) is the set of vertices _controlled_ by player \(i\); * a _play_ (resp. _history_) in the game \(\mathcal{G}\) is an infinite (resp. finite) path in the graph \((V,E)\), and the set of plays (resp. histories) in \(\mathcal{G}\) is denoted by \(\operatorname{Plays}\mathcal{G}\) (resp. \(\operatorname{Hist}\mathcal{G}\)); * the _payoff function_\(\mu:\operatorname{Plays}\mathcal{G}\to\mathbb{R}^{\Pi}\) maps each play \(\pi\) to the tuple \(\mu(\pi)=(\mu_{i}(\pi))_{i\in\Pi}\) of the players' _payoffs_. Given a set of players \(P\subseteq\Pi\), we often write \(V_{P}=\bigcup_{i\in P}V_{i}\). When \(i\) is a player and when the context is clear, we write \(-i\) for the set \(\Pi\setminus\{i\}\). A special player, called _Leader_ and denoted by the symbol \(\mathbb{L}\), is often assumed to belong to the set \(\Pi\). An _initialized game_ is a pair \((\mathcal{G},v_{0})\), often written \(\mathcal{G}_{\uparrow v_{0}}\), where \(\mathcal{G}\) is a non-initialized game and \(v_{0}\in V\) is a vertex called _initial vertex_. When the context is clear, we use the word _game_ for both initialized and non-initialized games. A play (resp. history) in the initialized game \(\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\) is a play (resp. history) that has \(v_{0}\) as first vertex. The set of plays (resp. histories) in \(\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\) is denoted by \(\mathrm{Plays}\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\) (resp. \(\mathrm{Hist}\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\)). We also write \(\mathrm{Hist}_{i}\mathbf{\mathcal{G}}\) (resp. \(\mathrm{Hist}_{i}\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\)) for the set of histories in \(\mathbf{\mathcal{G}}\) (resp. \(\mathbf{\mathcal{G}}_{\restriction\restriction_{0}}\)) whose last vertex is controlled by player \(i\). ### Notable classes of games Hereafter, we will focus on five classes of games. Some are classes of _Boolean games_, i.e. games in which all payoffs are equal either to \(0\) or to \(1\). For such games, we say that player \(i\)_loses_ the play \(\pi\) when \(\mu_{i}(\mathbf{\pi})=0\), and _wins_ it when \(\mu_{i}(\mathbf{\pi})=1\). The other classes are called classes of _quantitative games_. Parity games are Boolean games in which each player wins a play if that play satisfies some _parity condition_, a canonical encoding for \(\mathbf{\omega}\)-regular conditions. 
**Definition 1** (Parity).: The game \(\mathbf{\mathcal{G}}\) is a _parity game_ if for each player \(i\), there exists a mapping \(\kappa_{i}:V\rightarrow\mathbb{N}\), called _color mapping_, such that for every play \(\pi\), we have \(\mu_{i}(\mathbf{\pi})=1\) if the color \(\min_{v\in\mathrm{Inf}(\pi)}\kappa_{i}(v)\) is even, and \(\mu_{i}(\pi)=0\) if it is odd. Mean-payoff games are typical quantitative games: each action grants each player some _reward_, and the players seek to maximize their asymptotic average reward. **Definition 2** (Mean-payoff).: In a graph \((V,E)\), we define for each _reward mapping_\(r:E\rightarrow\mathbb{Q}\) the _mean-payoff function_\(\mathrm{MP}_{r}:h_{0}\ldots h_{n}\mapsto\frac{1}{n}\sum_{k}r\left(h_{k}h_{k+1}\right)\). Then, we write \(\underline{\mathrm{MP}}_{r}(\mathbf{\pi})=\liminf_{n}\mathrm{MP}_{r}(\mathbf{\pi}_{ \leq n}).\) The game \(\mathbf{\mathcal{G}}\) is a _mean-payoff game_ if there exists a tuple \((r_{i})_{i\in\Pi}\) of reward mappings, such that for each player \(i\), we have \(\mu_{i}=\underline{\mathrm{MP}}_{r_{i}}\). When the context is clear, we write \(\mathrm{MP}_{i}\) for \(\mathrm{MP}_{r_{i}}\), and \(\underline{\mathrm{MP}}_{i}\) for \(\underline{\mathrm{MP}}_{r_{i}}\). Quantitative reachability games constitute a quantitative version of the classical class of reachability games: each player seeks to reach a given target as fast as possible. **Definition 3** (Quantitative reachability).: The game \(\mathbf{\mathcal{G}}\) is a _quantitative reachability game_ if for each player \(i\), there exists a _target set_\(T_{i}\subseteq V\), such that for every play \(\pi\), we have: \[\mu_{i}(\mathbf{\pi})=\frac{1}{1+\inf\{n\in\mathbb{N}\ |\ \mathbf{\pi}_{n}\in T_{i}\}},\] with the conventions \(\inf\mathbf{\emptyset}=+\infty\) and \(\frac{1}{+\infty}=0\). Energy games are Boolean games, but based on a quantitative information, the _energy_ of the players. Like in mean-payoff games, the players receive (positive or negative) rewards for each action, but they seek now to keep the aggregated sum of those rewards, their _energy level_, always nonnegative. That quantity symbolizes any resource that an agent could have to store, such as fuel, money, coffee,... **Definition 4** (Energy).: In a graph \((V,E)\), we associate to each _reward mapping_\(r:E\rightarrow\mathbb{Z}\) the _energy level function_\(\mathrm{EL}_{r}:\mathrm{Hist}\mathbf{\mathcal{G}}\rightarrow\mathbb{N}\cup\{\bot\}\) defined by: * \(\mathrm{EL}_{r}(h_{0})=0\); * \(\mathrm{EL}_{r}(h_{0}\ldots h_{n+1})=\mathrm{EL}_{r}(h_{0}\ldots h_{n})+r(h_{ n}h_{n+1})\) if \(\mathrm{EL}_{r}(h_{0}\ldots h_{n})\neq\bot\), and \(\mathrm{EL}_{r}(h_{0}\ldots h_{n})+r(h_{n}h_{n+1})\geq 0\); * \(\mathrm{EL}_{r}(h_{0}\ldots h_{n+1})=\bot\) otherwise. The game \(\mathcal{G}\) is an _energy game_ if there exists a tuple \((r_{i})_{i\in\Pi}\) of reward mappings such that for each \(i\) and every \(\pi\), we have \(\mu_{i}(\pi)=0\) if \(\operatorname{EL}_{r_{i}}(\pi_{\leq n})=\bot\) for some \(n\), and \(\mu_{i}(\pi)=1\) otherwise. When the context is clear, we write \(\operatorname{EL}_{i}\) for \(\operatorname{EL}_{r_{i}}\). Finally, discounted-sum games are quantitative games, where each player's payoff is obtained by summing the rewards that player obtains with some discount factor applied as the play goes along. 
**Definition 5** (Discounted-sum).: In a graph \((V,E)\), we define for each _reward mapping_\(r:E\to\mathbb{Q}\) and each rational _discount factor_\(\lambda\in(0,1)\) the _discounted sum function_\(\operatorname{DS}^{\lambda}_{r}:h\mapsto\sum_{k}\lambda^{k}r(h_{k}h_{k+1})\). Then, we write \(\operatorname{DS}^{\lambda}_{r}(\pi)=\lim_{n}\operatorname{DS}^{\lambda}_{r} (\pi_{\leq n}).\) The game \(\mathcal{G}\) is a _discounted-sum game_ if there exists a discount factor \(\lambda\in(0,1)\) and a tuple \((r_{i})_{i\in\Pi}\) of reward mappings such that for each \(i\) and every \(\pi\), we have \(\mu_{i}(\pi)=\operatorname{DS}^{\lambda}_{r_{i}}(\pi)\). When the context is clear, we write \(\operatorname{DS}_{i}\) for \(\operatorname{DS}^{\lambda}_{r_{i}}\). Every game \(\mathcal{G}\) from one of those five classes can be encoded with a finite number of bits. We write \(\|\mathcal{G}\|\) that number. An example of mean-payoff game is given in Figure 1. That game has two players: player \(\bigcirc\), who controls the vertices \(a\) and \(c\), and player \(\Box\), who controls the vertex \(b\). The initial vertex is \(v_{0}=a\). We wrote above each edge the rewards that both players get when that edge is taken. Thus, three types of plays are possible in that game: the one that loops on the vertex \(a\) gives both players the payoff \(0\); the ones that loop on the vertex \(b\) give both players the payoff \(1\); and the ones that loop on the vertex \(c\) give both players the payoff \(0\). ### Strategies A _strategy_ for player \(i\) in the initialized game \(\mathcal{G}_{\restriction v_{0}}\) is a mapping \(\sigma_{i}:\operatorname{Hist}_{i}\mathcal{G}_{\restriction v_{0}}\to V\), such that \(v\sigma_{i}(hv)\) is an edge of \((V,E)\) for every \(hv\). A history \(h\) is _compatible_ with a strategy \(\sigma_{i}\) if and only if \(h_{k+1}=\sigma_{i}(h_{0}\ldots h_{k})\) for all \(k\) such that \(h_{k}\in V_{i}\). That definition naturally extends to plays. A _strategy profile_ for \(P\subseteq\Pi\) is a tuple \(\bar{\sigma}_{P}=(\sigma_{i})_{i\in P}\), where each \(\sigma_{i}\) is a strategy for player \(i\) in \(\mathcal{G}_{\restriction v_{0}}\). A play, or a history, is _compatible_ with \(\bar{\sigma}_{P}\) if it is compatible with every \(\sigma_{i}\) for \(i\in P\). A _complete_ strategy profile, usually written \(\bar{\sigma}\), is a strategy profile for \(\Pi\). Exactly one play is compatible with the strategy profile \(\bar{\sigma}\): we call it its _outcome_ and write it \(\langle\bar{\sigma}\rangle\). When \(\bar{\tau}_{P}\) and \(\bar{\tau}^{\prime}_{Q}\) are two strategy profiles with \(P\cap Q=\emptyset\), we write \((\bar{\tau}_{P},\bar{\tau}^{\prime}_{Q})\) the strategy profile \(\bar{\sigma}_{P\cup Q}\) such that \(\sigma_{i}=\tau_{i}\) for \(i\in P\), and \(\sigma_{i}=\tau^{\prime}_{i}\) for \(i\in Q\). In a strategy profile \(\bar{\sigma}_{P}\), the \(\sigma_{i}\)'s domains are pairwise disjoint. Therefore, we can consider \(\bar{\sigma}_{P}\) as one function: for \(hv\in\operatorname{Hist}\mathcal{G}_{\restriction v_{0}}\) such that \(v\in\bigcup_{i\in P}V_{i}\), we liberally write \(\bar{\sigma}_{P}(hv)\) for \(\sigma_{i}(hv)\) with \(i\) such that \(v\in V_{i}\). ### Equilibria and rational responses In what follows, we study rational behaviors of players in games: we have, therefore, to define our rationality concepts. Let us start with the most classical one: _Nash equilibrium_. 
Figure 1: An example of mean-payoff game The strategy profile \(\bar{\sigma}\) is a _Nash equilibrium_ (resp. \(\mathbb{L}\)_-fixed Nash equilibrium_) -- or (\(\mathbb{L}\)-fixed) _NE_ for short -- in \(\mathcal{G}_{\uparrow v_{0}}\) if for each player \(i\) (resp. each player \(i\neq\mathbb{L}\)) and every strategy \(\sigma^{\prime}_{i}\), called _deviation of \(\sigma_{i}\)_, we have \(\mu_{i}\left(\langle\sigma^{\prime}_{i},\bar{\sigma}_{-i}\rangle\right)\leq \mu_{i}\left(\langle\bar{\sigma}\rangle\right)\). When \(\bar{\sigma}\) is _not_ an NE (or \(\mathbb{L}\)-fixed NE), we call _profitable deviations_ the deviations that do not satisfy that inequality. As an example, in the game given in Figure 1, two types of Nash equilibria can be found: those that eventually loop on the vertex \(b\), and therefore give both players the payoff \(1\); and those that loop on the vertex \(a\), but in which player \(\Circle\) has no profitable deviation, because if she goes to the vertex \(b\), player \(\Box\) threatens to go to the vertex \(c\) -- and player \(\Box\) has no profitable deviation, because he does never have a choice to do. However, player \(\Box\)'s threat is not _credible_, since going to the vertex \(c\) would give him the payoff \(0\), while he could stay on the vertex \(b\) and get the payoff \(1\). A stronger rationality concept, that avoids that phenomenon, is the concept of _subgame-perfection_. Let \(h\!v\) be a history in the game \(\mathcal{G}\). The _subgame_ of \(\mathcal{G}\) after \(h\!v\) is the game \(\mathcal{G}_{\uparrow h\!v}=\left(\Pi,V,(V_{i})_{i},E,\mu_{\uparrow h\!v} \right)_{\uparrow v}\), where \(\mu_{\uparrow h\!v}\) maps each play \(\pi\) to its payoff in \(\mathcal{G}\), assuming that the history \(h\!v\) has already been played, i.e. to the payoff \(\mu_{\uparrow h\!v}(\pi)=\mu(h\!\pi)\). Note that the initialized game \(\mathcal{G}_{\uparrow v_{0}}\) is also the subgame of \(\mathcal{G}\) after the one-vertex history \(v_{0}\). If \(\sigma_{i}\) is a strategy in \(\mathcal{G}_{\uparrow v_{0}}\), its _substrategy_ after \(h\!v\) is the strategy \(\sigma_{i\!\uparrow h\!v}\) in \(\mathcal{G}_{\uparrow h\!v}\), defined by \(\sigma_{i\!\uparrow h\!v}(h^{\prime})=\sigma_{i}(hh^{\prime})\) for every \(h^{\prime}\in\operatorname{Hist}_{\mathcal{G}_{\uparrow h\!v}}\). The strategy profile \(\bar{\sigma}\) is a (\(\mathbb{L}\)_-fixed_) _subgame-perfect equilibrium_ -- or (\(\mathbb{L}\)_-fixed_) _SPE_ for short -- in \(\mathcal{G}_{\uparrow v_{0}}\) if and only if for every history \(h\) in \(\mathcal{G}_{\uparrow v_{0}}\) (resp. every history \(h\) compatible with \(\sigma_{\mathbb{L}}\)), the strategy profile \(\bar{\sigma}_{\uparrow h}\) is a (\(\mathbb{L}\)-fixed) Nash equilibrium in the subgame \(\mathcal{G}_{\uparrow h}\). Nash and subgame-perfect equilibria entail two notions of rationality for the environment's responses to a strategy \(\sigma_{\mathbb{L}}\) of Leader. A _Nash response_ to \(\sigma_{\mathbb{L}}\) is a strategy profile \(\bar{\sigma}_{-\mathbb{L}}\) such that the strategy profile \(\bar{\sigma}=(\sigma_{\mathbb{L}},\bar{\sigma}_{-\mathbb{L}})\) is an \(\mathbb{L}\)-fixed NE. A _subgame-perfect response_ to \(\sigma_{\mathbb{L}}\) is a strategy profile \(\bar{\sigma}_{-\mathbb{L}}\) such that \(\bar{\sigma}\) is an \(\mathbb{L}\)-fixed SPE. The set of Nash (resp. subgame-perfect) responses to \(\sigma_{\mathbb{L}}\) is written \(\operatorname{NR}(\sigma_{\mathbb{L}})\) (resp. \(\operatorname{SPR}(\sigma_{\mathbb{L}})\)). 
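The discussion of the Figure 1 game can be replayed mechanically. The short sketch below is a hand-coded reconstruction of that example: the per-edge rewards are our own assumption, chosen only to match the payoffs described in the text (looping on \(a\) or \(c\) gives both players \(0\), looping on \(b\) gives both players \(1\)), and the mean payoffs of the ultimately periodic outcomes are approximated over a long horizon.

```python
from fractions import Fraction

# Assumed rewards (player circle, player box) for each edge of the Figure 1 game.
edges = {('a', 'a'): (0, 0), ('a', 'b'): (0, 0),
         ('b', 'b'): (1, 1), ('b', 'c'): (0, 0),
         ('c', 'c'): (0, 0)}

def mean_payoffs(profile, start='a', horizon=1000):
    """Long-horizon average rewards of the outcome of a memoryless profile,
    approximating the liminf mean payoff of Definition 2."""
    v, totals = start, (Fraction(0), Fraction(0))
    for _ in range(horizon):
        nxt = profile[v]
        totals = tuple(t + r for t, r in zip(totals, edges[(v, nxt)]))
        v = nxt
    return tuple(t / horizon for t in totals)

# The NE with a non-credible threat: circle stays in a; box would leave b for c.
sigma = {'a': 'a', 'b': 'c', 'c': 'c'}
print(mean_payoffs(sigma))                            # outcome a^omega: (0, 0)
# Circle's deviation "go to b" is not profitable against sigma ...
print(mean_payoffs({**sigma, 'a': 'b'}))              # a b c^omega: still (0, 0)
# ... but after the history a.b, box's prescribed move is itself irrational:
print(mean_payoffs({**sigma, 'a': 'b', 'b': 'b'}))    # a b^omega: close to (1, 1)
```

The last comparison is exactly the non-credible threat discussed above: the profile is an NE from \(a\), but in the subgame after \(ab\), player \(\Box\) would profit from staying in \(b\), so it is not an SPE.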
Finally, let \(\rho\in\{\operatorname{Nash},\operatorname{subgame-perfect}\}\). We then call \(\rho\)_-equilibria_ the Nash equilibria when \(\rho=\operatorname{Nash}\), and the subgame-perfect equilibria when \(\rho=\operatorname{subgame-perfect}\). We will similarly talk about \(\mathbb{L}\)_-fixed \(\rho\)-equilibria_, and \(\rho\)_-responses_. We write \(\rho\mathrm{R}(\sigma_{\mathbb{L}})\) the set of \(\rho\)-responses to a strategy \(\sigma_{\mathbb{L}}\). ### Mealy machines A _multiplayer Mealy machine_ on a game \(\mathcal{G}\) is a tuple \(\mathcal{M}=(Q,q_{0},\Delta)\) where \(Q\) is a finite set of _states_, where \(q_{0}\in Q\) is the _initial state_, and where \(\Delta\subseteq(Q\times V\times Q\times V)\) is a finite set of _transitions_, such that for every \((p,u,q,v)\in\Delta\), we have \(uv\in E\), and such that for every \(p\in Q\) and \(u\in V\), there exists a transition \((p,u,q,v)\in\Delta\). The specialist reader will have noted that this definition is more general than the classical one, in which it is often assumed that for each \(p\) and \(u\), there exists exactly one such transition. Hereafter, such a Mealy machine will be called _deterministic_ Mealy machine. Results about deterministic Mealy machines can be applied to _programs_, which are supposed to run deterministically; we chose to take a more general definition to capture also _protocols_, which may be given to agents who would still have some room for manoeuvre in how they apply it. A strategy profile \(\bar{\sigma}\) in \(\mathcal{G}_{\uparrow v_{0}}\) is _compatible_ with \(\mathcal{M}\) if there exists a mapping \(h\mapsto q_{h}\) that maps every history \(h\) in \(\mathcal{G}_{\uparrow v_{0}}\) to a state \(q_{h}\in Q\), such that for every \(h\!v\in\operatorname{Hist}\mathcal{G}_{\uparrow v_{0}}\), we have \((q_{h},v,q_{h\!v},\bar{\sigma}(h\!v))\in\Delta\). The set of strategy profiles in \(\mathcal{G}_{\uparrow v_{0}}\) compatible with \(\mathcal{M}\) is written \(\operatorname{Comp}_{\uparrow v_{0}}(\mathcal{M})\). If \(\mathcal{M}\) is deterministic, then there is exactly one strategy profile compatible with \(\mathcal{M}\); we call it a _finite-memory_ strategy profile. A _Mealy machine for player \(i\)_ on a game \(\mathcal{G}\) is a tuple \(\mathcal{M}=(Q,q_{0},\Delta)\) where \(Q\) is a finite set of _states_, where \(q_{0}\in Q\) is the _initial state_, and where \(\Delta\subseteq(Q\times V_{-i}\times Q)\cup(Q\times V_{i}\times Q\times V)\) is a finite set of _transitions_, such that for every \((p,u,q)\) or \((p,u,q,v)\in\Delta\), we have \(uv\in E\), and such that for every \(p\in Q\) and \(u\in V\), there exists a transition \((p,u,q)\) or \((p,u,q,v)\in\Delta\). Again, a one-player Mealy machine is _deterministic_ if for each \(p\) and \(u\), there exists exactly one such transition. The strategies _compatible_ with \(\mathcal{M}\), as well as _finite-memory_ strategies, are defined analogously to the multiplayer case. Every Mealy machine \(\mathcal{M}\) can be encoded with a finite number of bits. We write \(\|\mathcal{M}\|\) that number. As an example, Figures 2 and 3 depict two Mealy machines on the game of Figure 1. Each arrow from a state \(p\) to a state \(q\) labeled \(u|v\) denotes the existence of a transition \((p,u,q,v)\) (from the state \(p\), the machine reads the vertex \(u\), it switches to the state \(q\) and outputs the vertex \(v\)). 
Each arrow from a state \(p\) to a state \(q\) labeled \(u\) denotes the existence of a transition \((p,u,q)\) (from the state \(p\), the machine reads the vertex \(u\), it switches to the state \(q\) and outputs nothing). The first one is a deterministic multiplayer Mealy machine. The only strategy profile compatible with it can be described as follows: from the vertex \(a\), player \(\Circle\) stays in \(a\). If she goes to the vertex \(b\), then player \(\Box\) loops once on \(b\), and then goes to the vertex \(c\). The second one is a machine for player \(\Box\), that is not deterministic: from the state \(q_{0}\), reading the vertex \(b\), the machine stays in \(q_{0}\) but it can output either \(b\) or \(c\). The strategies that are compatible with it can be described as follows: when player \(\Box\) has to play, if the vertex \(a\) was seen an odd number of times, then he stays in \(b\); in the opposite case, he can either stay in \(b\) or eventually go to \(c\). ### Decision problems Let us now define the decision problems we are studying in this paper. The following problems are defined for each class of games \(\mathcal{C}\), for each \(\rho\in\{\operatorname{Nash},\operatorname{subgame}\text{-perfect}\}\), and in both deterministic and non-deterministic setting. **Problem 1** ((Deterministic) \(\rho\)-checking problem in the class \(\mathcal{C}\)).: Given a game \(\mathcal{G}_{\upharpoonright v_{0}}\in\mathcal{C}\) and a (deterministic) Mealy machine \(\mathcal{M}\) on \(\mathcal{G}\), is every \(\bar{\sigma}\in\operatorname{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\) a \(\rho\)-equilibrium? **Problem 2** ((Deterministic) \(\rho\)-rational verification problem in the class \(\mathcal{C}\)).: Given a game \(\mathcal{G}_{\upharpoonright v_{0}}\in\mathcal{C}\), a threshold \(t\in\mathbb{Q}\) and a (deterministic) Mealy machine \(\mathcal{M}\) on \(\mathcal{G}\), is every \(\mathbb{L}\)-fixed \(\rho\)-equilibrium \(\bar{\sigma}\) with \(\sigma_{\mathbb{L}}\in\operatorname{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\) such that \(\mu_{\mathbb{L}}(\langle\bar{\sigma}\rangle)>t\)? Note that every SPE is an NE. So, for the checking problems, a positive instance for SPEs is also a positive instance for NEs. And conversely, for rational verification problems, a positive instance for NEs is also a positive instance for SPEs. Figure 3: A non-deterministic one-player Mealy machine Figure 2: A deterministic multiplayer Mealy machine ### A characterization of Nash equilibria Before moving on to our results, let us end this section with a lemma that will be used often along this paper, stating that an NE outcome is a play in which no player has a payoff smaller than what they can enforce when they deviate, and when all the other players make everything they can to punish them. That is a classical result that can be found under various wordings (see for instance Theorem 1 in [5]); here, we present it under the form that will be the most useful for us hereafter. 
**Lemma 1**.: _In every game \(\mathcal{G}\), the NE outcomes are exactly the plays \(\pi\) such that for each player \(i\), for every vertex \(\pi_{k}\in V_{i}\), there exists a strategy profile \(\bar{\tau}_{-i}^{k}\) from \(\pi_{k}\) such that \(\sup_{\tau_{i}}\mu_{i}(\pi_{<k}\langle\bar{\tau}_{-i}^{k},\tau_{i}\rangle)\leq \mu_{i}(\pi)\)._ Proof.: Let \(\pi\) be a Nash equilibrium outcome, and let \(\bar{\sigma}\) be an NE such that \(\langle\bar{\sigma}\rangle=\pi\): then, for each \(i\) and every \(\pi_{k}\), the strategy profile \(\bar{\sigma}_{|\pi_{\leq k}}\) satisfies the hypothesis given above. Conversely, if such strategy profiles exist, then we can define \(\bar{\sigma}\) as the strategy profile that follows the play \(\pi\) and that, after a one-shot deviation \(\pi_{\leq k}v\) with \(v\neq\pi_{k+1}\), follows the strategy profile \(\tau_{-i}^{k}\), and any strategy for player \(i\). In the next section, we present two general constructions that we will use in the rest of the paper. ## 3 General tools ### To solve checking problems: the deviation games Deciding the checking problems amounts to searching for a profitable deviation, either to the outcome (Nash-checking problem) or in some subgame (subgame-perfect-checking problem). That can be achieved through a new game structure, called _deviation game_, in which a play simulates two parallel plays in the original game: one in which the players have to follow the outputs of the Mealy machine, and one in which one of them is allowed to deviate from it. In that game, two fresh players will measure the payoffs of the deviating player in the original game: one, called _Adam_, is measuring the player's payoff in the non-deviating play, and the other one, called _Eve_, is measuring their payoff in the deviating one. **Definition 6** (Deviation games).: Let \(\mathcal{G}_{|\forall v_{0}}\) be a game, and let \(\mathcal{M}\) be a multiplayer Mealy machine in \(\mathcal{G}_{|\forall v_{0}}\). The associated _Nash deviation game_ is the game \(\mathrm{NDev}(\mathcal{G},\mathcal{M})_{|\{q_{0},v_{0}\}}=(\{\mathbb{A}, \mathbb{E}\},V^{\prime},(V^{\prime}_{\mathbb{A}},V^{\prime}_{\mathbb{E}}),E^{ \prime},\mu^{\prime})_{|\{q_{0},v_{0}\}}\), where: * the players are _Adam_, written \(\mathbb{A}\), and _Eve_, written \(\mathbb{E}\). * The vertex space is \(V^{\prime}=\{(q_{0},v_{0})\}\cup(Q\times V\times\Pi)\cup(Q\times V\times\Pi \times Q\times V)\), and Eve controls every vertex. * The set \(E^{\prime}\) contains: * the edge \((q_{0},v_{0})(q,v,i)\) for each player \(i\in\Pi\) and each transition \((q_{0},v_{0},q,v)\in\Delta\) (the player \(i\) is chosen as the deviating player); * the edge \((q_{0},v_{0})(q,v,i,q,v^{\prime})\) for each player \(i\in\Pi\), each transition \((q_{0},v_{0},q,v)\in\Delta\), and each edge \(v_{0}v^{\prime}\in E\) with \(v^{\prime}\neq v\) (the player \(i\) is chosen as the deviating player, and starts to deviate immediately); * the edge \((p,u,i)(q,v,i)\) for each \((p,u,q,v)\in\Delta\) (player \(i\) has not started to deviate yet); * the edge \((p,u,i)(q,v,i,q,v^{\prime})\) for each transition \((p,u,q,v)\in\Delta\) with \(u\in V_{i}\) and each edge \(uv^{\prime}\in E\) with \(v^{\prime}\neq v\) (player \(i\) starts to deviate); * the edge \((p,u,i,p^{\prime},u^{\prime})(q,v,i,q^{\prime},v^{\prime})\) for every two transitions \((p,u,q,v)\), \((p^{\prime},u^{\prime},q^{\prime},w)\in\Delta\) with either \(w=v^{\prime}\) or \(u^{\prime}\in V_{i}\) (player \(i\) is deviating). 
* Let \(\pi\) be a play in this game, of the form: \[\pi =(q_{0},v_{0})(q_{1},v_{1},i)\ldots(q_{k-1},v_{k-1},i)\] \[(q_{k},v_{k},i,q^{\prime}_{k},v^{\prime}_{k})(q_{k+1},v_{k+1},i,q^{\prime}_{k+1},v^{\prime}_{k+1})\ldots\] Then, we define \(\mu^{\prime}_{\mathbb{A}}(\pi)=\mu(v_{0}\ldots v_{k}v_{k+1}\ldots)\), and \(\mu^{\prime}_{\mathbb{E}}(\pi)=\mu(v_{0}\ldots v_{k}v^{\prime}_{k+1}\ldots)\). When \(\pi\) has the form \(\pi=(q_{0},v_{0})(q_{1},v_{1},i)(q_{2},v_{2},i)\ldots\), i.e. when player \(i\) never deviates, we define \(\mu^{\prime}_{\mathbb{A}}(\pi)=\mu^{\prime}_{\mathbb{E}}(\pi)=\mu(v_{0}v_{1}v_{2}\ldots)\). The _subgame-perfect deviation game_ \(\mathrm{SPDev}(\mathcal{G},\mathcal{M})_{\upharpoonright(q_{0},v_{0})}\) is defined similarly with, additionally: * the edge \((p,u,i)(q,w,i)\) for each player \(i\in\Pi\), each transition \((p,u,q,v)\in\Delta\) and each edge \(uw\in E\) (player \(i\) has not started to deviate, and Eve is looking for a subgame in which a profitable deviation exists); * the edge \((q_{0},v_{0})(q,w,i)\) for each \((q_{0},v_{0},q,v)\in\Delta\) and every \(v_{0}w\in E\) (player \(i\) is chosen as the deviating player, and Eve is looking for a subgame). Figure 4: A Nash deviation game Figure 5: A subgame-perfect deviation game As an example, if \(\mathcal{G}_{\upharpoonright v_{0}}\) is the game of Figure 1 and if \(\mathcal{M}\) is the machine given in Figure 2, then Figure 4 represents the game \(\mathrm{NDev}(\mathcal{G}_{\upharpoonright v_{0}},\mathcal{M})\), and Figure 5 represents the game \(\mathrm{SPDev}(\mathcal{G}_{\upharpoonright v_{0}},\mathcal{M})\). For the sake of readability, the vertices that are not accessible from \((q_{0},v_{0})\) have been omitted. These games are also mean-payoff games, and the rewards are given on the figures -- the rewards equal to \(0\) have been omitted. On Figure 5, the thick arrows highlight a play that gives a better payoff to Eve than to Adam; as we will see now, that means that the only strategy profile compatible with the machine \(\mathcal{M}\) is not an SPE. No such play can be found on the game of Figure 4, because that strategy profile is an NE. In other words, the game \(\mathcal{G}_{\upharpoonright v_{0}}\) and the machine \(\mathcal{M}\) form a positive instance of the (deterministic) Nash checking problem, but not of the (deterministic) subgame-perfect checking problem. **Theorem 1** (App. A).: _There exists a strategy profile \(\bar{\sigma}\in\mathrm{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\) that is not an NE (resp. SPE) if and only if there exists a play \(\pi\) in the game \(\mathrm{NDev}(\mathcal{G},\mathcal{M})_{\upharpoonright(q_{0},v_{0})}\) (resp. \(\mathrm{SPDev}(\mathcal{G},\mathcal{M})_{\upharpoonright(q_{0},v_{0})}\)) such that \(\mu_{\mathbb{A}}(\pi)<\mu_{\mathbb{E}}(\pi)\)._ In other words, the checking problems amount to solving the following problem in the deviation games. **Problem 3** (Privilege problem in the class \(\mathcal{C}\)).: Given a game \(\mathcal{G}_{\upharpoonright v_{0}}\in\mathcal{C}\) with two players \(\mathbb{A}\) and \(\mathbb{E}\), called _Adam_ and _Eve_, is every play \(\pi\) in \(\mathcal{G}_{\upharpoonright v_{0}}\) such that \(\mu_{\mathbb{A}}(\pi)\geq\mu_{\mathbb{E}}(\pi)\)?
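When the underlying game, and hence its deviation games, are mean-payoff games, as in the running example, the privilege problem comes down to searching for a cycle, reachable from the initial vertex, on which Eve's mean reward strictly exceeds Adam's; this is made precise in Section 5. The following is a minimal sketch of that cycle search, assuming the deviation game is handed over as a plain edge list carrying one reward per player per edge; the function name and the input format are illustrative and not part of the constructions above.

```python
from collections import defaultdict

def eve_privileged_cycle_exists(edges, v0):
    """Sketch: decide whether some cycle reachable from v0 has a strictly
    higher mean reward for Eve than for Adam, i.e. is a negative cycle
    for the reward r = r_Adam - r_Eve.
    `edges` is a list of tuples (u, v, r_adam, r_eve)."""
    # Restrict attention to the part of the graph reachable from v0.
    succ = defaultdict(list)
    for u, v, _, _ in edges:
        succ[u].append(v)
    reachable, stack = {v0}, [v0]
    while stack:
        for v in succ[stack.pop()]:
            if v not in reachable:
                reachable.add(v)
                stack.append(v)
    sub = [(u, v, ra - re) for (u, v, ra, re) in edges if u in reachable]
    # Bellman-Ford with every distance initialised to 0: after |V| - 1
    # relaxation rounds, any edge that can still be relaxed witnesses a
    # negative cycle for r, i.e. a cycle with MP_Adam < MP_Eve on it.
    dist = {v: 0 for v in reachable}
    for _ in range(len(reachable) - 1):
        for u, v, w in sub:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in sub)
```

On the deviation games of Figures 4 and 5 described above, such a check would respectively fail and succeed, matching the fact that the profile of Figure 2 is an NE but not an SPE.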
Moreover, the sizes of the deviation games are bounded by a polynomial function of \(\|\mathcal{G}\|\) and \(\|\mathcal{M}\|\); and, when the game \(\mathcal{G}\) belongs to a class \(\mathcal{C}\) among the five classes defined in Section 2.2, then all deviation games that can be constructed from it also belong to the class \(\mathcal{C}\). Hence the following. **Corollary 1**.: _Let \(\mathcal{C}\) be a class of games, among the classes of parity games, mean-payoff games, quantitative reachability games, energy games, and discounted-sum games. Then, in the class \(\mathcal{C}\), the Nash-checking and the subgame-perfect checking problems, deterministic or not, reduce to the privilege problem._ ### To solve rational verification: the product game Although very intuitive, the rational verification problems are quite hard to study as they are. That is because their instances include two graph structures: a game, and a Mealy machine. However, for the environment of Leader, responding rationally to Leader's strategies that are compatible with \(\mathcal{M}\) amounts to playing rationally in a larger game, in which the memory states of Leader have been incorporated in the arena. **Definition 7** (Product game).: Let \(\mathcal{G}_{\upharpoonright v_{0}}\) be a game, and let \(\mathcal{M}\) be a Mealy machine for Leader in \(\mathcal{G}\). Their _product game_ is the game \(\mathcal{G}_{\upharpoonright v_{0}}\otimes\mathcal{M}=(\Pi\cup\{\mathbb{D}\},V^{\prime},(V^{\prime}_{i})_{i},E^{\prime},\mu^{\prime})_{\upharpoonright(v_{0},q_{0})}\) where the player \(\mathbb{D}\), called _Demon_, interprets the non-deterministic choices of the machine \(\mathcal{M}\). Formally: * \(V^{\prime}=(V\times Q)\cup(V\times Q\times Q)\); * \(V^{\prime}_{\mathbb{L}}=\emptyset\), \(V^{\prime}_{i}=V_{i}\times Q\times Q\) for every \(i\in\Pi\setminus\{\mathbb{L}\}\), and \(V^{\prime}_{\mathbb{D}}=(V\times Q)\cup(V_{\mathbb{L}}\times Q\times Q)\); * the set \(E^{\prime}\) contains: * the edge \((u,p)(u,p,q)\) for each transition \((p,u,q)\in\Delta\) (with \(u\not\in V_{\mathbb{L}}\)), or \((p,u,q,v)\in\Delta\) (with \(u\in V_{\mathbb{L}}\)); * the edge \((u,p,q)(v,q)\) for each transition \((p,u,q,v)\in\Delta\) (with \(u\in V_{\mathbb{L}}\)); * the edge \((u,p,q)(v,q)\) for each transition \((p,u,q)\in\Delta\), and each edge \(uv\in E\) (with \(u\not\in V_{\mathbb{L}}\)); * each payoff function \(\mu^{\prime}_{i}\) maps every play \((\pi_{0},q_{0})(\pi_{0},q_{0},q_{1})(\pi_{1},q_{1})\dots\) to the payoff \(\mu_{i}(\pi)\) if \(i\neq\mathbb{D}\), and to the payoff \(0\) if \(i=\mathbb{D}\). As an example, we have represented on Figure 6 the game \(\mathcal{G}_{\upharpoonright v_{0}}\otimes\mathcal{M}\), when \(\mathcal{G}_{\upharpoonright v_{0}}\) is the game of Figure 1 and \(\mathcal{M}\) the machine of Figure 3. Leader is then identified with player \(\Box\), and Demon's vertices are represented by dotted boxes. Again, the unreachable vertices have been omitted, and we have given only the non-zero rewards. Since, from the vertex \((a,q_{0},q_{1})\), player \(\Circle\) always has the possibility to go to the vertex \((b,q_{1})\) and to get the payoff \(1\), it can be shown that every NE and every SPE in that game gives player \(\Box\) the payoff \(1\).
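Definition 7 is mostly bookkeeping, and it may help to see the arena part of the construction written out operationally. Below is a minimal sketch, under the assumption that the machine's transitions are given as two separate sets, those that output a vertex (read at a Leader vertex) and those that do not; only the vertices and edges are built, while the owners and the payoff functions are as in Definition 7, and all names are illustrative.

```python
def product_arena(E, delta_out, delta_silent):
    """Sketch of the arena of the product game (Definition 7).
    E:            set of edges (u, v) of the original game,
    delta_out:    transitions (p, u, q, v) with u a Leader vertex,
    delta_silent: transitions (p, u, q) with u not a Leader vertex."""
    vertices, edges = set(), set()
    for (p, u, q) in delta_silent:
        # Demon resolves the machine's choice: (u, p) -> (u, p, q) ...
        vertices |= {(u, p), (u, p, q)}
        edges.add(((u, p), (u, p, q)))
        # ... and the player controlling u then picks any edge uv of the game.
        for (x, v) in E:
            if x == u:
                vertices.add((v, q))
                edges.add(((u, p, q), (v, q)))
    for (p, u, q, v) in delta_out:
        # Demon resolves the machine's choice, then the machine's output v is played.
        vertices |= {(u, p), (u, p, q), (v, q)}
        edges.add(((u, p), (u, p, q)))
        edges.add(((u, p, q), (v, q)))
    return vertices, edges
```

The product game is then initialised at the pair \((v_{0},q_{0})\), as in Definition 7.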
As we will see now, that means that the strategies compatible with the machine \(\mathcal{M}\) guarantee the payoff \(1\) to player \(\Box\) against Nash-rational or subgame-perfect rational responses, i.e. that \(\mathcal{G}_{\upharpoonright v_{0}},1\), and \(\mathcal{M}\) form a positive instance of the Nash and subgame-perfect rational verification problems. **Theorem 2** (App. B).: _Let \(\rho\in\{\mathit{Nash},\mathit{subgame}\text{-perfect}\}\). Let \(\mathcal{G}_{\upharpoonright v_{0}}\) be a game, let \(\mathcal{M}\) be a Mealy machine for Leader in \(\mathcal{G}\), and let \(t\in\mathbb{Q}\). Then, every \(\rho\)-response \(\bar{\sigma}_{-\mathbb{L}}\) to every strategy \(\sigma_{\mathbb{L}}\in\mathrm{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\) satisfies \(\mu_{\mathbb{L}}((\bar{\sigma}))>t\) if and only if every \(\rho\)-equilibrium \(\bar{\tau}\) in the game \(\mathcal{G}_{\upharpoonright v_{0}}\otimes\mathcal{M}\) satisfies \(\mu^{\prime}_{\mathbb{L}}((\bar{\tau}))>t\)._ Thus, we find the following problem, simpler in its wording and already studied in various settings in the literature: **Problem 4** (\(\rho\)-universal threshold problem in the class \(\mathcal{C}\)).: Given a game \(\mathcal{G}_{\upharpoonright v_{0}}\in\mathcal{C}\), a player \(i\in\Pi\), and a threshold \(t\in\mathbb{Q}\), is every \(\rho\)-equilibrium \(\bar{\sigma}\) in \(\mathcal{G}_{\upharpoonright v_{0}}\) such that \(\mu_{i}(\langle\bar{\sigma}\rangle)>t\)? Moreover, the size of the product game is bounded by a polynomial function of \(\|\mathcal{G}\|\) and \(\|\mathcal{M}\|\); and when the game \(\mathcal{G}\) belongs to a class \(\mathcal{C}\) of games among the five classes defined in Section 2.2, then the product game also belongs to the class \(\mathcal{C}\). Hence the following. **Corollary 2**.: _Let \(\mathcal{C}\) be a class among the classes of parity games, mean-payoff games, quantitative reachability games, energy games, and discounted-sum games. Then, in the class \(\mathcal{C}\), for a given \(\rho\in\{\mathit{Nash},\mathit{subgame}\text{-perfect}\}\), the three following problems are reducible to each other in polynomial time:_ * _the_ \(\rho\)_-universal threshold problem;_ * _the_ \(\rho\)_-rational verification problem;_ * _the deterministic_ \(\rho\)_-rational verification problem._ Figure 6: A product game Proof.: * _The deterministic_ \(\rho\)_-rational verification problem reduces to the_ \(\rho\)_-rational verification problem,_ because a deterministic Mealy machine is in particular a Mealy machine. * _The_ \(\rho\)_-universal threshold problem reduces to the deterministic_ \(\rho\)_-rational verification problem._ Let \(\mathcal{G}_{\uparrow v_{0}}\), \(i\), and \(t\) form an instance of the \(\rho\)-universal threshold problem. We define the game \(\mathcal{G}_{\uparrow v_{0}}^{\prime}\) as equal to the game \(\mathcal{G}_{\uparrow v_{0}}\), where a Leader has been added to the set of players, but controls no vertex. We define the payoff function \(\mu_{\mathbb{L}}=\mu_{i}\). Thus, if \(\mathcal{G}\) belongs to the class \(\mathcal{C}\), so does \(\mathcal{G}^{\prime}\). We define \(\mathcal{M}\) as the one-state deterministic Mealy machine on \(\mathcal{G}^{\prime}\) that never outputs anything.
Then, a strategy profile \(\bar{\sigma}\) in \(\mathcal{G}_{\uparrow v_{0}}^{\prime}\) is an \(\mathbb{L}\)-fixed \(\rho\)-equilibrium, if and only if it is an \(\mathbb{L}\)-fixed \(\rho\)-equilibrium with \(\sigma_{\mathbb{L}}\in\operatorname{Comp}_{\uparrow v_{0}}(\mathcal{M})\), if and only if the strategy profile \(\bar{\sigma}_{-\mathbb{L}}\) is a \(\rho\)-equilibrium in the game \(\mathcal{G}_{\uparrow v_{0}}\). As a consequence, \(\mathcal{G}_{\uparrow v_{0}}\), \(i\), and \(t\) form a positive instance of the \(\rho\)-universal threshold problem, if and only if \(\mathcal{G}_{\uparrow v_{0}}^{\prime}\), \(\mathcal{M}\), and \(t\) form a positive instance of the deterministic \(\rho\)-rational verification problem. Moreover, the latter can be constructed from the former in polynomial time. * _The_ \(\rho\)_-rational verification problem reduces to the_ \(\rho\)_-universal threshold problem._ By Theorem 2, and since the product game \(\mathcal{G}_{\uparrow v_{0}}\otimes\mathcal{M}\) can be constructed from \(\mathcal{G}_{\uparrow v_{0}}\) and \(\mathcal{M}\) in polynomial time. While Corollary 1 states the existence of a reduction in one direction, this corollary says that the three mentioned problems are inter-reducible. Thus, it will enable us to provide upper, but also lower, bounds for the rational verification problems. ## 4 Parity games Let us now apply those general constructions to our first class of games, and the easiest to study: parity games. ### Checking problems By Corollary 1, in parity games, the checking problems reduce to the privilege problem, which consists in finding a play in a given game that satisfies a parity condition and that falsifies another. That can be done in polynomial time. **Theorem 3**.: _In the class of parity games, the Nash-checking and the subgame-perfect checking problems, deterministic or not, can be decided in polynomial time._ Proof.: By Corollary 1, those four problems reduce to the privilege problem. Let \(\mathcal{G}_{\uparrow v_{0}}\) be a parity game in which there exists a play \(\pi\) such that \(\mu_{\mathbb{A}}(\pi)<\mu_{\mathbb{E}}(\pi)\). Then, we have \(\mu_{\mathbb{A}}(\pi)=0\), and \(\mu_{\mathbb{E}}(\pi)=1\). Therefore, there exist two vertices \(u,v\in\operatorname{Inf}(\pi)\), such that \(\kappa_{\mathbb{A}}(u)\) is odd, \(\kappa_{\mathbb{E}}(v)\) is even, and there exists a path from \(u\) to \(v\) and a path from \(v\) to \(u\) that both traverse only vertices \(w\) such that \(\kappa_{\mathbb{A}}(w)\geq\kappa_{\mathbb{A}}(u)\) and \(\kappa_{\mathbb{E}}(w)\geq\kappa_{\mathbb{E}}(v)\). Conversely, if such vertices \(u\) and \(v\) exist, then there exists a play \(\pi\) satisfying \(\mu_{\mathbb{A}}(\pi)<\mu_{\mathbb{E}}(\pi)\). The existence of such vertices can be checked in polynomial time. ### Rational verification As for rational verification problems, they reduce by Corollary 2 to universal threshold problems, which are subproblems of problems already studied in [33], [34], and later in [7]. In a nutshell, those problems belong to the class \(\mathsf{coNP}\), because when there exists an NE or SPE outcome that makes some player \(i\) lose, there exists one that has a simple form, and that can be guessed in polynomial time. The lower bounds can be obtained by a slight adaptation of a reduction from \(\mathsf{coSat}\) that was already presented in [33]. **Theorem 4** (App.
C).: _In the class of parity games, the Nash rational and the subgame-perfect rational verification problems, deterministic or not, are \(\mathsf{coNP}\)-complete._ ## 5 Mean-payoff games While parity objectives constitute a core class in the study of qualitative objectives, mean-payoff objectives occupy a similar place among quantitative objectives. ### Checking problems As in parity games, Corollary 1 enables us to solve the checking problems in polynomial time, since the privilege problem itself reduces to the search for a negative cycle. **Theorem 5**.: _In the class of mean-payoff games, the Nash-checking and the subgame-perfect checking problems, deterministic or not, can be decided in polynomial time._ Proof.: By Corollary 1, those four problems reduce to the privilege problem. Let \(\mathcal{G}_{\upharpoonright v_{0}}\) be a mean-payoff game. If every simple cycle \(c\) accessible from \(v_{0}\) is such that \(\mathrm{MP}_{\mathbb{A}}(c)\geq\mathrm{MP}_{\mathbb{E}}(c)\), then we also have \(\mu_{\mathbb{A}}(\pi)\geq\mu_{\mathbb{E}}(\pi)\) for every play \(\pi\); conversely, if there is a simple cycle \(c\) accessible from \(v_{0}\) such that \(\mathrm{MP}_{\mathbb{A}}(c)<\mathrm{MP}_{\mathbb{E}}(c)\), then there is a play \(\pi\) in \(\mathcal{G}_{\upharpoonright v_{0}}\) such that \(\mu_{\mathbb{A}}(\pi)<\mu_{\mathbb{E}}(\pi)\) -- a play that reaches that cycle and loops there forever. Such a cycle can also be seen as a negative cycle for the reward function \(r=r_{\mathbb{A}}-r_{\mathbb{E}}\). Note that the equality \(\mathrm{MP}_{r}(\pi)=\mathrm{MP}_{\mathbb{A}}(\pi)-\mathrm{MP}_{\mathbb{E}}(\pi)\) does not hold for every play \(\pi\) (because the limit inferior and the additive inverse do not commute in general), but it does when \(\pi\) has the form \(c^{\omega}\). Thus, an algorithm that solves the privilege problem in polynomial time consists in searching for such a negative cycle, using Bellman-Ford's algorithm. ### Rational verification Again, the reduction to universal threshold problems enables us to apply results and methods that already exist in the literature. **Theorem 6** (App. D).: _In the class of mean-payoff games, the Nash rational and the subgame-perfect rational verification problems, deterministic or not, are \(\mathsf{coNP}\)-complete._ ### The temptation of chaos It is worth noting here that the definition we gave of rational verification entails, in the case of mean-payoff games, results that may be considered counter-intuitive. For instance, consider the game of Figure 7, where Leader owns no vertex, and consider the only (vacuous) strategy available for Leader. Does that strategy guarantee a payoff greater than 1? That game and that threshold form a positive instance of subgame-perfect rational verification, because no \(\mathbb{L}\)-fixed SPE exists in that game (see [5]). More generally, the definition we give of rational verification considers that a _good_ strategy for Prover is a strategy such that for every response of the environment that is rational, the generated outcome observes some specification. But a strategy is then good, in that sense, if _no_ rational response of the environment exists, which can occur in mean-payoff games.
### Achaotic rational verification: a definition To avoid such phenomena, we introduce an alternative definition of rational verification, _achaotic rational verification_: a good strategy for Leader will be a strategy that guarantees the specification (here, always a threshold for Leader's payoff) against every response of the environment that is _as rational as possible_. To define that new problem, we need to define quantitative relaxations to the notions of NEs and SPEs. Let \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\) be a game and let \(\boldsymbol{\varepsilon}\geq 0\). The strategy profile \(\bar{\sigma}\) is an _\(\boldsymbol{\varepsilon}\)-NE_ (resp. \(\mathbb{L}\)-_fixed \(\boldsymbol{\varepsilon}\)-NE_) in \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\) if and only if for each player \(i\) (resp. each \(i\neq\mathbb{L}\)) and for every strategy \(\sigma^{\prime}_{i}\), called _deviation of \(\sigma_{i}\)_, we have the inequality \(\mu_{i}\left(\langle\sigma^{\prime}_{i},\bar{\sigma}_{-i}\rangle\right)\leq \mu_{i}\left(\langle\bar{\sigma}\rangle\right)+\boldsymbol{\varepsilon}\). The strategy profile \(\bar{\sigma}\) is an (\(\mathbb{L}\)-_fixed) \(\boldsymbol{\varepsilon}\)-SPE_ in \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\) if and only if for every history \(h\) in \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\) (resp. every history \(h\) compatible with \(\bar{\sigma}_{-\mathbb{L}}\)), the strategy profile \(\bar{\sigma}_{\uparrow h}\) is a (\(\mathbb{L}\)-fixed) \(\boldsymbol{\varepsilon}\)-NE in the subgame \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow h}\). Now, let \(\sigma_{\mathbb{L}}\) be a strategy for Leader in a game \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\), and let \(\boldsymbol{\varepsilon}\geq 0\). An _\(\boldsymbol{\varepsilon}\)-Nash response_ (resp. _\(\boldsymbol{\varepsilon}\)-subgame-perfect response_) to \(\sigma_{\mathbb{L}}\) is a strategy profile \(\bar{\sigma}_{-\mathbb{L}}\) such that the strategy profile \(\bar{\sigma}=(\sigma_{\mathbb{L}},\bar{\sigma}_{-\mathbb{L}})\) is an \(\mathbb{L}\)-fixed \(\boldsymbol{\varepsilon}\)-NE (resp. \(\mathbb{L}\)-fixed \(\boldsymbol{\varepsilon}\)-SPE). The set of \(\boldsymbol{\varepsilon}\)-Nash (resp. \(\boldsymbol{\varepsilon}\)-subgame-perfect, resp. \(\boldsymbol{\varepsilon}\rho\)) responses to \(\sigma_{\mathbb{L}}\) is denoted by \(\boldsymbol{\varepsilon}\text{NR}(\sigma_{\mathbb{L}})\) (resp. \(\boldsymbol{\varepsilon}\text{SPR}(\sigma_{\mathbb{L}})\), resp. \(\boldsymbol{\varepsilon}\rho\text{R}(\sigma_{\mathbb{L}})\)). Note that \(0\)-NEs coincide with NEs, \(0\)-SPEs with SPEs, and so on: those notions are generalizations of the notions we have studied so far. We can now define our new decision problem. **Problem 5** (Achaotic (deterministic) \(\rho\)-rational verification in the class \(\mathcal{C}\)).: Given a game \(\boldsymbol{\mathcal{G}}_{\uparrow\!\!\!\!\uparrow_{0}}\in\mathcal{C}\), a threshold \(t\in\mathbb{Q}\), and a Mealy machine (resp. 
a deterministic Mealy machine) \(\mathcal{M}\) on \(\boldsymbol{\mathcal{G}}\), does there exist \(\boldsymbol{\varepsilon}\geq 0\) satisfying: * \(\boldsymbol{\varepsilon}\rho\text{R}(\sigma_{\mathbb{L}})\neq\emptyset\) for some strategy \(\sigma_{\mathbb{L}}\in\text{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\); * and \(\mu_{\mathbb{L}}(\langle\sigma_{\mathbb{L}},\bar{\sigma}_{-\mathbb{L}}\rangle)>t\) for every strategy \(\sigma_{\mathbb{L}}\in\text{Comp}_{\upharpoonright v_{0}}(\mathcal{M})\), and for every response \(\bar{\sigma}_{-\mathbb{L}}\in\boldsymbol{\varepsilon}\rho\text{R}(\sigma_{\mathbb{L}})\)? Intuitively: is there a level of rationality that is achievable for the environment, and that enforces the specification? Figure 7: The temptation of chaos: an illustration ### Coincidence with rational verification in other settings Among all the problems we are studying here, the temptation of chaos is a phenomenon that affects only one case: subgame-perfect rational verification in mean-payoff games. In all other cases, the rational verification problems are equivalent to their achaotic versions, because Nash responses and subgame-perfect responses are guaranteed to exist. **Proposition 1** (App. E).: _Let \(\mathcal{C}\) be a class of games, among the classes of parity games, quantitative reachability games, energy games, and discounted-sum games._ _Let \(\rho\in\{\text{Nash},\text{subgame-perfect}\}\). Then, the positive instances of the achaotic \(\rho\)-rational verification problem in \(\mathcal{C}\) are exactly the positive instances of the \(\rho\)-rational verification problem. Similarly, the positive instances of the achaotic Nash-rational verification problem in mean-payoff games are exactly the positive instances of the Nash-rational verification problem._ ### Complexity Now, an optimal algorithm for achaotic subgame-perfect rational verification in mean-payoff games requires the following technical lemma: there exists a least \(\varepsilon\) such that \(\varepsilon\)-SPEs exist, and it can be written with a polynomially bounded number of bits. To prove that, we need to use the notion of _negotiation function_, defined in [5]: a function from vertex labellings to vertex labellings whose least \(\varepsilon\)-fixed point (i.e., the least vertex labelling \(\lambda\) that is a fixed point of that function up to \(\varepsilon\)) characterizes \(\varepsilon\)-SPEs. Our result can be obtained by revisiting a proof of [6], which was designed to bound the number of bits required to write that least \(\varepsilon\)-fixed point, for a fixed \(\varepsilon\). **Lemma 2** (App. F).: _There exists a polynomial \(P_{1}\) such that in every mean-payoff game \(\mathcal{G}_{\upharpoonright v_{0}}\), there exists \(\varepsilon_{\min}\) with \(\|\varepsilon_{\min}\|\leq P_{1}(\|\mathcal{G}\|)\) such that \(\varepsilon_{\min}\)-SPEs exist in \(\mathcal{G}_{\upharpoonright v_{0}}\), and \(\varepsilon\)-SPEs, for every \(\varepsilon<\varepsilon_{\min}\), do not._ We are now equipped to prove the following theorem. **Theorem 7** (App.
G).: _In the class of mean-payoff games, the achaotic subgame-perfect rational verification problem, deterministic or not, is \(\mathsf{P}^{\mathsf{NP}}\)-complete._ Proof sketch.: Using Lemma 2 and the same arguments as in the proof of Theorem 2, those two problems are interreducible with the following one: given a game \(\mathcal{G}_{\mathbb{I}\mathbb{V}_{0}}\) and a threshold \(t\in\mathbb{Q}\), does every \(\varepsilon_{\min}\)-SPE \(\bar{\sigma}\) in \(\mathcal{G}_{\mathbb{I}\mathbb{V}_{0}}\) satisfy \(\mu_{\mathbb{L}}(\langle\bar{\sigma}\rangle)>t\)? Let us prove that that problem is \(\mathsf{P}^{\mathsf{NP}}\)-complete. * _Easiness._ By [6], there is an \(\mathsf{NP}\) algorithm deciding, given \(\varepsilon\) and \(\mathcal{G}_{\mathbb{I}\mathbb{V}_{0}}\), whether there is an \(\varepsilon\)-SPE in \(\mathcal{G}_{\mathbb{I}\mathbb{V}_{0}}\), i.e. whether \(\varepsilon\geq\varepsilon_{\min}\). We know, moreover, that \(\varepsilon_{\min}\leq 2\max_{i}\max_{uv}|r_{i}(uv)|\). Using Lemma 2, a dichotomic search can thus compute \(\varepsilon_{\min}\) using polynomially many calls to that algorithm. Then, one last call can decide whether there exists an \(\varepsilon_{\min}\)-SPE \(\bar{\sigma}\) such that \(\mu_{i}(\langle\bar{\sigma}\rangle)\leq t\). * _Hardness._ We proceed by reduction from the following \(\mathsf{P}^{\mathsf{NP}}\)-complete problem: given a Boolean formula \(\varphi\) in conjunctive normal form over the ordered variables \(x_{1},\ldots,x_{n}\), is the lexicographically first valuation \(\nu_{\min}\) satisfying \(\varphi\) such that \(\nu_{\min}(x_{n})=1\)? Let us write \(\varphi=\bigwedge_{j=1}^{p}C_{j}\). We construct a game \(\mathcal{G}_{\mathbb{I}a}\), with a player called _Witness_ and written \(\mathbb{W}\), in which there exists an \(\varepsilon_{\min}\)-SPE \(\bar{\sigma}\) such that \(\mu_{\mathbb{W}}(\langle\bar{\sigma}\rangle)\leq 0\) if and only if \(\varphi\) is satisfiable and \(\nu_{\min}(x_{n})=1\). That game, depicted in Figure 8, has \(2n+p+4\) players: the literal players \(x_{1},\neg x_{1},\ldots,x_{n},\neg x_{n}\); the clause players \(C_{1},\ldots,C_{p}\); the player _Solver_, written \(\mathbb{S}\); the player _Witness_, written \(\mathbb{W}\); the player _Alice_, written \(\mathbb{A}\); and the player _Bob_, written \(\mathbb{B}\). 
It contains \(3n+p+4\) vertices: * the initial vertex \(v_{0}=a\), controlled by Alice; * two vertices \(b\) and \(c\), controlled by Bob; * for each variable \(x_{i}\), a vertex \(?x_{i}\in V_{\mathbb{S}}\), a vertex \(x_{i}\in V_{x_{i}}\), and a vertex \(\neg x_{i}\in V_{\neg x_{i}}\); * for each clause \(C_{j}\), a vertex \(C_{j}\in V_{C_{j}}\); * a sink vertex \(\blacktriangledown\) (drawn three times on Figure 8 for convenience); which are connected by the following edges (unmentioned rewards are equal to \(0\), and we write \(m=2n+p\)): * from the vertex \(a\) to the vertex \(b\) and from the vertex \(b\) to the vertex \(a\), two edges that give Alice the reward \(0\), Bob the reward \(3\), and Witness the reward \(1\); * from \(a\) to \(?x_{1}\) and from \(b\) to \(c\), an edge; * from \(c\) to itself, an edge giving both Alice and Bob the reward \(2\), and giving Witness the reward \(1\); * from each \(?x_{i}\) to \(\neg x_{i}\) and from \(\neg x_{i}\) to \(?x_{i+1}\) (or to \(C_{1}\) if \(i=n\)), an edge giving: * the reward \(2m\) to \(\neg x_{i}\), * the reward \(m\) to every player \(C_{j}\) such that the clause \(C_{j}\) contains the literal \(\neg x_{i}\), * the reward \(2\) to Alice; * and if \(i=n\), the reward \(1\) to Witness; from each \(?x_{i}\) to \(x_{i}\) and from \(x_{i}\) to \(?x_{i+1}\) (or to \(C_{1}\) if \(i=n\)), an edge giving: * the reward \(2m\) to \(x_{i}\), * the reward \(m\) to every player \(C_{j}\) such that the clause \(C_{j}\) contains the literal \(x_{i}\), * and the reward \(2-\frac{m}{2^{i+1}}\) to Alice; * from each \(C_{j}\) to \(C_{j+1}\) (or \(?x_{1}\) if \(j=p\)), an edge giving the reward \(2\) to Alice; * from the sink vertex \(\blacktriangledown\) to itself, an edge giving the reward \(1\) to Alice, the reward \(2\) to each clause player, the reward \(4\) to each literal player, and \(1\) to Witness. This game is based on the classical example of mean-payoff game in which SPEs do not exist, already presented in Section 5.3. In the latter, from the vertex \(a\), Alice can access a simple sink vertex, where Bob and her both get the payoff \(1\). Here, they access instead to a region where the choices of Solver define a valuation of \(x_{1},\ldots,x_{n}\) -- unless one of the literal players chooses to go to the sink vertex \(\blacktriangledown\), which will be a profitable deviation if Solver makes inconsistent choices (one literal and, later, its negation). That valuation \(\nu\) defines Alice's payoff \(\mu_{\mathrm{A}}(\pi)=2-\sum_{i=1}^{n}\frac{\nu(x_{i})}{2^{i}}\), and therefore defines how much deviating and reaching the vertex \(c\) is profitable for her. Consequently, as we show in the complete version of this proof, the valuation \(\nu_{\min}\) is the binary encoding of the quantity \(\varepsilon_{\min}\), and there is an \(\varepsilon_{\min}\)-SPE in which Witness gets the payoff \(0\) or less if and only if \(\nu_{\min}(x_{n})=1\). ## 6 Quantitative reachability Again, Corollaries 1 and 2 enable us to solve the problems we are interested in; readily in the case of checking problems and subgame-perfect rational verification, and with some further work in the case of Nash rational verification. ### Checking problems **Theorem 8**.: _In quantitative reachability games, the Nash-checking and the subgame-perfect checking problems, deterministic or not, can be decided in polynomial time._ Proof.: By Corollary 1, those four problems reduce to the privilege problem. 
In a quantitative reachability game, a play in which Adam's payoff is strictly smaller than Eve's is a play that reaches Eve's target set without traversing Adam's. The existence of such a play can be decided by classical polynomial-time graph search algorithms. ### Subgame-perfect rational verification **Theorem 9**.: _In quantitative reachability games, the subgame-perfect rational verification problem, deterministic or not, is \(\mathsf{PSPACE}\)-complete._ Proof.: It has been proved in [8] that the complement of the universal threshold problem is \(\mathsf{PSPACE}\)-complete. The result follows by Corollary 2. ### Nash rational verification Negative instances of the Nash rational verification problem can be recognized by guessing an NE outcome, and checking it using Lemma 1; hence that problem is \(\mathsf{coNP}\)-easy. The matching lower bound can be established by reduction from the problem \(\mathsf{coSat}\). **Theorem 10** (App. H).: _In quantitative reachability games, the Nash rational verification problem, deterministic or not, is \(\mathsf{coNP}\)-complete._ ## 7 Energy games We now move to classes that were less studied in multiplayer non-zero-sum settings so far: first, energy objectives. ### Checking problems Energy games are the only one of our five classes in which the checking problems cannot all be solved in polynomial time (unless \(\mathsf{P}=\mathsf{NP}\)); the exception is the simplest of them, the deterministic Nash-checking problem. Indeed, only one play must be compared to potential profitable deviations: the outcome deterministically generated by the Mealy machine. **Theorem 11** (App. I).: _In energy games, the deterministic Nash-checking problem can be decided in polynomial time._ In the other cases, either the non-determinism or the need to study subgames entails a \(\mathsf{coNP}\) lower bound, which we can prove by reduction from the problem \(\mathsf{SubsetSum}\). The matching upper bound can be obtained by an algorithm that searches for configurations (a vertex, a state of the memory and an energy vector) from which a profitable deviation exists, and that are accessible from the initial configuration. We can then use the fact that reachability in \(1\)-dimensional vector addition systems with states is \(\mathsf{NP}\)-easy -- see [24]. **Theorem 12** (App. J).: _In energy games, the Nash-checking, the subgame-perfect-checking, and the deterministic subgame-perfect-checking problems are \(\mathsf{coNP}\)-complete._ ### Nash rational verification Rational verification problems are undecidable in this class, as we will show by reduction from the halting problem of two-counter machines -- the reader who is not familiar with two-counter machines may refer to Appendix K. However, Nash rational verification is co-recursively enumerable. **Theorem 13** (App. L).: _In energy games, the Nash rational verification problem, deterministic or not, is undecidable and co-recursively enumerable._ Proof sketch.: We prove here that the Nash universal threshold problem is undecidable and co-recursively enumerable. The theorem will follow by Corollary 2. * _Undecidability._ We show undecidability by reduction from the halting problem of a two-counter machine. Let \(\mathcal{K}\) be a two-counter machine.
We define an energy game \(\mathcal{G}_{\upharpoonright q_{0}^{1}}\) with five players -- players \(\mathcal{C}_{1}^{\top}\), \(\mathcal{C}_{1}^{\bot}\), \(\mathcal{C}_{2}^{\top}\), \(\mathcal{C}_{2}^{\bot}\), and \(\mathbb{W}\), called _Witness_ -- by assembling the gadgets presented in Figures 9, 10, 11, and 12 -- the rewards that are not presented are equal to \(0\), and the players controlling relevant vertices are written in blue. For each state of \(\mathcal{K}\), we define from one to two vertices, plus the additional vertex \(\blacktriangle\). Then, a play in \(\mathcal{G}_{\upharpoonright q_{0}^{1}}\) that does not reach the vertex \(\blacktriangle\) simulates a sequence of transitions of \(\mathcal{K}\), that can be a valid run or not: at each step, the counter \(\boldsymbol{C}_{i}\) is captured by the energy level of player \(\boldsymbol{C}_{i}^{\top}\), always equal to the energy level of player \(\boldsymbol{C}_{i}^{\bot}\). For each counter \(\boldsymbol{C}_{i}\), the player \(\boldsymbol{C}_{i}^{\bot}\) will have a profitable deviation if that play fakes a test to \(0\), by going to the vertex \(\blacktriangle\); and the player \(\boldsymbol{C}_{i}^{\top}\) will lose, and therefore have a profitable deviation by staying in \(q_{0}^{i}\), if it fakes a positive test. Thus, as shown in the complete version of this proof, every NE outcome in the game \(\mathcal{G}_{\upharpoonright q_{0}^{1}}\) is won by Witness if and only if the machine \(\mathcal{K}\) does not terminate. As a consequence, the halting problem of two-counter machines reduces to the Nash universal threshold problem in energy games, which is therefore undecidable. * _Co-recursive enumerability._ As shown in the complete version of this proof, in an energy game \(\mathcal{G}_{\upharpoonright v_{0}}\), if there exists an NE that makes some player \(i\) lose, then there exists a finite-memory one. Thus, a semi-algorithm that recognizes the negative instances of the universal threshold problem consists in enumerating the deterministic multiplayer Mealy machines on \(\mathcal{G}_{\upharpoonright v_{0}}\), and, for each of them, checking (by diagonalization): * whether the only strategy profile compatible with it is an NE: that is the deterministic Nash-checking problem, decidable (in polynomial time) by Theorem 11; * whether that strategy profile makes player \(i\) lose: that is recursively enumerable, by constructing step by step its outcome and computing the energy levels on the fly. We have a negative instance of the universal threshold problem if and only if at least one Mealy machine satisfies those two conditions. The Nash universal threshold problem is therefore co-recursively enumerable. ### Subgame-perfect rational verification In the subgame-perfect setting, the previous construction could also prove undecidability. But we choose to present a refinement of it, which proves a stronger result. **Theorem 14** (App. M).: _In energy games, the subgame-perfect rational verification problem, deterministic or not, is undecidable, even when Leader plays against only two players._ Again, the proof shows that, in particular, that problem is not recursively enumerable in energy games. It might still be the case that it is co-recursively enumerable. If finite memory were sufficient for an SPE to make any player \(i\) lose, when that is possible, as in the case of NEs, then it would again imply co-recursive enumerability. Unfortunately, one cannot follow this approach, because such a statement is false.
**Proposition 2** (App. N).: _In the energy game presented in Figure 13, there exists an SPE that makes player \(\Box\) lose, but no finite-memory SPE can achieve that result._ ## 8 Discounted-sum games Finally, let us end with discounted-sum objectives. While Corollary 1 suffices to provide an efficient algorithm for checking problems, we will not be able to state more than recursive enumerability for rational verification. ### Checking problems As in mean-payoff games, the privilege problem, and therefore the checking problems, can be solved in polynomial time by a Bellman-Ford-like algorithm. **Theorem 15** (App. O).: _In discounted-sum games, the Nash-checking and the subgame-perfect checking problems, deterministic or not, can be decided in polynomial time._ Figure 12: Gadget for tests of counter \(C\) Figure 13: A game where infinite memory is necessary to make player \(\Box\) lose ### Rational verification First, let us define the following decision problem. **Problem 6** (Target discounted-sum problem).: Given four quantities \(\lambda,a,b,t\in\mathbb{Q}\) with \(0<\lambda<1\), is there a sequence \((u_{n})_{n\in\mathbb{N}}\in\{a,b\}^{\omega}\) such that \(\sum_{n\in\mathbb{N}}u_{n}\lambda^{n}=t\)? Although it is quite a natural problem that appears in many different fields, the target discounted-sum problem -- or TDS problem for short -- turns out to be surprisingly hard to solve, and its decidability status is still open. The interested reader may refer to [2] for more details. The following theorem shows that rational verification problems are at least as difficult. **Theorem 16**.: _The TDS problem reduces to the complements of the (deterministic) Nash rational and subgame-perfect rational verification problems in discounted-sum games._ Proof.: We present here a reduction to the complements of the Nash universal and subgame-perfect universal threshold problems; the existence of reductions to the complements of rational verification problems follows by Corollary 2. Let \(a,b,t\in\mathbb{Q}\), let \(\lambda\in\mathbb{Q}\cap(0,1)\), and let \(\mathcal{G}_{\upharpoonright v_{0}}\) be the discounted-sum game of Figure 14, with discount factor \(\lambda\). In that game, there exists an NE \(\bar{\sigma}\) with \(\mu_{\circlearrowleft}(\langle\bar{\sigma}\rangle)<0\), if and only if there exists an SPE \(\bar{\sigma}\) with \(\mu_{\circlearrowleft}(\langle\bar{\sigma}\rangle)<0\), if and only if \(a,b,t\), and \(\lambda\) form a positive instance of the TDS problem. Indeed, if such an NE or SPE exists, it necessarily reaches the vertex \(a\). But then, player \(\Box\) must get at least the payoff \(\mu_{\Box}(v_{0}v_{1}^{\omega})=t\lambda^{2}\), and player \(\Diamond\) the payoff \(\mu_{\circlearrowleft}(v_{0}v_{2}v_{3}^{\omega})=-t\lambda^{2}\), otherwise they would have a profitable deviation. If such a play exists, then we have a positive instance of the TDS problem. Conversely, from a positive instance of the TDS problem, one can construct a play from \(v_{0}\) in which player \(\Diamond\) gets the payoff \(\frac{\lambda^{2}}{1-\lambda}\) and both \(\Box\) and \(\Diamond\) get the payoff \(t\lambda^{2}\), and none of them has a profitable deviation in any subgame.
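To get a feel for how delicate Problem 6 is, here is a small search sketch: it explores prefixes \(u_{0}\ldots u_{n-1}\) up to a cut-off depth, prunes a branch as soon as the remaining target leaves the interval of values that the tail can still produce, and recognises exactly the witnesses whose tail is eventually constant. It can refute some instances and confirm others, but it is not a decision procedure, consistently with the open status of the TDS problem; all names and the use of exact rationals are illustrative assumptions.

```python
from fractions import Fraction

def tds_search(a, b, lam, t, depth):
    """Sketch for the TDS problem: look for (u_n) in {a, b}^omega with
    sum_n u_n * lam**n == t.  Returns a prefix (to be completed by
    repeating one letter forever) when such a witness is found, "no"
    when the instance is refuted, and "maybe" at the cut-off depth."""
    a, b, lam, t = map(Fraction, (a, b, lam, t))
    lo, hi = min(a, b), max(a, b)

    def rec(target, n, prefix):
        # Interval of values reachable by the tail sum_{k >= n} u_k * lam**k.
        tail_lo = lo * lam ** n / (1 - lam)
        tail_hi = hi * lam ** n / (1 - lam)
        if not (tail_lo <= target <= tail_hi):
            return "no"                   # no tail can reach the target
        if target == tail_lo or target == tail_hi:
            return prefix                 # a constant tail finishes the job
        if n == depth:
            return "maybe"                # inconclusive at the cut-off
        answers = [rec(target - u * lam ** n, n + 1, prefix + [u]) for u in (a, b)]
        for ans in answers:
            if isinstance(ans, list):
                return ans                # a witness was found deeper down
        return "maybe" if "maybe" in answers else "no"

    return rec(t, 0, [])
```

For instance, `tds_search(0, 2, Fraction(1, 2), 1, 20)` would report a witness, while `tds_search(0, 2, Fraction(1, 3), Fraction(3, 2), 20)` would refute that instance.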
The previous theorem suggests that finding algorithms solving those problems is a very ambitious objective, since it would result in an algorithm solving the TDS problem, whose decidability is a long-standing open question. However, in the sequel, we will show that, like the TDS problem, the rational verification problems are recursively enumerable. The key idea is the following: a property of discounted-sum objectives is that when a play gives some player a payoff that is strictly smaller than the payoff they get in another play, this can already be seen on finite prefixes of those two plays. Therefore, although strategy profiles are in general infinite objects that exist in uncountable number, profitable deviations can be found by analyzing their behaviors on a finite (but unbounded) number of histories. Figure 14: A game constructed from an instance of TDS **Theorem 17** (App. P).: _In discounted-sum games, the Nash rational and the subgame-perfect rational verification problems, deterministic or not, are recursively enumerable._
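The finite-prefix property invoked above can be made quantitative with a standard tail bound (a brief illustration, writing the discounted sum, up to the paper's normalisation, as \(\sum_{n}\lambda^{n}r_{i}(\pi_{n}\pi_{n+1})\) and \(R\) for the largest absolute reward): \[\Big|\sum_{n\geq N}\lambda^{n}\,r_{i}(\pi_{n}\pi_{n+1})\Big|\leq\frac{R\,\lambda^{N}}{1-\lambda},\] so whenever \(\mu_{i}(\pi)<\mu_{i}(\pi^{\prime})\), any \(N\) large enough that \(2R\lambda^{N}/(1-\lambda)<\mu_{i}(\pi^{\prime})-\mu_{i}(\pi)\) is such that the partial sums over the length-\(N\) prefixes of the two plays already certify the strict inequality.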
2307.10252
A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks
Cyber threat attribution is the process of identifying the actor of an attack incident in cyberspace. An accurate and timely threat attribution plays an important role in deterring future attacks by applying appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered by honeypot deployments, intrusion detection systems, firewalls, and via trace-back procedures is still the preferred method of security analysts for cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC). They represent Tactics, Techniques, Procedures (TTP), and software tools used by the adversaries in their campaigns. The adversaries rarely re-use them. They can also be manipulated, resulting in false and unfair attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, there are two problems that need to be addressed. The first problem is that in recent research works, the ineffectiveness of low-level IOC for cyber threat attribution has been discussed intuitively. An empirical evaluation for the measure of the effectiveness of low-level IOC based on a real-world dataset is missing. The second problem is that the available dataset for high-level IOC has a single instance for each predictive class label that cannot be used directly for training machine learning models. To address these problems in this research work, we empirically evaluate the effectiveness of low-level IOC based on a real-world dataset that is specifically built for comparative analysis with high-level IOC. The experimental results show that the high-level IOC trained models effectively attribute cyberattacks with an accuracy of 95% as compared to the low-level IOC trained models where accuracy is 40%.
Umara Noor, Sawera Shahid, Rimsha Kanwal, Zahid Rashid
2023-07-17T08:58:39Z
http://arxiv.org/abs/2307.10252v1
A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks ###### Abstract Cyber threat attribution is the process of identifying the actor of an attack incident in cyberspace. An accurate and timely threat attribution plays an important role in deterring future attacks by applying appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered by honeypot deployments, intrusion detection systems, firewalls, and via trace-back procedures is still the preferred method of security analysts for cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC). They represent Tactics, Techniques, Procedures (TTP), and software tools used by the adversaries in their campaigns. The adversaries rarely re-use them. They can also be manipulated, resulting in false and unfair attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, there are two problems that need to be addressed. The first problem is that in recent research works, the ineffectiveness of low-level IOC for cyber threat attribution has been discussed intuitively. An empirical evaluation for the measure of the effectiveness of low-level IOC based on a real-world dataset is missing. The second problem is that the available dataset for high-level IOC has a single instance for each predictive class label that cannot be used directly for training machine learning models. To address these problems in this research work, we empirically evaluate the effectiveness of low-level IOC based on a real-world dataset that is specifically built for comparative analysis with high-level IOC. The experimental results show that the high-level IOC trained models effectively attribute cyberattacks with an accuracy of 95% as compared to the low-level IOC trained models where accuracy is 40%. Cyber Threat Attribution, Threat Actor, Indicators of Compromise (IOC), Tactics Techniques and Procedures (TTP), Machine learning models, Cyber Threat Intelligence (CTI) ## 1 Introduction Cyber threat attribution is determining the identity or location of the adversary or the adversary's intermediary. It can be used in forensic inquiries by enterprises or in indictments by law enforcement agencies. This results in slowing down the pace of cyber-attacks by deterring future attempts, as people will be less inclined towards carrying out a cyber-crime if they know they will be caught. Governments and organizations require cyber threat attribution as a measure of proactive defense by identifying the adversary in the planning stage. For instance, the Cybersecurity Information Sharing Act (CISA) was enacted into law by the United States of America (USA) Congress in 2015 [1]. According to this rule, firms that have experienced cyber data leakage must provide information on threats to all parties, particularly their clients [2]. The attribution of cyber-attacks has changed throughout time in terms of perception and behavior. Almost a decade ago, attribution used to be focused on locating the perpetrator or middleman who launched a Distributed Denial of Service (DDOS) attack. This was designed for two purposes: first, to detect the culprit, and second, to prevent suspicious information from reaching its destination through IP trace-back. As a result, cyber-attack attribution worked as tracing-back and pinpointing the source of IP packets using routing tables. 
Trace-back procedures are discussed in depth in [3][4]. The possibility of IP address spoofing and anonymization constrains the effectiveness of source trace-back systems in identifying cyber adversaries. Reflection hosts, modest Time To Live (TTL) values, use of botnets, and increasing attack duration can all be used by a clever attacker to make identification difficult. Over the years, new forms of cyber-attacks known as Advanced Persistent Threats (APTs) have arisen, which include multi-stage, multi-attack-vector campaigns aimed at financial benefit, spying, and data theft against corporations, governments, and the military. APTs are well-planned attacks that use advanced means to break into an infrastructure, which is then remotely administered via planted back doors. To detect and prevent attacks in a timely manner, the security community is paying attention to recording and conveying detailed discussions of incidents in Cyber Threat Intelligence (CTI) reports. Several standard languages have been proposed to represent CTI data. A brief introduction to the CTI tools and standards can be found in [5]. Among all such standards, Structured Threat Information Expression (STIX) is the most comprehensive and widely adopted standard [6]. STIX stores the specifics of cyber-attacks as multi-level IOC characterized by observable, TTP, indicator, and exploit target components. Based on the CTI, current practices of cyber threat detection mostly rely on identifying the perpetrator based on the IOC associated with that threat actor. There is a six-level hierarchy of IOC, as discussed in [7]. The low-level indicators in the hierarchy represent file names, their hashes, IP addresses, and domain names. These indicators are simple and easy to detect and mitigate, but they have a short life span that makes them less likely to be reused. Currently, low-level indicators are used for cyber threat attribution. The low-level IOC are transformed into firewall rules by network administrators to block fraudulent traffic. As discussed previously, this attribution scheme based on low-level IOC has the same flaw as the attribution of a DDOS attack. The IP addresses can be easily spoofed or anonymized by the attacker, which leads to inaccurate and biased attribution. Similarly, the malware hashes and domain names can be easily changed. Thus, the problem of cyber threat attribution was redefined and evaluated for effectiveness according to the current demands of cyberinfrastructure and security information measurement and management architecture [8]. However, there are two main problems with the existing approach that need to be addressed. The first problem is that in recent research works [8], the ineffectiveness of low-level IOC for cyber threat attribution has been discussed intuitively. An empirical evaluation of the effectiveness of low-level IOC based on a real-world dataset is missing. The second problem is that the available dataset for high-level IOC has a single instance for each predictive class label, which cannot be used directly for training machine learning models. Based on the above research problems, the objective of this research work is to empirically evaluate and compare the effectiveness of both kinds of IOC based on real-world datasets. To address the first problem, we empirically evaluate the effectiveness of low-level IOC based on a real-world dataset that is specifically built for comparative analysis with high-level IOC.
To address the second problem, an updated version of the high-level IOC dataset is built that can be used to train machine learning models and predict the culprit behind a cyber attack. Both datasets are provided to the research community for further research and exploration. The datasets are used to train different machine learning algorithms, such as _Random Forest (RF), Naive Bayes (NB), Naive Bayes (Kernel), Decision Tree (DT), K-Nearest Neighbors (KNN), Artificial Neural Network (ANN), Deep Learning, Gradient Boosted Trees, Generalized Linear Model, and Ensemble Learning models_. The results show that high-level IOC-trained models effectively attribute cyber attacks (accuracy: 95%) as compared to low-level IOC (accuracy: 40%). Also, ANN is more effective in cyber attack attribution as compared to the other algorithms. The contributions of this research work are: 1. The low-level IOC training dataset required to perform the experiment is built from publicly available CTI reports, as there is no dataset available for low-level IOC with respect to attribution. The dataset associates cyber adversaries with their low-level IOC used in multiple instances. 2. A multi-instance high-level IOC training dataset is built from the publicly available adversaries' attack pattern taxonomy provided by MITRE [9]. The taxonomy is built from cyber-attack incidents described in security news articles and technical reports by security service providers. They report the goals, motives, and capabilities of the attacker and sometimes associate them with the state sponsoring them. 3. Different statistical techniques are applied to both datasets to analyze their characteristics. 4. A comparative analysis of the effectiveness of machine learning models for both datasets is performed via empirical evaluation, which was only done intuitively in previous literature. 5. The multi-instance high-level IOC dataset is used to attribute a recent cyber data breach incident on the Red Cross for which the cyber threat actor is not known. The paper is organized as follows. In the second section, related work of the research domain in the context of cyber threat attribution and machine learning-based malware prediction is discussed. In the third section, the research methodology of cyber threat attribution is given. In the fourth section, the results of the experiment are given and discussed thoroughly. In the fifth section, a case study is presented that attributes a recent data breach incident based on high-level IOC. In the sixth section, we highlight the factors limiting the scalability and feasibility of the proposed approach and suggest technological improvements to overcome them. Finally, in the seventh section, we conclude our proposed research work and provide a future plan. ## 2 Related Work Cyber threat attribution connected with high-level adversary attack patterns revealed in CTI documents is a relatively new study subject that cannot be readily compared with and referenced against existing threat attribution methodologies. However, we review prior work linked to the challenge of attribution and the use of low-level IOCs for attack and malware detection in order to clarify the notion and interpretation of cyber threat attribution. In order to support the proposed approach, the significance of machine learning approaches is mentioned with respect to malware and intrusion detection systems. To prevent DDOS attacks, source trace-back techniques are used which attribute cyber-attacks based on IP addresses [3].
Identity spoofing, short TTLs, reflector hosts, and botnets as stepping stones are employed as features that make attribution challenging. Hunker et al. [4] suggest using information about an attack in order to attribute it. The organizations must share vulnerabilities, incident details, and new mechanisms for attribution. A set of actors (entities interested in attribution), the attributed objects, metrics to determine the degree of confidence in the attribution results and the entity providing attribution, an acceptable attribution policy, and the need-to-know privacy factors in attribution are among the attribution requirements highlighted [10]. Clark et al. [11] examine many types of attributions in the context of various attacks. The IP address is the primary observable in these cases. According to the conclusions of the study, IP trace-back is only effective in the event of a DoS/DDoS attack to stop the attack traffic deluge. The OpenIOC framework reports low-level IOCs based on malware forensic analysis [22]. The research states that the low-level IOCs are firmly associated with the malware composition, i.e., hashes of binaries. They can be easily altered by polymorphism and metamorphism approaches. A detailed analysis of state-sponsored cyber criminals is reported by Kenneth et al. [23]. According to the analysis, cyber criminals are globally classified into regions: Asia Pacific, Russia/Eastern Europe, the Middle East, and the United States. It states that ethnicity and culture have a high impact on the behaviors and actions of cyber attackers. However, the analysis does not describe the attribution aspect experimentally. The role of machine learning in detecting intrusions and malware analysis is discussed in several research works. A comparative analysis of machine learning models to detect malware in the Android operating system is given in [24]. A blend of supervised and unsupervised machine-learning techniques for malware analysis is proposed in [25]. An artificial neural network is employed as an offline IDS to analyze cyber-attacks in Internet of Things (IoT) networks [26]. Deep neural networks are used to detect malicious binaries by Saxe et al. [27], Saied et al. [28], and Kang et al. [29]. Noor et al. [8] proposed a new model to identify cyber threat actors' attack patterns automatically. They used a Latent Semantic Indexing (LSI) search system to create their dataset of high-level IOCs from ATT&CK MITRE and a corpus of intelligence reports. The intelligence reports covered events from May 2012 to February 2018, and the data of 36 cyber threat actors is used. Also, they used machine learning algorithms to train the model. The accuracy of one model is 94%. However, LSI is not effective for cyber threat attribution, and the chances of false positives are high. Also, no comparison with low-level IOCs is provided using empirical evaluation; they claimed intuitively that low-level IOCs cannot identify cyber threat actors. HaddadPajouh et al. [31] proposed a model using fuzzy pattern trees, multi-modal fuzzy classification, and fuzzy C-means partitioning to effectively address cyber threat attribution. Basically, their research is about malware attribution. The accuracy of this proposed model is 95.2%. The dataset used in the research consists of malware payloads of cyber threat actors. However, they considered only five cyber threat actors' data, while 129 cyber threat actors' data is available on the internet. Arun Warikoo [32] provides only a conceptual idea for cyber threat attribution.
The aim of this research is to help analysts within an organization effectively attribute a cyber threat in the event of an attack. The conceptual triangle model is made up of three indicators: sector, tools, and tactics, techniques, and procedures (TTPs). However, this model only provides a conceptual idea, and no empirical evaluation is performed on a dataset. Naveen et al. [33] proposed a model for cyber threat attribution using Word2Vec (SIMVER) and neural networks. The primary objective is to use cyber threat intelligence reports with minor preprocessing, perform effective attribution, and improve accuracy. The dataset used in the research consisted of 12 APTs and was collected from various intelligence reports. Originally, the dataset was published in [34]. This proposed model achieves 86.5% accuracy for cyber threat attribution. However, it only uses the data of 12 APTs, which is a small dataset. They have used a domain-specific word embedding technique, but it does not explicitly extract attack patterns from CTI documents, and the themes used for attack patterns are not defined. Sentuna et al. [35] proposed a model for cyber threat attribution using naive Bayes posterior probability. The objective of this research is to enhance the processing time and prediction accuracy against attack patterns of cyber threat actors. The accuracy of this proposed model is 95% and the processing time is 0.021%. However, they only used the data of 10 cyber threat actors, which is a small dataset. Also, noisy data in the case of lost and poisoned attack patterns is not considered in the dataset. Due to higher detection accuracy, real-time automated response, and low computational resources, we also employ machine learning algorithms for solving the problem of accurate cyber-attack attribution. ## 3 Research Methodology In this section, the methodology for the proposed research work is discussed. There are three steps: 1) data collection and dataset preparation, 2) statistical analysis of the dataset, and 3) cyber threat attribution. In the first subsection, the details of data collection for both kinds of IOCs are discussed, the sources from which the data is collected are mentioned, and the steps for dataset preparation are described; the features and predictive classes of both kinds of IOCs are elaborated using excerpts of the dataset. In the second subsection, the statistical analysis of the dataset is performed to reveal its intrinsic properties. In the third subsection, a brief review of the machine learning algorithms used for cyber threat attribution with both datasets is given. ### 3.1 Data Collection and Dataset Preparation To empirically evaluate the effectiveness of low-level IOC for cyber threat attribution, we could not find a compatible and ready-to-use dataset. Thus, we built the dataset of low-level IOC from CTI documents. The data for low-level IOC was collected from textual, publicly available, and comprehensive CTI documents published by notable IT and software enterprises providing security solutions, such as Kaspersky [36, 37, 38, 39], Cylance [40, 41], Crowdstrike [42, 43], Novetta [44], Palo Alto Networks [45], Forcepoint [46], ThreatConnect [47], the Cymmetria IT security company [48], the F-Secure IT security company [49], The Citizen Lab [50], the Symantec Software Company [51], the FireEye cyber security company [52, 53], and IBM X-Force Exchange [54]. A sample of a reconstructed CTI document about a cyber threat actor known as Deep Panda is shown in figure 1 [55].
The BlackBerry Cylance Threat Research Team are the authors of the document. The title of the document describes the cyber threat incident associated with Deep Panda. It can be seen that Deep Panda is also termed Shell Crew by the security community. Along with the textual description, low-level IOCs are also present in the document in the form of file and malware hashes, IP addresses, and command and control server domains. Our goal is to collect, from such textual CTI documents, the low-level IOCs associated with a particular cyber threat actor that are used across their multiple campaigns. Using this data, the effectiveness of low-level IOC in cyber threat attribution is determined via empirical evaluation. For this purpose, CTI documents related to 16 cyber threat actors were collected. A brief description of the cyber threat actors is given in table 1. It can be seen that the cyber threat actors belong to different ethnicities. Their goals and motives distinguish them from one another, e.g., Carbanak has specifically targeted financial institutions and Poseidon is famous for blackmailing organizations. Also, the high-level IOCs of the cyber threat actors are different. Figure 1: Textual CTI document of "Deep Panda" Cyber Threat Actor After collecting textual documents from the publicly available sources mentioned in the previous section, the low-level IOCs are manually extracted from them. The reason for manual extraction is the careful construction of the dataset without any discrepancies. In table 2, an excerpt of the low-level IOC dataset is shown. Each instance of the table represents a cyber threat actor and the associated malicious file hashes, IP addresses, and domains. The actual statistics of low-level IOCs for each cyber threat actor are given in section 3.2. \begin{table} \begin{tabular}{|p{113.8pt}|p{227.6pt}|} \hline \multicolumn{1}{|c|}{**Cyber Threat Actor**} & \multicolumn{1}{c|}{**Description**} \\ \hline Naikon [47] & Naikon is a suspected Chinese cyber threat group. It has been active in Southeast Asia since at least 2010. The main motive of this cyber threat group is to target government, military, and civic groups. \\ \hline Deep Panda [42] & Deep Panda is a suspected Chinese threat group that has been linked to the government, defense, finance, and telecommunications sectors. Deep Panda is blamed for the hacking of Anthem, a healthcare organization. Shell Crew, WebMasters, KungFu Kittens, and PinkPanther are some of the other names. Based on attribution of both group names to the Anthem intrusion, Deep Panda also appears to be Black Vine. \\ \hline Dust Storm [41] & Dust Storm is a cyber-threat actor that has attacked a variety of businesses in Japan, South Korea, the United States, Europe, and Southeast Asia. \\ \hline Suckfly [51] & Suckfly is a threat gang in China that has been active since at least 2014. \\ \hline Carbanak [36, 38] & Carbanak has been linked to distinct organizations such as Cobalt Group and FIN7, both of which have employed Carbanak malware. Carbanak is a cybercriminal enterprise that has been targeting financial institutions with Carbanak malware since at least 2013. \\ \hline Sandworm Team [49] & Sandworm Team is a destructive threat organization assigned to Russian military unit 74455 of the General Staff Main Intelligence Directorate (GRU) Main Center for Special Technologies (GTsST). Since at least 2009, this group has been active. \\ \hline Lazarus Group [44] & The Lazarus Group is a North Korean state-sponsored cyber threat group. \\ \hline Cleaver [40] & Cleaver is a threat group that has been linked to Iranian entities and is responsible for the Operation Cleaver activity. Cleaver appears to be tied to Threat Group 2889 (TG-2889) based on strong circumstantial evidence. \\ \hline Dark Hotel [37] & Dark Hotel is a suspected South Korean threat group that has targeted victims predominantly in East Asia. The main goal of this cyber threat group is to perform espionage operations and spear phishing campaigns by using peer-to-peer and file-sharing networks. \\ \hline Poseidon Group [39] & The Poseidon Group is a Portuguese-speaking threat group that has been active since 2005. The Poseidon Group has a history of blackmailing victims' firms into hiring the Poseidon Group as a security agency using information stolen from them. \\ \hline APT30 [53] & APT30 is a cyber-threat outfit that has been linked to the Chinese government. While Naikon and APT30 have significant similarities, they do not appear to be identical. \\ \hline Stealth Falcon [50] & Stealth Falcon is a threat group that has been active since at least 2012; circumstantial evidence shows a link between this organization and the government of the United Arab Emirates (UAE). \\ \hline GCMN [38] & GCMN is a threat group that primarily targets banks in order to transfer funds to e-currency providers. \\ \hline APT 28 [43] & APT28 is a threat group linked to Russia's GRU military intelligence agency; this group has been active since at least 2004. APT28 allegedly hacked the Hillary Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee in 2016. \\ \hline Patchwork [48] & Patchwork is a cyberespionage gang that was first discovered in December 2015; circumstantial evidence implies it is pro-Indian or Indian. Patchwork has been seen focusing on industries that deal with diplomacy and government institutions. \\ \hline \end{tabular} \end{table} Table 1: A brief description of cyber threat actors Table 2: An excerpt of the low-level IOC dataset High-level IOCs are extracted from ATT&CK Mitre [9]. It is a free repository of adversaries' tactics and techniques based on real-world security incidents. Tactics and techniques are termed high-level IOC. They are taxonomized according to the post-compromise stages of the cyber kill chain model [56]. The cyber kill chain describes how an adversary or attacker launches an attack on a specific organization. The cyber kill chain has seven phases, i.e., reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives. In the reconnaissance phase, an adversary finds a target and investigates vulnerabilities and weaknesses that can be exploited within the network. The adversary may collect login credentials or other information, such as email addresses, user IDs, physical locations, software applications, and operating system characteristics. In the second phase, the adversary designs an attack vector that can exploit a known weakness, such as remote access malware, ransomware, a virus, or a worm. The adversary may also set up back doors at this phase so that they can continue to access the system even if their original point of entry is recognized and closed by network administrators. The adversary starts the intrusion attempt in the delivery step. The particular actions they take will be determined by the type of attack they plan to launch.
The attacker may send email attachments or a malicious link to encourage users to participate in the plan. The malicious code is executed within the victim's system during the exploitation phase. On the victim's computer, malware or another attack vector will be installed in the fifth phase. In the command and control phase, the attacker can use the malware to remotely control a device or identity within the target network. In the last phase, the attacker takes steps to accomplish their objectives, which may involve destruction, encryption, and data theft. The ATT&CK repository is constantly being updated via manual analysis of publicly available CTI documents. Along with the high-level IOC, the repository also provides comprehensive details of the cyber threat actors examined by security analysts globally. An interesting feature of ATT&CK is that it has exclusively identified the high-level IOC of the cyber threat actors, which makes it a suitable source for cyber threat attribution. At the time of writing, ATT&CK has archived 567 techniques and 637 software tools related to 129 cyber threat actors. In [8] and [57], older versions of ATT&CK are used. Each technique is represented as a text document with a unique identification that describes it along with the mechanisms for its detection and mitigation. The technique document also identifies the cyber threat actors who have used it in their campaigns. Currently, the resources provided by ATT&CK for the purpose of education and research do not include a ready-to-use high-level IOC dataset. There are two challenges that need to be addressed. The first challenge is that the data about techniques and cyber threat actors is spread across separate web pages that are connected by hyperlinks. The machine-readable ATT&CK data is accessible in the form of STIX [58]. The STIX version of ATT&CK currently available on the website does not specify the cyber threat actors in the technique documents available on GitHub. This important connection between attack techniques and cyber threat actors is also missing in the Excel spreadsheets provided by ATT&CK [59]. The first challenge is addressed by constructing the high-level IOC dataset manually from the ATT&CK website. An excerpt of the constructed high-level IOC dataset is given in table 3. The cyber threat actor is the class to be predicted. 'Drive-by Compromise' and 'File and Directory Discovery' represent the high-level IOC used by the cyber threat actor. Their IDs are 'T1189' and 'T1083'. It is mentioned earlier that high-level IOC are also termed TTP. 'China Chopper' and 'Bandook' represent the software tools with 'S0020' and 'S0234' IDs. A table entry of '1' shows that the cyber threat actor has used this technique in its campaigns, and '0' shows that it is not used by the adversary. The second challenge is that the constructed ATT&CK dataset is a single-instance dataset. Each cyber threat actor's high-level IOC are specified as a single instance or record in the dataset, which cannot be used to train machine learning models. To address this challenge, we synthesized the high-level IOC dataset to include multiple instances of a cyber threat actor. The justification behind the synthesis process is that a cyber threat actor does not use all the high-level IOC in a cyber threat incident. In other words, it is not possible to detect all the high-level IOC of a cyber threat actor from a single incident.
There is a possibility that some of the high-level IOC will be missed by the security analyst. Also, there is a possibility that the security analyst will detect some random high-level IOCs that are not part of the particular cyber threat and might belong to some other cyber threat. Such stray high-level IOCs can poison the actual detection process. These two possibilities are discussed in [57] as lost and poisoned TTP scenarios. Based on this fact, we have synthesized cyber threat actor's high-level IOC instances by adding noise to the original ATT&CK dataset. The noise represents the missed and poisoned high-level IOCs of the cyber threat actors. For each cyber threat actor, three more instances are added in the original dataset having 10%, 20%, and 30% noise. There were 129 cyber threat actors' instances in the original ATT&CK dataset. The synthesized dataset has 516 cyber threat actors' instances. The low-level and high-level IOC datasets used in this research work are provided for further use and analysis on github [60]. ### 3.2 Statistical Analysis of Dataset In this section, the statistical analysis of both low-level and high-level IOC datasets is discussed. The statistical analysis helps determine if the data distribution is normal or not. The reasons behind a specific state of a data distribution are mentioned. The statistics of low-level IOC are shown in table 4. The summary of the data distribution spread of file hashes, IP addresses, and malicious domains is shown in figure 2. It can be seen that the data of all three variables is not normally distributed. Mean and standard deviation are used for normally distributed data while median, and inter quartile range are used for non-normal distributions because they are not effected by outliers [61; 62]. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Cyber Threat Actor** & **File Hashes** & **IP Addresses** & **Malicious Domains** \\ \hline Naikon & 22 & 0 & 52 \\ \hline Deep Panda & 31 & 3 & 2 \\ \hline Dust Storm & 54 & 0 & 61 \\ \hline Suckfly & 18 & 1 & 7 \\ \hline Carbanak & 37 & 81 & 35 \\ \hline Sandworm & 73 & 13 & 0 \\ \hline Lazarus & 76 & 0 & 4 \\ \hline \end{tabular} \end{table} Table 4: Statistics of low-level IOC \begin{table} \begin{tabular}{|l|l|c|c|c|c|c|c|} \hline **Sr.** & & & & & & & \\ **Sr.** & **Cyber Threat Actor** & **Drive-by** & **File and Directory** &.... & **China Chopper** & **Bandook** &.... \\ **No.** & & **Compromise (T1189)** & **Discovery (T1083)** & **(S0020)** & **(S0234)** & **(S0234)** \\ \hline 1 & Dark Caracal & 1 & 1 &.... & 0 & 1 &.... \\ \hline 2 & Dark hotel & 1 & 1 &.... & 0 & 1 &.... \\ \hline 3 & Backdoor Diplomacy & 0 & 0 &.... & 1 & 0 &.... \\ \hline 4 & Elder wood & 1 & 0 &.... & 0 & 0 &.... \\ \hline 5 & Fercious Kitten & 0 & 1 &.... & 1 & 1 &.... \\ \hline 6 & Gall maker & 1 & 1 &.... & 0 & 1 &.... \\ \hline 7 & Indigo Zebra & 0 & 0 &.... & 1 &.... \\ \hline \end{tabular} \end{table} Table 3: An excerpt of high-level IOC dataset Therefore the median of file hashes, IP addresses, and malicious domain is 26.5, 1, and 13.5. The interquartile range of file hashes, IP addresses, and malicious domain is 32.25, 7.25, and 31.25. The data distribution of the high level IOC dataset is shown in figure 3. It can be seen that the datasets are approximately normally distributed. Thus mean, and standard deviation are used as a measure of centrality and spread. The mean of high-level IOC for ATT&CK dataset, 10% noise, 20% noise, 30% noise is 20.71, 47.02, 73.28, and 99.38. 
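As a concrete illustration of the synthesis procedure described in Section 3.1, the sketch below generates the three additional noisy instances (10%, 20%, and 30% noise) for one cyber threat actor's binary technique vector. It assumes the original ATT&CK matrix has already been loaded as a 0/1 NumPy array; the helper name and the exact flipping strategy are illustrative rather than the authors' implementation.

```python
import numpy as np

def add_noise(ttp_vector, noise_ratio, rng):
    """Return a noisy copy of a binary TTP vector.

    Flipping a '1' to '0' emulates a lost IOC (missed by the analyst),
    while flipping a '0' to '1' emulates a poisoned IOC (a stray technique
    wrongly associated with the incident).
    """
    noisy = ttp_vector.copy()
    n_flip = int(round(noise_ratio * noisy.size))
    flip_idx = rng.choice(noisy.size, size=n_flip, replace=False)
    noisy[flip_idx] = 1 - noisy[flip_idx]
    return noisy

rng = np.random.default_rng(seed=42)
actor_vector = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # toy 0/1 row in the style of Table 3

synthesized = [actor_vector] + [
    add_noise(actor_vector, ratio, rng) for ratio in (0.10, 0.20, 0.30)
]
# One original plus three noisy instances per actor: 129 actors -> 516 rows.
```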
The standard deviation values of high-level IOC for the ATT&CK dataset, 10% noise, 20% noise, and 30% noise are 19.45, 17.43, 15.56, and 13.49, respectively. To estimate the difference between the original ATT&CK dataset and the synthesized dataset, a \(Z\) test [63] is performed. A \(Z\) test is used to determine whether the means of two datasets differ significantly when the variances are known and the sample size is large. Here, the sample size is 129 cyber threat actors; the original ATT&CK data is the first group and the noisy data is the second group. The value of \(Z\) is -9.8551. The p-value is \(<.00001\). This means that the result is significant at p \(<.05\). This result shows that there is a significant difference between the original ATT&CK dataset and the synthesized dataset, which is evidence that the noise is randomly added to the original dataset and the synthesized dataset is unbiased. Figure 3: Data Distribution of High-Level IOC for original and noisy datasets Figure 2: Data Distribution of Low-Level IOC ### 3.3 Cyber Threat Attribution In this phase, we train machine learning models using both the low-level and high-level IOC datasets. The machine learning techniques used are: Naive Bayes [64], KNN [65], Decision Tree [66], Random Forest [67], Artificial Neural Network [68], Deep Learning [69], Generalized Linear Model [70], and Ensemble Learning (Voting, Stacking, Bagging, and Boosting) [71]. These models are used to anticipate the cyber threat actors behind an unseen cyber attack. In the following, a brief review of each technique is given. #### 3.3.1 Naive Bayes Naive Bayes is a probabilistic classification system based on Bayes' theorem, published posthumously by Thomas Bayes [64]. It is employed in a broad range of classification problems. Bayes' theorem is a mathematical model that determines the conditional probability of an event occurring based on prior knowledge of factors that may be associated with the event. It has multiple variants, such as Naive Bayes (Kernel). It is a class of algorithms that all follow the same principle: each pair of features to be categorized is independent of the others. The feature matrix and the response vector are the two elements of the dataset. All of the dataset's vectors (rows) are represented in the feature matrix, with each vector holding the values of the dependent features. The response vector holds the value of the class variable for each row of the feature matrix (prediction or output). It is incredibly beneficial when the inputs' dimensionality is high. It is simple and quick to forecast the test data set's class. It is also good at multi-class prediction. It does not take a lot of training data to discover intriguing insights, and when the data set is small, it may outperform complex machine learning models. Based on these features, the Naive Bayes classification algorithm intuitively seems suitable for both kinds of datasets under consideration in this research work. #### 3.3.2 KNN The supervised machine learning algorithm k-Nearest Neighbor classifier is beneficial for classification tasks [65]. Evelyn Fix and Joseph Hodges originated this concept in 1951, and Thomas Cover later expanded it. The KNN algorithm is based on feature similarity. It assigns a classification to a data point based on the classification of its neighbors. It keeps track of all available cases and categorizes new ones using a similarity measure. The number of nearest neighbors to include in the majority voting process is denoted by K in KNN.
The Elbow Approach is a commonly used method for obtaining the ideal value of k. With a positive integer k and a new sample, we select the k items in our data set that are closest to the new sample and discover the most common classification of these entries. Then we assign this classification to the new sample. It is trivial to put into action, it can endure noisy training data, and it may be more effective if the training data is large. #### 3.3.3 Decision Tree and Random Forest John Ross Quinlan developed a novel decision tree algorithm in 1986 [66]. Decision trees belong to the supervised learning category. They can be used to address problems involving regression and classification. Using a decision tree, we may depict any boolean expression on discrete characteristics. A decision tree is a graphical representation of all the possible outcomes of a decision based on a set of criteria. Each internal node in the decision tree algorithm represents a feature test, each leaf node represents a class label (a judgment made after computing all features), and branches represent feature combinations that lead to those class labels. The categorization rules are represented by the pathways from root to leaf. The algorithm's focus is to develop a model that can predict the value of a target variable. This approach is simple to comprehend, interpret, and visualize. It requires minimal data preparation effort and can handle both numerical and categorical data. It is further unaffected by nonlinear parameters. A limitation of decision trees is that they can fail spectacularly on real data sets because they tend to overfit. Overfitting is a phenomenon in which a model performs well on training data but not on real or test data, resulting in a high variance. Decision trees have low bias, which means they fit well on the training data set, but high variance, which means they do not work well with data sets they have not seen before. We still employ decision trees to solve this problem, but in a different form called random forest. Random forest is a technique that uses ensemble learning and is focused on the bagging algorithm [67]. It constructs as many trees as feasible on samples of the data and then integrates the results of all of the trees. As a corollary, the overfitting problem in decision trees is reduced, as is the variance, which enhances accuracy. #### 3.3.4 Artificial Neural Network and Deep Learning In the late nineteenth and early twentieth centuries, the field of artificial neural networks (ANN) arose. An artificial neural network (ANN) is a biologically motivated computer model made up of many processing components that accept inputs and produce outputs based on their activation functions. ANN models replicate brain and nervous system electrical activity. A layer of input nodes and a layer of output nodes make up an ANN, which is connected by one or more layers of hidden nodes. By executing activation functions, input layer nodes send information to hidden layer nodes, while hidden layer nodes either activate or remain dormant depending on the evidence given. The evidence is weighted in the hidden layers, and when the value of a node or collection of nodes in the hidden layer hits a certain threshold, a value is sent to one or more nodes in the output layer [68].
The ability to build actual correlations between independent and dependent variables, as well as to extract delicate information and complicated knowledge from representative data sets, is a distinguishing feature of ANN, and it has the ability to handle noisy data. A collection of statistical machine learning algorithms used to learn feature hierarchies, frequently based on artificial neural networks, is referred to as deep learning. The biological neuron serves as the motivation for neural networks, which are used to implement deep learning [69]. Just as the human brain is made up of neurons, neural networks are built up of layers of nodes. Individual layer nodes are linked to nodes in neighboring layers. The more layers a network has, the deeper it is. Deep learning's ability to work with unstructured data is one of its most appealing features. #### 3.3.5 Generalized Linear Model In 1972, John Nelder and Robert Wedderburn introduced the Generalized Linear Model, an advanced statistical modeling technique [70]. It is a catch-all phrase for a variety of models that allow the response variable to have an error distribution different from the normal distribution. The generalized linear model extends linear regression by allowing the linear model to be linked to the response variable via a link function and the size of each measurement's variance to be a function of its predicted value. The response variable does not need to be transformed every time to have a normal distribution, unlike in ordinary least squares regression. This feature enhances its classification capability. Modeling allows for additional flexibility because selecting a link function differs from selecting a random component. #### 3.3.6 Ensemble Learning The ensemble learning approach is a collaborative decision-making process that combines the predictions of learnt classifiers to classify new instances. Early studies have demonstrated that ensemble classifiers are both empirically and logically more reliable than single component classifiers. To improve predictive efficiency, an ensemble model is a way of generating a predictive model by merging numerous models to tackle a single problem. It combines the output capability of more than one classifier to generate the final result, which results in effective classification [71]. Voting, stacking, bagging, and boosting (AdaBoost) are some of the ensemble learning strategies employed. In voting, multiple models produce predictions about a class. The term "vote" refers to these forecasts. The majority of the models vote to determine the final prediction. The initial stage in this strategy is to generate several classification models from a training dataset. Each base model can be built by combining multiple splits of the same training dataset with the same algorithm, or by combining the same dataset with different algorithms, or by any other approach. Stacking is the process of fitting multiple types of models to the same data and then using another model to learn how to integrate the predictions in the best way possible. On the test set, this model is utilized to make predictions. Leo Breiman created the bagging classifier in 1994 as an ensemble learning method for constructing a community of learners. Bagging is the process of fitting multiple decision trees to different samples of the same dataset and then averaging the results. Parallel ensemble is another name for bagging. During the learning phase, the base learners are trained in parallel.
Boosting includes sequentially adding ensemble members that correct prior model predictions and produces a weighted average of the predictions. Boosting strategies are made up of a collection of steps. From the original dataset, a subset is formed. All data points are given equal weighting at the start. On this subset, a baseline model is developed. On the entire dataset, this model is utilized to create predictions. The actual and predicted values are compared to calculate the errors. Higher weights are assigned to data points that were incorrectly predicted. A new model is developed, and the dataset is used to make predictions. The weighted mean of all the models (weak learners) makes up the final model. AdaBoost is implemented in this paper. ## 4 Experimental Evaluation To attribute cyber-attacks to their perpetrators, the machine learning models are trained with both the low-level and high-level IOC datasets. The experiments are performed on a PC with an Intel Core m3-7Y30 processor running at 1.00 GHz to 1.61 GHz and 8 GB of RAM, running 64-bit Windows 10. The machine learning models are evaluated using the k-fold cross validation resampling method. The method of cross validation approximates the strength of a machine learning model for unseen samples by splitting and grouping the data multiple times. The parameter 'k' determines the number of sub-samples of the original data sample. The value of k is selected as 10. The whole dataset gets an equal chance in training and testing the machine learning model. In each iteration, the model is trained on k-1 folds and tested on the remaining fold, and the final results are obtained by averaging the results over the k folds, which gives an unbiased estimate. The effectiveness of the machine learning models was evaluated with accuracy, precision, recall, and f-measure. The efficiency of the machine learning models was evaluated with the execution time of the experiment. The results of the effectiveness and efficiency of the machine learning models for the low-level IOC dataset are given in table 5. It can be seen that the highest accuracy achieved is 40.25% by the ensemble learning model using stacking. In this model, the generalized linear model and another ensemble learning model, i.e., voting, are stacked together. The reason is that the generalized linear model and the voting approach have the second and third highest accuracies, respectively, as compared to other models. The accuracy, precision, recall, and f-measure for the low-level IOC dataset are quite low, which clearly depicts that low-level IOC are incapable of effectively attributing cyber attacks. The execution time of the low-level IOC trained models is reasonable. The results of the effectiveness and efficiency of the machine learning models for the high-level IOC dataset are given in table 6. It can be seen that the highest accuracy achieved is 94.88% by the Artificial Neural Network (ANN). The precision (93.95%), recall (94.88%), and f-measure (0.94) are also high as compared to other machine learning models. However, the efficiency evaluated by the model execution time is quite low as compared to other models. It takes almost 6 hours to train the ANN model. However, once trained, the model works efficiently for unseen instances of the dataset. The effectiveness of the ANN is analyzed individually for each cyber threat actor. It is found that the ANN is not able to attribute seven cyber threat actors, i.e., APT 16 [72], Bouncing Golf [73], DragonOK [74], Lotus Blossom [45], NEODYMIUM [75], Scarlet Mimic [76], and Silver Terrier [77].
When the reason is examined, it is found that these cyber threat actors had a minimal number of high-level IOC reported by the ATT&CK Mitre repository. APT 16 has one, Bouncing Golf has two, DragonOK has two, Lotus Blossom has two, NEODYMIUM has one, Scarlet Mimic has five, and Silver Terrier has six high-level IOC reported in the ATT&CK repository. The second highest accuracy achieved is 89.44% by the bagging approach of ensemble learning. In the bagging approach, the base machine learning model was random forest. The reason for selecting random forest is that it effectively attributed cyber attacks with an accuracy of 84.79% as compared to other approaches, i.e., naive bayes (69.61%), KNN (39.69%), and decision tree (20.78%). \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Machine Learning Algorithm** & **Accuracy** & **Precision** & **Recall** & **F-measure** & **Execution Time** \\ \hline **Naïve Bayes** & 35.75\% & 25.53\% & 9.9\% & 0.143 & 00:18 \\ \hline **Naïve Bayes (Kernel)** & 35.75\% & 25.53\% & 9.76\% & 0.141 & 00:05 \\ \hline **K Nearest Neighbor** & 9.85\% & 2.71\% & 2.94\% & 0.028 & 00:05 \\ \hline **Decision Tree** & 19.52\% & 0.53\% & 2.7\% & 0.009 & 00:05 \\ \hline **Random Forest** & 26.6\% & 3.89\% & 5\% & 0.043 & 00:08 \\ \hline **Gradient Boosted Trees** & 33\% & 10.93\% & 10.17\% & 0.105 & 03:43 \\ \hline **Deep Learning** & 23.67\% & 2.33\% & 6.55\% & 0.034 & 00:35 \\ \hline **Generalized Linear Model** & 37.66\% & 4.43\% & 8.01\% & 0.057 & 00:09 \\ \hline **Ensemble Voting (Generalized Linear Model, Naïve Bayes, Random Forest)** & 35.75\% & 8.06\% & 7.85\% & 0.079 & 00:31 \\ \hline **Ensemble Stacking (Voting, Generalized Linear Model)** & **40.25\%** & **5.76\%** & **9.77\%** & **0.072** & **00:46** \\ \hline **Ensemble Stacking (Voting, Bagging)** & 28.84\% & 5.34\% & 5.89\% & 0.056 & 01:02 \\ \hline **Ensemble Bagging (Generalized Linear Model)** & 37.66\% & 4.61\% & 8.03\% & 0.058 & 01:38 \\ \hline **Ensemble Ada boost (Generalized Linear Model)** & 8.12\% & 0.22\% & 2.7\% & 0.004 & 00:10 \\ \hline \end{tabular} \end{table} Table 5: Experimental Results for Effectiveness and Efficiency of low-level IOC The execution time of the bagging approach is approximately 9 minutes. The effectiveness of the ensemble learning bagging approach is also analyzed individually for each cyber threat actor. It is found that it is not able to attribute eleven cyber threat actors, i.e., APT 16 [72], APT 30 [78], Black Oasis [79], Bouncing Golf [73], DragonOK [74], Indigo Zebra [80], Lotus Blossom [45], NEODYMIUM [75], Orangeworm [81], Scarlet Mimic [76], and Silver Terrier [77]. The first reason is, as before, that these cyber threat actors had a minimal number of TTPs reported by the ATT&CK Mitre repository. The second reason is that the ensemble learning model is incapable of learning the features of these cyber threat actors. The accuracy, precision, recall, and f-measure for the synthesized high-level IOC dataset are promising, which clearly depicts that high-level IOC are capable of effectively attributing cyber attacks even in the case of lost and poisoned IOCs.
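The evaluation protocol above can be reproduced with standard tooling; the paper does not state which machine learning toolkit was used, so the following is only a sketch using scikit-learn equivalents of a few of the models from Tables 5 and 6, where `X` is the IOC feature matrix and `y` holds the threat-actor labels (both assumed to be loaded already).

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import KFold, cross_validate
from sklearn.naive_bayes import BernoulliNB
from sklearn.neural_network import MLPClassifier

# A subset of the algorithms evaluated in Section 4 (illustrative settings).
models = {
    "Naive Bayes": BernoulliNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0),
    "Ensemble Bagging (Random Forest)": BaggingClassifier(
        estimator=RandomForestClassifier(random_state=0), random_state=0),
}

cv = KFold(n_splits=10, shuffle=True, random_state=0)  # 10-fold cross validation
scoring = ["accuracy", "precision_weighted", "recall_weighted", "f1_weighted"]

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
    print(name,
          "accuracy:", round(scores["test_accuracy"].mean(), 4),
          "f1:", round(scores["test_f1_weighted"].mean(), 4))
```

Wrapping each `cross_validate` call with a timer would additionally reproduce the execution-time column used as the efficiency measure.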
## 5 Red Cross Data Breach Case Study The high-level IOC dataset is tested with a recent and unseen data breach incident on the Red Cross [87]. In this data breach incident, the data of 0.5 million people was compromised. The data is about victims of conflict, war, and disasters. The data breach remained undetected for 70 days. However, the breached records are not publicly disclosed on any forum, which depicts that the purpose of the cyber threat actor is espionage using stealthy approaches. The sophistication of the attack shows that it is an Advanced Persistent Threat (APT). Currently, the cyber threat actor of this incident is unknown, which makes this case a suitable study for analysis in this research work. In this section, we will attribute this data breach incident to its perpetrators using the high-level IOC of the incident. These attributes of the incident are provided as a test instance to the machine learning models trained using the high-level IOC dataset discussed in section 4. The high-level IOC of the Red Cross data breach incident are collected from publicly available statements of the International Committee of the Red Cross, news articles, and threat reports of security organizations [87, 88, 89, 90, 91, 92, 93, 94, 95]. There are 27 high-level IOC identified via manual analysis of the publicly available information about the incident. Here, a brief overview of the high-level IOC is given. The cyber threat actors used sophisticated hacking tools of the kind employed in APTs. They evaded security controls and remained stealthy by using obfuscation techniques to hide their malware. The malware was customized according to the servers storing the data records. A critical vulnerability, CVE-2021-40539, in the authentication module led to the network intrusion. Using this vulnerability, the cyber threat actors installed web shells in the server system and compromised the administrator's credentials. Once they gained control of the server, the hackers installed hidden tools on the server and accessed data records. Here, it can be seen that a major high-level IOC or tactic of the cyber threat actor is stealthy behavior, also known as the "living off the land" approach [99]. In this tactic, the attackers make use of tools that are legitimate and already installed on the victim's machine. Another way is to run malicious code directly in main memory. As a result, few or even no new files are created on the hard disk that could provide clues for detection. By using living off the land approaches, data breaches take longer to detect. In table 7, the cyber threat actor prediction results for the Red Cross data breach incident are shown. The machine learning models that have an accuracy above 80% are considered for prediction.
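To make the attribution of the incident concrete, the sketch below shows how the 27 identified high-level IOCs could be encoded into the same binary feature space used for training and scored by a trained model. The variable names (`feature_columns`, `X`, `y`) and the listed technique IDs are illustrative assumptions, not the authors' exact inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# feature_columns: ATT&CK technique/software IDs used as training features.
# X, y: synthesized high-level IOC matrix and threat-actor labels (assumed loaded).
incident_iocs = {"T1190", "T1505.003", "T1027"}   # illustrative subset of the 27 IOCs

x_incident = np.array(
    [[1 if col in incident_iocs else 0 for col in feature_columns]]
)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = clf.predict_proba(x_incident)[0]
print("Predicted threat actor:", clf.predict(x_incident)[0])
print("Top-3 candidates:", clf.classes_[np.argsort(probs)[::-1][:3]])
```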
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Machine Learning Algorithm** & **Accuracy** & **Precision** & **Recall** & **F-measure** & **Execution Time** \\ \hline **Naïve Bayes** & 69.61\% & 83.23\% & 69.61\% & 0.76 & 0:01 \\ \hline **Naïve Bayes (Kernel)** & 68.68\% & 82.31\% & 68.68\% & 0.75 & 0:18 \\ \hline **K Nearest Neighbor** & 39.69\% & 61.37\% & 39.69\% & 0.48 & 0:01 \\ \hline **Decision Tree** & 20.78\% & 33.41\% & 20.78\% & 0.26 & 0:02 \\ \hline **Random Forest** & 84.79\% & 82.56\% & 84.81\% & 0.84 & 0:58 \\ \hline **Artificial Neural Network** & **94.88\%** & **93.95\%** & **94.88\%** & **0.94** & **5:59:42** \\ \hline **Deep Learning** & 75.50\% & 79.42\% & 75.97\% & 0.78 & 0:49 \\ \hline **Generalized Linear Model** & 81.86\% & 82.20\% & 81.86\% & 0.82 & 21:54 \\ \hline **Ensemble Voting (Random Forest, Generalized Linear Model)** & 83.26\% & 40.93\% & 41.97\% & 0.41 & 23:24 \\ \hline **Ensemble Stacking (Random Forest, Random Forest)** & 86.5\% & 84.86\% & 86.51\% & 0.86 & 1:57 \\ \hline **Ensemble Stacking (Bagging, Bagging)** & 89.6\% & 87.64\% & 89.61\% & 0.89 & 20:04 \\ \hline **Ensemble Bagging (Random Forest)** & **89.44\%** & **87.66\%** & **89.46\%** & **0.89** & **8:52** \\ \hline **Ensemble Ada boost (Random Forest)** & 84.79\% & 82.56\% & 84.81\% & 0.84 & 4:52 \\ \hline \end{tabular} \end{table} Table 6: Experimental Results for Effectiveness and Efficiency of High-level IOC The ANN model, which has the highest accuracy of 95%, predicted "Thrip" as the cyber threat actor [96, 97]. The ensemble learning models using the bagging and stacking approaches, which have the second and third highest accuracies of 89% and 86% respectively, predicted "Threat Group 1314" as the cyber threat actor [98]. The random forest, ensemble learning using voting, and AdaBoost models predicted "FIN 10" as the cyber threat actor [100]. The results depict that, based on the high-level IOC information publicly available about the Red Cross data breach incident, the cyber threat actor predicted by the machine learning models may be one of these three actors, i.e., Thrip, Threat Group 1314, and FIN 10 [96-100]. We closely examined the high-level IOC and previous data breach incidents of these cyber threat actors to find any possible link between them and the actor of the Red Cross data breach. First, FIN 10 is examined. It is a financially motivated cyber threat actor whose incidents are related to financial fraud in casinos and mining organizations. As mentioned earlier, the aim of the attackers of the Red Cross data breach is espionage, and there has been no report of any financial fraud or ransom demand so far from the attackers, so we believe that FIN 10 is not the correct prediction. The prediction result may be due to the possibility of a few common high-level IOC in the related incidents. Second, Threat Group 1314 is examined. The aims and motivations of Threat Group 1314 are currently unknown. However, this group has used "living off the land" approaches to remain stealthy, which are also used by the cyber threat actor of the Red Cross data breach incident. This important connection is identified by five of the machine learning models shown in table 7, which increases the possibility that the cyber threat actor of the Red Cross data breach might be Threat Group 1314. Finally, Thrip, predicted by the ANN, is examined.
As shown in section 4, the ANN has the highest prediction accuracy. Thus, it is expected that the prediction results of the ANN model are more reliable than those of the other models. Thrip is an espionage group that has targeted satellite communications, the telecom industry, defense contractors, MapXtreme Geographic Information System (GIS) software, Google Earth Server, and Garmin imaging software. This shows an important connection with the cyber threat actor of the Red Cross data breach incident, whose aim is also espionage. The second important connection between the cyber threat actor of the Red Cross data breach and Thrip is the employment of living off the land techniques to remain stealthy and blend in with the victim's resources. Based on the publicly available information, both of these important connections increase the possibility that the cyber threat actor of the Red Cross data breach is none other than Thrip. Thus, in this case study, the cyber threat actor predicted for the Red Cross data breach incident is Thrip. \begin{table} \begin{tabular}{|l|l|} \hline **Machine Learning Algorithm** & **Cyber Threat Actor Prediction** \\ \hline Random Forest & FIN 10 \\ \hline ANN & Thrip \\ \hline Generalized Linear Model & Threat Group 1314 \\ \hline Ensemble Bagging (Random Forest) & Threat Group 1314 \\ \hline Ensemble Voting (Random Forest, Generalized Linear Model) & FIN 10 \\ \hline Ensemble Stacking (Random Forest, Random Forest) & Threat Group 1314 \\ \hline Ensemble Stacking (Random Forest, Generalized Linear Model) & Threat Group 1314 \\ \hline Ensemble Stacking (Bagging (RF), Bagging (RF)) & Threat Group 1314 \\ \hline Ensemble Ada boost (Random Forest) & FIN 10 \\ \hline Ensemble Bagging (ANN) & Thrip \\ \hline \end{tabular} \end{table} Table 7: Cyber Threat Prediction for Red Cross Data Breach Incident ## 6 Limitations and Suggestions The primary goal of this research is to elaborate on the importance of high-level IOC for cyber threat attribution. Our hypothesis was supported by the experiments, with good accuracy of the results. In order to actually implement the proposed solution in a real security environment, there are certain limitations of the current CTI data and the ATT&CK framework that need to be addressed. The first limitation is the difference in format of high-level IOC representation. High-level IOCs are normally represented as humanly understandable enumerations that are often long, detailed sentences, while in the ATT&CK repository high-level IOC are represented by a general concept with a unique ID assigned to each. This mapping between the high-level IOC requires human intervention. This limitation causes hurdles in deploying an automated solution for the proposed cyber threat attribution problem. Here we suggest referencing TTPs, software tools, and threat actors in CTI documents with their ATT&CK-assigned IDs, in the same way that vulnerabilities are quoted and referenced with their specific CVE IDs [82] and low-level IOCs are addressed using common language frameworks, i.e., OpenIOC [83] and Mitre CybOX [84]. The second limitation is that there are certain TTPs that represent non-technical adversary traits, e.g., the time at which the adversary operates, and language-specific quotes and strings. In table 3, a few examples of such traits related to threat groups are given. Such traits play an important role in attributing cyber threats to their adversaries; also, some of them have high observability in the case of an attack as compared to other IOCs. Currently, the ATT&CK framework lacks such traits.
The third limitation is related to the completeness of CTI data. The ATT&CK repository is the first effort to standardize adversarial tactics, techniques, and common attack knowledge by categorizing TTPs and software tools under their associated adversaries. The problem is that the ATT&CK repository is not complete. It lacks important TTPs, tools, and threat groups that can be found in other threat repositories such as IBM X-Force [54], Hail a TAXII [85], and Symantec [86]. Also, some important connections between these entities are missing, e.g., the famous Axiom threat group uses HTTPS port 443 to bypass firewalls and intrusion detection systems, but this TTP is not connected to the TTPs of the Axiom group in the database. Similarly, the structured CTI data is manually managed and shared by different threat sources and platforms, where certain intelligence is reported by one source and another source lacks it. This limitation can be overcome by enriching CTI data with semantic data integration techniques. However, this is out of the scope of this research work. ## 7 Conclusion and Future Work In this research work, to attribute cyber attacks to their perpetrators, the effectiveness of high-level attack patterns over low-level attack patterns is discussed. To empirically evaluate and compare the effectiveness of both kinds of IOC, appropriate gold standard datasets are required. The dataset for low-level IOC for cyber threat actor attribution was not available. The standard high-level IOC dataset has a single instance for each predictive class label, which cannot be used directly for training machine learning models. To address the first problem, the low-level IOC dataset is built from real-world cyber attack documents. To address the second problem, a synthesized version of the high-level IOC dataset is provided that can be used to train machine learning models and predict the culprit behind a cyber-attack. The datasets are empirically evaluated for their effectiveness and efficiency using machine learning models. Both datasets are provided to the research community for further research and exploration. The experimental results show that the high-level IOC trained models effectively attribute cyberattacks with an accuracy of 95% as compared to the low-level IOC trained models, where the accuracy is 40%. Based on the results, we conclude that high-level IOCs can attribute cyber threats to their perpetrators effectively. Currently, the ATT&CK Mitre repository is manually built and has many missing high-level IOC for cyber threat actors. This affects the accuracy of attribution as well. In the future, we aim to automate the process of identifying the high-level IOC of cyber threat actors from textual cyber attack documents using clustering and non-parametric classification approaches to improve the effectiveness of cyber threat attribution. ### Availability of data and materials The low-level and high-level cyber attack Indicators of Compromise (IOC) datasets are provided to the research community for further research and exploration at the following link: [https://github.com/UmaraNoor/Cyber-Attack-Patterns-Dataset](https://github.com/UmaraNoor/Cyber-Attack-Patterns-Dataset)
2304.09253
Towards Advantages of Parameterized Quantum Pulses
The advantages of quantum pulses over quantum gates have attracted increasing attention from researchers. Quantum pulses offer benefits such as flexibility, high fidelity, scalability, and real-time tuning. However, while there are established workflows and processes to evaluate the performance of quantum gates, there has been limited research on profiling parameterized pulses and providing guidance for pulse circuit design. To address this gap, our study proposes a set of design spaces for parameterized pulses, evaluating these pulses based on metrics such as expressivity, entanglement capability, and effective parameter dimension. Using these design spaces, we demonstrate the advantages of parameterized pulses over gate circuits in the aspect of duration and performance at the same time thus enabling high-performance quantum computing. Our proposed design space for parameterized pulse circuits has shown promising results in quantum chemistry benchmarks.
Zhiding Liang, Jinglei Cheng, Zhixin Song, Hang Ren, Rui Yang, Kecheng Liu, Peter Kogge, Tongyang Li, Yongshan Ding, Yiyu Shi
2023-04-18T19:27:30Z
http://arxiv.org/abs/2304.09253v2
# Towards Advantages of Parameterized Quantum Pulses ###### Abstract. The advantages of quantum pulses over quantum gates have attracted increasing attention from researchers. Quantum pulses offer benefits such as flexibility, high fidelity, scalability, and real-time tuning. However, while there are established workflows and processes to evaluate the performance of quantum gates, there has been limited research on profiling parameterized pulses and providing guidance for pulse circuit design. To address this gap, our study proposes a set of design spaces for parameterized pulses, evaluating these pulses based on metrics such as expressivity, entanglement capability, and effective parameter dimension. Using these design spaces, we demonstrate the advantages of parameterized pulses over gate circuits in the aspect of duration and performance at the same time thus enabling high-performance quantum computing. Our proposed design space for parameterized pulse circuits has shown promising results in quantum chemistry benchmarks.
High-level quantum programming languages are designed to avoid low-level details about the physical implementation of quantum computing. These languages typically provide constructions of quantum circuits, gates, measurements, and other operations. Once the quantum program is written in a high-level language, it needs to be optimized and transformed into a format that can be efficiently executed on real quantum hardware. This involves techniques such as gate fusion, gate cancellation, and gate commutation, which are used to reduce the number of gates and improve the circuit's efficiency. The next step is to map the optimized circuit onto the physical qubits of the quantum hardware. This involves assigning each logical qubit in the circuit to a physical qubit on the hardware, taking into account factors such as connectivity constraints and the availability of resources such as gates and measurements. The overall goal is to make quantum circuits compatible with the hardware topology while minimizing the total number of SWAP gates that need to be inserted. Next, these mapped circuits need to be decomposed into basis gates that are natively supported by the quantum backend. At this point, the compilation remains at the gate level. Pulse-level controls come into play when we need to interact with physical qubits. For superconducting quantum computers, once the decomposition stage is finished, the circuits composed of basis gates are "translated" into pulses. Finally, these pulses are transmitted to the physical qubits as control signals. Recent studies [15; 17; 18; 20; 30; 34; 47; 50; 59; 62] have highlighted the benefits of utilizing pulse-level controls over gate-level programming for a specific set of quantum algorithms, particularly variational quantum algorithms. The key insight is that the gate-level abstraction layer is designed around precise calibration of basis gates for higher fidelities. The calibration of these basis gates aims to minimize their deviation from the ideal theoretical representation. However, the ansatz circuits in variational quantum algorithms may not demand such an accurate implementation. The "training" process of variational quantum algorithms can inherently correct such imprecision.
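To make the gate-to-pulse lowering described above concrete, the sketch below walks a small circuit through gate-level optimization, qubit mapping, basis-gate decomposition, and pulse scheduling using Qiskit. It is illustrative only: the mock backend, the example circuit, and the option values are our own assumptions rather than settings taken from the cited studies.

```python
# Illustrative sketch of the compilation pipeline described above (assumed settings).
from qiskit import QuantumCircuit, transpile, schedule
from qiskit.providers.fake_provider import FakeJakarta  # mock backend with pulse calibrations

backend = FakeJakarta()

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 2)  # a non-adjacent pair may force SWAP insertion during mapping

# Gate level: optimization, layout/routing onto the coupling map,
# and decomposition into the backend's basis gates.
mapped = transpile(qc, backend=backend, optimization_level=3)

# Pulse level: each basis gate is replaced by its calibrated pulse schedule.
sched = schedule(mapped, backend=backend)
print(sched.duration)  # total sequence duration in units of dt
```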
By incorporating parameterized pulses within ansatz circuits, the search for the desired state in the Hilbert space can be achieved with much shorter circuit latency in comparison to gate circuits. For instance, the current implementation of a U3 gate on IBM's quantum hardware would require two well-calibrated pulses and one virtual-Z operation for execution, whereas at the pulse level, a single pulse can achieve the same functionality. At present, the majority of quantum programs are designed and executed at the gate level, with a well-established workflow in place for profiling the performance of quantum gates on quantum hardware [1; 53]. Profiling the performance of quantum gates involves measuring and analyzing the physical characteristics and behavior of the gates used in a quantum circuit. This information can be leveraged to understand the limitations and sources of errors in the gates, optimize their performance, and improve the overall fidelity and accuracy of the quantum computation. By understanding the performance of individual quantum gates, researchers can optimize the design of quantum circuits to minimize error rates and improve the overall performance of quantum algorithms [11; 23; 49; 51; 54; 66; 67; 68; 69; 70; 71; 72]. Profiling the performance of quantum gates also enables benchmarking of different quantum hardware platforms [33; 40; 65], and it is crucial for developing error correction and mitigation techniques [13; 24; 55]. In summary, profiling the performance of quantum gates is vital for improving the overall performance and reliability of quantum computing systems. Consequently, profiling parameterized pulses is crucial for the advancement of pulse-level controls and pulse-level quantum programming. We employ various criteria as metrics for evaluating parameterized pulses and examine different pulse templates to investigate their differences. Our aim is to offer guidance for the design and application of parameterized pulses in quantum circuits. We attempt to build detailed guidance surrounding the topics related to parameterized quantum pulses. To this end, our contribution includes: * A set of pulse-level design spaces provided that guides the design of parameterized quantum pulses of good quality. * A group of criteria to profile the pulse-level design space and characterize the property and power of parameterized quantum pulses. * Benchmark applications include quantum chemistry tasks and quantum finance tasks to validate the effectiveness of proposed pulse-level design spaces. The rest of the paper is organized as follows. We provide a concise overview of parameterized quantum circuits and systems, followed by a discussion on related work in Section 2. In Section 3, we outline various criteria for pulse-level circuit design and demonstrate the results on benchmarks. In Section 4, we propose the pulse-level design space which we examine with the aforementioned criteria. Then, we employ the parameterized pulses in different applications and obtain the performance results in Section 5. In Section 6, we discuss the advantages and limitations of parameterized quantum pulses and present an outlook for future work. ## 2. Parameterized Pulse Circuits The investigation of pulse-level quantum computing has gained significant attention from researchers. In this study, we focus primarily on parameterized quantum pulses. Most prior applications of parameterized quantum pulses have been focused on the calibration of a quantum computer's basis gates. 
For example, experiments on Rabi oscillations are carried out to calibrate the angle and amplitude of single-qubit gates [(56)]. To calibrate two-qubit gates, Hamiltonian tomography is employed, with the aim of maximizing the proportion of the ZX interaction, as it is a crucial element of the CNOT gate [(43; 57)]. Several runtime-efficient and noise-resilient strategies have been proposed for parameterized quantum pulses, allowing for precise calibration of multi-qubit gates [(12; 44; 57)]. Quantum optimal control (QOC) is regarded as one of the most promising options for generating parameterized pulses for a given unitary matrix. Nevertheless, QOC is excessively costly [(6; 10; 19; 29; 61)]. As depicted in Fig. 1, typical pulse-level parameters for a superconducting quantum computer consist of amplitude, angle, duration, and frequency. Adjusting these parameters affects the driving Hamiltonian of the quantum operation, thereby impacting the state of the quantum circuit [(41)]. As an example of how these parameters act on a parameterized single-qubit pulse, consider the drive Hamiltonian [(31)]: \[\vec{H}_{d}=-\frac{\Omega}{2}V_{0}s(t)(I\sigma_{x}+Q\sigma_{y}) \tag{1}\] where \(s(t)\) is a dimensionless envelope function, so that the amplitude of the drive is set by \(V_{0}s(t)\). Thus, the amplitude determines the rotation speed, while the duration determines how long the rotation is applied. For the function of the angle, refer to the definitions of \(I\) and \(Q\) [(31)]: \[I=\cos(\varphi)\quad\text{(the in-phase component)} \tag{2}\] \[Q=\sin(\varphi)\quad\text{(the out-of-phase component)} \tag{3}\] \(I\) is the in-phase pulse, which corresponds to rotations around the \(x\)-axis, while \(Q\) is the out-of-phase pulse, which corresponds to rotations about the \(y\)-axis. Since \(I\) and \(Q\) both depend on the angle \(\varphi\), it can be inferred that the angle determines the position of the rotation axis in the XY plane. Figure 2. Illustration of the function of a parameterized single-qubit pulse. a) 5000 samples of the amplitude with the angle fixed at 0. b) 5000 samples of the angle with the amplitude fixed at 0.08. Changing the amplitude rotates the state about the X-axis, while changing the angle rotates it about the Z-axis. Prior research has investigated the optimization of pulse amplitude, frequency, and duration to mitigate decoherence and attain superior performance in calculating the ground state energy of chemical molecules [(18; 36; 46)], in classification tasks of machine learning [(38; 45; 48)], and in the max-cut problem of QAOA on different quantum hardware, including superconducting quantum computers [(26; 37)] and neutral atom quantum computers [(14)]. Figure 3. Comparison of the expressivity of single-qubit gate-level circuits and the single-qubit pulse circuit. a) Four different single-qubit gate-level circuits: the Rz gate, the Rx gate, the Rx + Rz circuit, and the U3 gate (Rx + Rz + Rx), with sequence durations of 0dt, 320dt, 320dt, and 320dt, respectively, together with a single-qubit pulse circuit that takes amplitude and angle as parameters, with a sequence duration of 160dt. For each circuit, we draw 5000 samples with Qiskit-Dynamics and plot them on the Bloch sphere. b) The histograms show the estimated fidelities of the enumerated circuits, with the theoretical Haar-distributed fidelity plotted as a reference. ## 3. Pulse-Level Circuit Criteria
To measure the efficacy of pulse-level circuits and emphasize their superiority over gate-level circuits, we present four metrics: expressivity, entangling capability, effective parameter dimension, and sequence duration. Previous research on parameterized quantum circuits (PQC) at the gate level has utilized both expressivity and entangling capability (Garfani et al., 2017; Garfani et al., 2017), where randomly sampled circuits are used as benchmarks. In addition, effective parameter dimension is a criterion for analyzing redundant parameters in gate-level quantum circuits. Nonetheless, our objective is to investigate parameterized quantum circuits at the pulse level, namely parameterized pulse circuits (PPC). To evaluate the effectiveness of parameterized pulse circuits, we compare their deviation from random circuits with the corresponding deviation of PQCs from random circuits. By shifting from the gate level to the pulse level, we can achieve improvements in sequence duration while maintaining similar expressivity. ### Sequence Duration The term sequence duration refers to the total time required for the execution of a quantum program. Sequence duration plays a vital role in determining the overall execution time of the program and can significantly impact its performance. Given that the current decoherence time on quantum machines is limited, sequence duration is a critical consideration when designing quantum programs. Thus, optimizing the sequence duration involves adjusting the pulse parameters, adding delays or idle pulses, or utilizing different pulse shapes to minimize the overall duration and enhance program performance. In our Qiskit-Pulse test cases, the sequence duration of a pulse schedule is always defined using a basic unit dt, where dt is the duration of a single sample of the arbitrary waveform generator, typically equal to 0.222 ns. This duration is discretely mapped from the hardware level to the software level using the unit dt. While the search spaces of duration and amplitude partially overlap, they jointly determine the "strength" of pulses. As hardware is usually more precise for amplitude control, hardware control typically selects a fixed duration and then performs amplitude control. We decide to retain duration control because we aim to adaptively shorten the duration of the quantum algorithm and thereby its exposure to decoherence. ### Expressivity The capacity of a circuit to generate pure states is defined as expressivity, which was introduced in previous PQC works (Garfani et al., 2017). By taking samples from these states on the Bloch sphere, we can evaluate a quantum circuit's capacity to explore the sphere for a single qubit. This ability extends to the quantum circuit's ability to explore the Hilbert space when multiple qubits are involved. #### 3.2.1. Single-Qubit Pulse A single-qubit pulse is a microwave pulse that is precisely tuned and controlled to manipulate the state of a single qubit in a quantum computing system. The effectiveness of the operation depends on the critical parameters of the pulse, including amplitude, duration, and shape. Achieving precise and efficient manipulation of the qubit is crucial for performing quantum computations, which is why these parameters must be adjustable. We conduct our experiments on IBM's superconducting quantum computers. Specifically, we define a single-qubit pulse whose duration and \(\beta\) are obtained from the calibrated X gate of the backend and serve as its fixed parameters; a minimal construction of such a pulse is sketched below.
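The following is a minimal sketch, not the authors' code, of such a parameterized single-qubit pulse in Qiskit Pulse: the amplitude and angle are left as free parameters, the angle is realized as a phase shift (equivalent to a ShiftPhase, as noted later in the text), and the duration, sigma, and beta values are placeholders standing in for the backend's calibrated X-pulse values.

```python
# A minimal sketch (assumed values, not the authors' calibrated settings) of a
# parameterized single-qubit pulse: amplitude and angle are free parameters;
# duration, sigma, and beta stand in for the backend's calibrated X-pulse values.
from qiskit import pulse
from qiskit.circuit import Parameter
from qiskit.pulse import DriveChannel
from qiskit.pulse.library import Drag

amp = Parameter("amp")      # controls the rotation about the X-axis
angle = Parameter("angle")  # applied as a phase shift -> rotation about the Z-axis

d0 = DriveChannel(0)
with pulse.build(name="parameterized_sqp") as sqp:
    pulse.shift_phase(angle, d0)
    pulse.play(Drag(duration=160, amp=amp, sigma=40, beta=2.0), d0)

# Bind concrete values before execution, e.g. the setting used for Fig. 2 b):
bound = sqp.assign_parameters({amp: 0.08, angle: 0.0}, inplace=False)
```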
The parameters are the amplitude and angle, which can fine-tune the operation on the qubit. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Expressivity** & **Quito** & **Lima** & **Jakarta** \\ \hline expr\_z & 3.8918 & 3.8918 & 3.8918 \\ expr\_x & 0.2344 & 0.1771 & 0.1752 \\ expr\_xz & 0.0222 & 0.0202 & 0.0177 \\ expr\_zyz & 0.0173 & **0.0169** & 0.0235 \\ expr\_amp & 0.1985 & 0.1913 & 0.1673 \\ expr\_ang & 0.1709 & 0.7790 & 0.1719 \\ expr\_pulse & **0.0138** & 0.0202 & **0.0157** \\ \hline \end{tabular} \end{table} Table 1. The expressivity of different single-qubit circuits sampled on three different backend models by Qiskit-Dynamics. expr_z, expr_x, expr_xz, expr_zyz, expr_amp, expr_ang, and expr_pulse correspond to the Rz gate, the Rx gate, the Rx + Rz circuit, the Rx + Ry + Rz (U3 gate) circuit, the single-qubit pulse circuit with the amplitude sampled and the angle fixed at 0, the single-qubit pulse circuit with the angle sampled and the amplitude fixed at 0.08, and the general single-qubit pulse circuit with both angle and amplitude sampled. Figure 4. Two-qubit pulse circuit structure and corresponding pulse schedule. a) Dressedpulse consists of four single-qubit pulses (SQP) and one cross-resonance pulse (CR). b) Blockpulse consists of six SQPs and two CRs. To explore the effects of these two parameters, we first fixed the angle to 0 and randomly sampled the amplitude 5000 times to obtain Fig. 2 a). We then fixed the amplitude at 0.08 and randomly sampled the angle 5000 times to obtain Fig. 2 b). From the results, we observe that changes in amplitude correspond to rotations around the X-axis, while changes in angle correspond to rotations around the Z-axis. This implies that we have the potential to use a single-qubit pulse with a short sequence duration to implement U3 gate operations. #### 3.2.2. Expressivity of Single-Qubit Pulse and Gate In this section, we provide a detailed comparison between various models of single-qubit gates and our proposed model of single-qubit pulses, using their expressivity as a benchmark. Firstly, we consider the Rz gate, which rotates around the Z-axis; when the initial state is \(|0\rangle\), all the samples on the Bloch sphere remain at the \(|0\rangle\) state. Secondly, we consider the Rx gate, which rotates around the X-axis when the initial state is \(|0\rangle\), and the samples are distributed along a latitude line on the Bloch sphere. We then observe that the Rz+Rx circuit produces states that better cover the Bloch sphere. Next, we express the U3 gate using the Rz+Ry+Rz circuit, which can achieve rotation on the X, Y, and Z axes, resulting in higher degrees of freedom and better expressivity. We then demonstrate the single-qubit pulse with amplitude and angle as parameters, where the amplitude indicates the amount of rotation around the X-axis and the angle enables the rotation around the Z-axis. We observe that the sequence duration of the single-qubit pulse is half that of Rx, Rx+Rz, and Rz+Ry+Rz. Finally, as a reference, we uniformly sample single-qubit unitary matrices to simulate the theoretically most expressive circuit. Fig. 3 a) shows the simulated states on the Bloch sphere after 5000 samples and the corresponding gate-level and pulse-level circuits. We follow the same definition of expressivity proposed in Ref.
(Kolle et al., 2019) for both gate-level and pulse-level ansatz \[\mathrm{Expr}=D_{\mathrm{KL}}\left(P_{\mathrm{Ansatz}}(F;\mathbf{\theta})\|P_{ \mathrm{Haar}}(F)\right), \tag{4}\] where the Kullback-Leibler (KL) divergence (Kolle et al., 2019) is a measure of distance between two probability distributions. \(P_{\mathrm{Ansatz}}(F;\mathbf{\theta})\) is a distribution of state fidelities between two randomly sampled parameterized states \(|\psi(\theta)\rangle\) and \(|\psi(\phi)\rangle\) obtained from the ansatz \[P_{\mathrm{Ansatz}}(F;\mathbf{\theta})\equiv P(F=|\langle\psi(\theta)|\psi(\phi) \rangle|^{2}). \tag{5}\] The latter quantity \(P_{\mathrm{Haar}}(F)\) is also a state fidelity distribution but for the ensemble of Haar random states. In this case, the analytical probability density function (PDF) is known as (Kolle et al., 2019) \[P_{\mathrm{Haar}}(F)=(N-1)(1-F)^{N-2}, \tag{6}\] where \(N\) is the dimension of the Hilbert space. If the calculated \(\mathrm{Expr}\) is closer to 0 for one particular ansatz, we say it is more expressive than other candidates since it is able to sample uniformly from the full Hilbert space and hence approximate any possible state. This capability is essential for variational quantum algorithms where we want to to train the ansatz to generate a particular quantum state with limited prior information about that target state. In Fig. 3 b), the estimated confidence histograms for each circuit are displayed, with the theoretical Haar distribution shown as a reference. The bin size for generating the histograms was defined as 50, which affects the accuracy of the Kullback-Leibler (KL) divergence calculation. However, by keeping the settings consistent across all circuits, the observed results can be fairly compared. The KL divergence is reported in Table 1 to quantify the deviation, where lower KL divergence values correspond to circuit simulations that are closer to the Haar distribution, representing circuits with stronger expressive power or states closer to random states. In Table 1, two out of three backend models, single-qubit pulses demonstrate slightly better expressive power than single-qubit gate-level circuits. The poor performance observed in the Lima case is attributed to unfavorable backend settings. The parameter angle of a quantum pulse is equivalent to a ShiftPhase operation, which causes the qubit state to rotate around the Z-axis. We discovered that as the angle changes, the qubit state is unable to fully rotate around the Z-axis, resulting in a decrease in expressivity on the Lima backend. Generally speaking, single-qubit pulses exhibit stronger expressive power than single-qubit gate-level circuits, although it is worth noting that single-qubit pulses have only half the sequence length of Rx, Rx+Rz, and U3 circuits. #### 3.2.3. Multi-Qubit Pulse A multi-qubit pulse is a quantum operation that acts on multiple qubits simultaneously. The ability to manipulate entangled quantum states between multiple qubits is crucial in quantum computing. 
The controlled-not (CNOT) gate is a common example of a multi-qubit quantum operation at the gate level, which can be implemented using multiple cross-resonance (CR) pulses and echoed pulses. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Model** & **Expr** & **Ent** & **Param** & **EPD** & **Dur** \\ \hline 2QDressedpulse & 0.0259 & 0.1145 & 11 & 11 & 276-1328dt \\ \hline 2QDressedpulse\_fixamp & 0.0183 & 0.1837 & 10 & 10 & 276-1328dt \\ \hline 2QDressedpulse\_fixang & 0.0163 & 0.1115 & 10 & 10 & 276-1328dt \\ \hline 2QDressedpulse\_fixduration & 0.0253 & 0.1146 & 10 & 10 & 276-1328dt \\ \hline 2QBlockpulse & 0.0225 & 0.1977 & 18 & 18 & 992-2496dt \\ \hline 2QBlockpulse\_fixamp & 0.0161 & 0.3127 & 16 & 16 & 992-2496dt \\ \hline 2QBlockpulse\_fixang & 0.0175 & 0.1933 & 16 & 16 & 992-2496dt \\ \hline 2QBlockpulse\_fixduration & 0.0318 & 0.0619 & 16 & 16 & 992-2496dt \\ \hline Universal2QGateCircuit & 0.0253 & 0.4011 & 15 & 9 & 4768dt \\ \hline RXCX2QGateCircuit & 0.3971 & 0.4883 & 4 & 3 & 1696dt \\ \hline \end{tabular} \end{table} Table 2. Performance metrics of various two-qubit pulse models and two-qubit gate models are provided for comparison. The table lists the expressivity (Expr), entanglement capability (Ent), number of parameters (Param), effective parameter dimension (EPD), and duration of the quantum operations (Dur) for each model. For the pulse models, the amplitude, angle, and duration are fixed in turn and the analysis repeated. CR is a type of multi-qubit pulse where a pulse is applied to the control qubit at the frequency of the target qubit. The cross-resonance Hamiltonian can be expressed as: \[H=Z\otimes A_{2}+I\otimes B_{2}=a_{x}\hat{Z}\hat{X}+a_{y}\hat{Z}\hat{Y}+a_{z}\hat{Z}\hat{Z}+b_{x}\hat{I}\hat{X}+b_{y}\hat{I}\hat{Y}+b_{z}\hat{I}\hat{Z}. \tag{7}\] At the gate level, cross-resonance pulses need to be elaborately calibrated to implement the desired function. Specifically, we want to keep the \(ZX\) term and eliminate the other interactions. Such a calibration process requires techniques including echoed CR and phase calibration, as discussed in (Wang et al., 2018). At the pulse level, however, there is no need to remove terms other than \(ZX\). In this way, we trade precise controllability for more flexibility and capacity to further explore the Hilbert space. ### Entanglement Capability Entanglement is a unique quantum phenomenon that refers to the non-classical interdependence between two or more qubits, even when they are located far apart. In variational quantum algorithms and quantum machine learning, generating shallow circuits with strong entanglement capabilities is essential. Consequently, understanding how to measure a quantum circuit's entanglement ability is crucial in selecting an appropriate design space. This helps to create more effective solution spaces for tasks such as ground state preparation or machine learning benchmarks while also preserving the non-trivial correlations in quantum data. One commonly used standard for measuring the entanglement ability of a quantum circuit is the Meyer-Wallach (MW) entanglement \(Q\)-measure. It quantifies the global entanglement of pure multi-qubit states, with the benefits of scalability and ease of computation.
We define the entanglement capability as the average \(Q\)-measure over an ensemble of randomly sampled states \[\text{Ent}=\frac{1}{|S|}\sum_{\mathbf{\theta}_{i}\in S}Q\left(\left|\psi_{\mathbf{\theta}_{i}}\right\rangle\right), \tag{8}\] where the Meyer-Wallach \(Q\)-measure is proposed to estimate the number and types of entangled states an ansatz can generate. It can be calculated through the average of the purity of each qubit (Becker et al., 2017) \[Q(|\psi\rangle)=2\left(1-\frac{1}{n}\sum_{k=1}^{n}\text{Tr}\left[\rho_{k}^{2}\right]\right), \tag{9}\] where \(\rho_{k}\) is the reduced density matrix of the \(k\)-th qubit. Notice that for any product state \(|\psi\rangle\otimes|\psi\rangle\otimes\cdots\), \(Q=0\), while \(Q=1\) for the Greenberger-Horne-Zeilinger (GHZ) state \(|\psi\rangle_{\text{GHZ}}=(|0\rangle^{\otimes N}+|1\rangle^{\otimes N})/\sqrt{2}\). In order to explore the entanglement capabilities of parameterized pulse circuits, we constructed two types of two-qubit parameterized pulse circuits, as shown in Fig. 4. We then sampled and calculated the entanglement capabilities of the corresponding circuits. In particular, we use Hamiltonian tomography of the CR to investigate the impact of phase and duration variations on the IX, IZ, and IY components, which alters the proportion of the ZX term and directly affects the entanglement power (Bradner et al., 2016; Bradner et al., 2016). Figure 5. Proposed pulse-level design spaces, marked as Pulse IDs 1, 3, and 5, and the corresponding design spaces with fixed CR amplitude, marked as Pulse IDs 2, 4, and 6. Randomly generated pulses with the same number of parameters and the same number of CR pulses as the proposed design spaces are introduced as Pulse IDs 7, 8, 9, 10, 11, and 12; i.e., Pulse ID 7 has the same number of parameters and CR pulses as Pulse ID 1. Figure 6. Q-measure results from 500 samples of 2QBlockpulse and 2QBlockpulse_fixamp. Thus, it is necessary to analyze the effect of each parameter in the CR. As a result, we measured the entanglement and expressivity capabilities while fixing the amplitude, angle, and duration, respectively, to observe the effects of the corresponding parameters. In Table 2, we introduce two pulse models with the structure shown in Fig. 4 and fix each type of parameter sequentially. We also include two gate models as baselines: one is Universal2QGateCircuit, a quantum circuit composed of 18 gates from CNOT, Ry, and Rz that is the minimal gate-based circuit simulating an arbitrary two-qubit unitary operator up to a global phase (Zhu et al., 2017). The other gate model consists of four Rx gates and one CNOT gate, where two Rx gates precede the CNOT on the first and second qubit, respectively, and the other two Rx gates follow the CNOT. From Table 2, fixing the amplitude (2QDressedpulse_fixamp and 2QBlockpulse_fixamp) resulted in a significant improvement in the entanglement capability and a slight improvement in expressivity, while fixing the angle (2QDressedpulse_fixang and 2QBlockpulse_fixang) resulted in a small improvement in expressivity but did not significantly affect the entanglement capability. Fixing the duration (2QDressedpulse_fixduration and 2QBlockpulse_fixduration) resulted in a negative impact on all metrics. Overall, Universal2QGateCircuit had the strongest entanglement capability, while 2QBlockpulse_fixamp had the best expressivity.
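As a concrete reference for how the two sampling-based criteria are estimated, the sketch below implements the expressivity of Eq. (4) and the entangling capability of Eqs. (8)-(9). This is our own illustrative helper code: it samples a generic gate-level ansatz with Qiskit's statevector tools for brevity, whereas the experiments above sample pulse schedules through Qiskit-Dynamics, and the sample size and bin count are arbitrary choices.

```python
# Illustrative estimators for expressivity (Eq. 4) and entangling capability (Eqs. 8-9).
# Gate-level statevectors are used here for simplicity; the paper's experiments obtain
# the sampled states from pulse schedules via Qiskit-Dynamics.
import numpy as np
from qiskit.circuit.library import EfficientSU2
from qiskit.quantum_info import Statevector, partial_trace, state_fidelity

def meyer_wallach_q(state: Statevector) -> float:
    """Eq. (9): Q = 2 * (1 - (1/n) * sum_k Tr[rho_k^2])."""
    n = state.num_qubits
    purities = []
    for k in range(n):
        rho_k = partial_trace(state, [q for q in range(n) if q != k]).data
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - float(np.mean(purities)))

def sample_states(ansatz, n_samples=1000, rng=np.random.default_rng(0)):
    """Draw output states for uniformly random parameter vectors."""
    return [
        Statevector(ansatz.assign_parameters(rng.uniform(0, 2 * np.pi, ansatz.num_parameters)))
        for _ in range(n_samples)
    ]

def expressivity(states, n_bins=50) -> float:
    """Eq. (4): KL divergence between the pairwise-fidelity histogram and the Haar PDF."""
    dim = states[0].dim
    fids = [state_fidelity(a, b) for a, b in zip(states[0::2], states[1::2])]
    hist, edges = np.histogram(fids, bins=n_bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    haar = (dim - 1) * (1.0 - centers) ** (dim - 2)  # Eq. (6)
    mask = hist > 0
    return float(np.sum(hist[mask] * np.log(hist[mask] / haar[mask])) / n_bins)

ansatz = EfficientSU2(2, reps=1)  # stand-in for a two-qubit PPC/PQC template
states = sample_states(ansatz)
print("Expr:", expressivity(states))
print("Ent :", float(np.mean([meyer_wallach_q(s) for s in states])))
```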
Pulse models generally exhibit better expressivity and have shorter sequence durations compared to gate models. Although the pulse-level circuit exhibits a relatively low entanglement capability than the gate-level circuit, it still possesses points with Q values approaching 1 as shown in the Fig. 6. This characteristic is advantageous for variational quantum algorithms (VQAs) tasks, as it allows us to search for and identify points with Q values close to 1 during the training and optimization processes. ### Effective Parameter Dimension Some parameters of PQC can be eliminated without affecting its expressivity. Effective parameter dimension (EPD) is a measure that quantifies the number of independent parameters of quantum states generated by PQC (Kang et al., 2016). It reflects how many independent directions the parameters can explore over the Hilbert space. Figure 7. Illustration of the change in the expressivity of various pulse sets as the number of qubits increases. The main plot shows the overall trend, with a zoomed-in section highlighting the results of six proposed pulse-level design spaces. The figure demonstrates that expressivity increases with an increasing number of qubits until a small decrease in expressivity is observed when the number of qubits reaches four, which may be due to insufficient circuit depth. Additionally, the six subplots show the change in the expressivity of each of the pulse-level design spaces as the number of layers increases for four qubits. EPD can be calculated through Quantum Fisher information (QFI) (Steiner, 2007), which characterizes the sensibility of the state generated by PQC to its parameter changes (Birshman et al., 2016). Concretely, the QFI matrix is \[\mathcal{F}_{ij}=\mathrm{Re}\left(\left\langle\partial_{i}\psi(\mathbf{\theta}) \left|\,\partial_{j}\psi(\mathbf{\theta})\right.\right\rangle-\left\langle\partial _{i}\psi(\mathbf{\theta})\left|\,\psi(\mathbf{\theta})\right.\right\rangle\left\langle \psi(\mathbf{\theta})\left|\,\partial_{j}\psi(\mathbf{\theta})\right.\right\rangle\right) \tag{10}\] Here the gradient of parameters is calculated by finite difference: \[\partial_{i}\psi(\mathbf{\theta})\approx\left[\psi\left(\mathbf{\theta}+\epsilon \epsilon_{i}\right)-\psi\left(\mathbf{\theta}-\epsilon\epsilon_{i}\right)\right]/2\epsilon. \tag{11}\] EPD is the rank of QFI matrix: \(d_{\mathrm{eff}}=\mathrm{rank}\,\mathcal{F}(\mathbf{\theta})\), i.e., the number of linearly independent parameters over the design space. The experiments presented in Table 2 suggests that all parameters of the pulse-level model proposed in the Fig. 4 are valid. Compared to two-qubit gate models, the parameter dimensions at the gate level are more susceptible to redundancy, with Universal2QGateCircuit exhibiting six redundant parameters. Based on our analysis in the previous section, we conclude that the richness of EPDs is a significant contributing factor to the superior expressivity of the PPC model over gate-based PQC within a shorter duration. ## 4. Proposed Pulse-Level Design Space ### Insights of Proposed Design Space In this section, we introduce three different PPC model designs with a polynomial scaling of parameters. The first PPC template is the hardware-efficient pulse in Fig. 5 a, where we only apply two-qubit CR pulse on physically connected qubits. Hence, such a pulse template is efficient in the sense that requires least compiling resources to implement on the hardware. The second pulse template is a decay layer pulse in Fig. 5 b. 
It is designed to improve the trainability of the PPC: the hope is to gradually reduce the operating scope (involving fewer qubits) and hence alleviate the barren plateaus (BP) phenomenon (Steiner, 2007). Most studies on BP are conducted at the gate level, and similar research at the pulse level is minimally explored. The last pulse template is the block-dressed CR pulse in Fig. 5 c. We first dress the native two-qubit CR pulse with four single-qubit pulses on both sides to enhance expressivity. Then, we arrange such a dressed CR pulse as a block and loop through all the qubits. In general, one can always adopt neural architecture search (NAS) methods to construct an automatic workflow to build customized PPC models, but those approaches are usually too expensive to run on real quantum hardware. The proposed pulse templates in this study aim to make pulse-level research more accessible to the community. Figure 8. The main plot of this figure shows the entanglement capability of various pulse sets as the number of qubits increases. The zoomed-in section of the plot displays the results of the six pulse-level design spaces proposed in this study. The plot illustrates that as the number of qubits increases, the entanglement capability continues to improve. The six subplots depict the entanglement capability of each of the six pulse-level design spaces as the number of layers increases, with the qubit count fixed at four. The subplots demonstrate that with an increasing number of layers, the entanglement capability of each pulse-level design space also continues to increase. ### Evaluation and Analysis of Design Space In this section, we analyze the proposed pulse-level design spaces and summarize our observations and insights for each analyzed criterion. These analyses are performed with respect to all evaluation criteria and highlight general trends. Before presenting our observations and insights, we first conduct a detailed investigation using the criteria described in the previous section, in order to better understand the insights obtained from the subsequent presentation of the comprehensive trends. We include evaluations of expressivity, entanglement capability, and cost estimation for the proposed pulse-level design spaces. #### 4.2.1. Expressivity Analysis Firstly, we measure the ability and potential of the proposed pulse-level design spaces to explore the Hilbert space, adding randomly generated pulses without any design-space policy to the experiments as a comparison. Across a series of expressivity tests (see Fig. 7), the value of the K-L divergence drops gradually as the test depth (the number of qubits) increases, indicating that the expressivity of the pulse-level design spaces increases progressively. As the number of qubits exceeds 4, however, the value of the K-L divergence increases slightly, indicating that one layer of the pulse-level design space is incapable of exploring this portion of the Hilbert space. This is observed for all the proposed pulse-level design spaces we described. In contrast, the trend of expressivity with the number of qubits for pulses generated at random without a policy is erratic and unpredictable. Then, we fix the number of qubits at four and gradually increase the width of the test, that is, the number of layers of the PPC. In Fig. 7, for each of the proposed pulse-level design spaces, increasing the number of circuit layers significantly improves expressivity.
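For concreteness, the following is a minimal, illustrative construction of the dressed-CR building block that the templates above repeat (cf. Fig. 4 a) and Fig. 5). It is our own sketch rather than the calibrated pulses used in the experiments: only the amplitudes are exposed as parameters here for brevity (the full design space also parameterizes angle and duration), and the durations, sigma, beta, and qubit pair are placeholder values.

```python
# Illustrative dressed-CR block (assumed placeholder values, not calibrated pulses):
# single-qubit Drag pulses "dress" a cross-resonance GaussianSquare pulse that is
# played on the control channel of an assumed (0, 1) qubit pair.
from qiskit import pulse
from qiskit.circuit import Parameter
from qiskit.pulse import DriveChannel, ControlChannel
from qiskit.pulse.library import Drag, GaussianSquare

theta = [Parameter(f"theta_{i}") for i in range(5)]
d0, d1 = DriveChannel(0), DriveChannel(1)
u01 = ControlChannel(0)  # drive on qubit 0 at qubit 1's frequency (cross resonance)

with pulse.build(name="dressed_cr_block") as block:
    pulse.play(Drag(duration=160, amp=theta[0], sigma=40, beta=2.0), d0)
    pulse.play(Drag(duration=160, amp=theta[1], sigma=40, beta=2.0), d1)
    pulse.play(GaussianSquare(duration=512, amp=theta[2], sigma=64, width=256), u01)
    pulse.play(Drag(duration=160, amp=theta[3], sigma=40, beta=2.0), d0)
    pulse.play(Drag(duration=160, amp=theta[4], sigma=40, beta=2.0), d1)
```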
In addition, we find that the design space with a fixed CR amplitude is significantly more expressive than the corresponding design space without a fixed CR amplitude, and that among all design spaces, the design space with Pulse ID 2, which is the design space Dressed CR with a fixed CR amplitude, obtains the best performance in expressivity for a circuit with 1 layers and two, three, or four qubits, respectively. When there are two circuit layers, the hardware-efficient pulse with a Pulse ID 6 that fixes the amplitude of the CR gets the highest expressivity performance. When there are three circuit layers, the Decay-layer pulse with Pulse ID 4 that fixes the amplitude of CR has the best expressivity performance. #### 4.2.2. Entanglement Capability Analysis Secondly, we tested the entanglement capability of the suggested pulse-level design space to investigate and assess the strength of the proposed pulse-level design space's entanglement capability. Likewise, no design space-based policy is added to the experiment for comparison purposes. According to Fig. 8, the entanglement capability of all pulse-level design spaces grows progressively as the test depth (the number of qubits) rises. Meanwhile, entanglement capability with respect to the number of qubits for pulses generated at random without a policy is unpredictable. Then, we fix the number of qubits at four and increase the test's width progressively. From the six subplots, we can conclude that the progressive growth of the layer of PPC has a higher impact on the entanglement capability than the rise in qubits. Experiments on entanglement capability revealed that the design space with a fixed CR amplitude produces greater entanglement capability than the design space without a fixed CR amplitude. In the case of a single-layer circuit, the hardware-efficient pulse fixed CR amplitude with Pulse ID 2 can create the strongest entanglement capability. In the design space without fixed CR amplitude, the Decay-layer pulse with Pulse ID 3 provides the highest entanglement capability. When the number of circuit layers increases to two and three, the Decay-layer pulse with fixed CR amplitude and a Pulse ID 4 provides the best entanglement performance. #### 4.2.3. Estimated Cost Analysis For each given pulse-level design space, we estimate the cost of implementation in Table 3 in terms of the number of qubits N and the number of layers L of the circuit. We assess the cost of implementation for each pulse-level design space, taking into account the number of parameters, effective parameter dimension, number of CR pulses, and pulse width. We see a linear relationship between the number of parameters, effective parameter dimension, and number of CR pulses for all design spaces and the number of qubits. In addition, the pulsed circuit depth of all design spaces is proportional to the number of qubits. ## 5. Application Benchmarking ### Benchmark Applications #### 5.1.1. Quantum Chemistry Quantum chemistry studies the electronic structure and reaction characteristics of molecules using the ideas and tools of quantum mechanics to describe and calculate these properties. Quantum chemistry seeks to answer the following challenge in particular: how to determine the ground state energy, electronic structure, and features such as energy barriers for a given molecule. 
Using the proposed pulse-level design spaces, we intend to implement the variational quantum eigensolver (VQE) to solve the challenge of determining the ground state energy of molecules in quantum chemistry as part of our application benchmarking. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Pulse ID** & **\# of Params** & **EPD** & **\# of CR Pulses** & **Pulse Circuit Depth** \\ \hline 1 & (5N-3)L & (5N-3)L & (N-1)L & NL \\ \hline 2 & 2(2N-1)L & 2(2N-1)L & (N-1)L & NL \\ \hline 3 & (6N-3)L & (6N-3)L & (N-1)L & 4L \\ \hline 4 & (5N-2)L & (5N-2)L & (N-1)L & 4L \\ \hline 5 & (9N-7)L & (9N-7)L & (N-1)L & (2N-1)L \\ \hline 6 & (8N-6)L & (8N-6)L & (N-1)L & (2N-1)L \\ \hline \end{tabular} \end{table} Table 3. The estimated costs associated with the proposed pulse-level design spaces, including the number of parameters, effective parameter dimension, two-qubit operations, and circuit depth, expressed in terms of two variables: N (number of qubits) and L (number of layers). This process can be considered as the solution of an eigenvalue problem for a Hamiltonian-like matrix, where the Hamiltonian describes the energy of particle interactions in a molecule and the eigenvalue is the ground state energy. The number of qubits, the number of Pauli strings, the measurement bases, and the entanglement theoretically required by the ground state of the task are stated for both molecular problems in Table 4. #### 5.1.2. Quantum Finance Quantum finance refers to the multidisciplinary application of quantum computing technology to the financial sector. Using quantum computers, it aims to handle a variety of financial problems, including but not limited to risk management, portfolio optimization, and the pricing of financial derivatives. For our application benchmarking, we have selected the problem of investment optimization. Investment optimization is the construction of the optimal portfolio to achieve a particular purpose, given an underlying investment and a set of constraints. The risk factor is set to 0.5. We are provided with two-asset and four-asset settings, as well as randomly produced 'historical data' for each, and use this information to form a quadratic problem; we then map the variables of the quadratic problem to qubits so that the problem translates to an Ising Hamiltonian. Our objective is to run VQE using the pulse-level design spaces in order to obtain the optimal portfolio model. The information about the quantum finance problems is also provided in Table 4. ### Parameters Constraint Generator Parameterized pulse circuits operate by specifying a sequence of pulses that must adhere to hardware-imposed limitations. First, the pulse amplitude must be expressed as a fraction of the maximum output voltage of the arbitrary waveform generator (AWG) and must therefore be constrained to the interval [-1,1] (Han, 2018). Further experimentation by us has shown that the amplitude does not need to occupy the entire [-1,1] interval to achieve the desired oscillation period on the quantum state, and that the range may differ slightly across different hardware backends. The difference between the overall minimum and maximum values is typically around 0.35. For example, in Fig.
10, the amplitude range is [0.1, 0.4] on _ibmq_guadalupe_; thus, the training cost can be greatly reduced by selecting a suitable amplitude range instead of [-1,1]. Second, the pulse angle sets the orientation of the rotation axis in the plane and is therefore confined to the interval [0,2\(\pi\)]. Third, the duration of a pulse, expressed in units of dt, must be a multiple of 16 to be loaded into the AWG, and the duration must be kept long enough that the width of the flat top, excluding the rise-fall edges, does not become negative. To impose these constraints, we have introduced a parameter constraint generator to restrict the distribution range of all the parameters. For the amplitude, we benchmarked multiple backends and provide a lookup table (LUT) to find the amplitude range corresponding to a particular backend. For the angle, we set this parameter to be within the range [0,2\(\pi\)]. For the duration, we restrict this parameter to the range [256,1024]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Applications** & **Benchmarks** & **\# of Qubits** & **\# of Pauli Strings** & **Measurement Bases** & **Entanglement Q** \\ \hline \multirow{2}{*}{Quantum Chemistry} & \(H_{2}\) & 2 & 5 & X, Y, Z & 0.04914 \\ \cline{2-6} & \(LiH\) & 4 & 100 & X, Y, Z & 0.00072 \\ \hline \multirow{2}{*}{Quantum Finance} & 2 Assets & 2 & 3 & Z & 0 \\ \cline{2-6} & 4 Assets & 4 & 10 & Z & 0.00097 \\ \hline \end{tabular} \end{table} Table 4. A detailed characterization of the application benchmarks in terms of the number of qubits, the number of Pauli strings, the measurement bases, and the entanglement required to find the ground state of the problem. Figure 9. The application benchmarks of the proposed pulse-level design spaces, compared to randomly generated PQCs and two gate-level PQCs that are verified to have good performance. a) shows the results for quantum chemistry applications, with tasks involving the electronic structure of \(H_{2}\) and \(LiH\) molecules; the bond lengths default to 0.735Å and 1.5Å, respectively. b) shows the results for quantum finance applications, with tasks involving portfolio optimization for 2 and 4 assets. ### Evaluation and Analysis All the application benchmarks were conducted on Qiskit-Dynamics and Qiskit-Aer. Fig. 9 depicts a comparison of task performance between the proposed pulse-level design spaces, pulses generated at random, and the gate-based approach. For the gate-based models, we chose TwoLocal, which consists of Ry and CZ gates, and RealAmplitude, which consists of Ry and CX gates. There is no significant difference between the proposed design spaces and the randomly produced pulses for the quantum finance tasks. We do observe design spaces with a fixed CR amplitude that perform better in terms of expressivity and entanglement capability but yield poorer application results. This phenomenon is due to the factor discussed in the previous section: our tested duration is mapped discretely to the unit 'dt' from hardware to software, so the power of the 'duration' parameter is limited. Thus, the criteria can be effective for guiding circuit design the majority of the time, but since the proposed pulse-level design spaces must be evaluated further, application benchmarking is essential. In addition, the gate-based model generally obtains better performance than the pulse-based model in these tasks.
As we evaluate the quantum chemistry tasks, we find in the \(H_{2}\) task, the random-generated pulse with few numbers of parameters (pulse ID 6 and 7, within the same number of parameters of Hardware-efficient pulse) performs badly. As for the _LiH_ task, the proposed pulse-level design space (Pulse ID from 1 - 6) achieves way better results than random-generated pulses, and also slightly better than the gate-based model. The reference ground state energy is -8.9407H, and the estimated energy of Pulse ID 1, 3, and 5 are -8.7766H, -8.8946H, and -8.8114H, respectively, the 'TwoLocal' gate-based ansatz obtains -8.6442H, as for 'RealAmplitude' gate-based ansatz, the result is -8.5214H. In terms of the sequence duration of the circuit, we have 979dt, 1835dt, and 1954dt for Pulse ID 1, 3, and 5, whereas, 'TwoLocal' and 'RealAmplitude' with 24800dt and 5600dt, respectively. In conclusion, the proposed pulse-level design space performs up to 4.2% in terms of the accuracy of the selected quantum chemistry problem with 67.2% latency advantage in duration, and up to 96% shortening in terms of duration with 1.5% accuracy advantage compared to the selected gate-based design. The different observations from the quantum finance task and the quantum chemistry task are illustrated in Table 4. The portfolio optimization problem with 2 assets and 4 assets are both too simple for VQEs, as they have a light number of Pauli strings and nearly no entanglement requirement by the ground state of the problem, and the measurement base is only on Z-axis. Thus, these problems can be even solved by random-generated pulses with good performance and lead to tiny differences that are hard to compare between models. In contrast, quantum chemistry problems have much more Pauli strings, stronger entanglement requirements, and more complex measurement bases than on all of X, Y, and Z, and thus, we observe the expectations in the quantum chemistry benchmark. ## 6. Discussion The ability to manipulate quantum states of qubits at the pulse level holds great promise, as it provides fine-grained control over the qubits. Parameterized quantum pulse enables researchers to explore new design space and generate more customized and complex quantum operations by leveraging the specific physical properties of the quantum processors. For instance, we have shown that a parameterized pulse template can generate highly entangled states with a high degree of expressivity, indicating the potential of designing parameterized pulse as the building block for the ansatz of variational quantum algorithms. Furthermore, the pulse naturally allows for flexible tuning with sufficient parameters, allowing for the exploration of Hilbert space fully [(7)]. In terms of EPD, the high EPD value demonstrated in our experiments also indicates the low redundancy of parameterized quantum pulse, ensuring efficient utilization of experimental resources. However, pulse-level control also has its limitations. One challenge is the complexity of designing pulse sequences for arbitrary quantum operations, as pulse-level control heavily relies on the physical implementation of the qubits. Consequently, designing parameterized quantum pulse requires a deep understanding of the underlying interactions and may require significant computational resources. Another challenge is the impact of noise and decoherence on the system. Basis gates can be fine-tuned for parameterized gates, but the calibration of arbitrary pulses is generally difficult. 
These effects can limit the effectiveness of pulse-level control, especially when dealing with large-scale quantum systems. The formalism of quantum pulses is still an incomplete field, making it challenging to design quantum protocols at the pulse level. Moreover, we have encoded classical data using a parameterized pulse encoder, which we initially believed had the potential to encode more data in a shorter amount of time. However, as shown in Fig. 11, in a simple experiment we found that the gate-based encoder produced superior results, even with a shallow circuit. Although encoding classical data with parameterized pulses remains intriguing, this observation is unfavorable, and further research is necessary. Figure 10. Dynamics of quantum states of an SQP with amplitude in the range of [0.1, 0.4] on _ibmq_guadalupe_. It may still be possible to gain some benefits from a parameterized pulse encoder; for example, time-series datasets could be encoded by mapping time information onto the duration parameter of a parameterized pulse. To overcome the limitations of quantum pulse applications, researchers are exploring new approaches that combine pulse-level and gate-level control. For example, in this paper, we designed multiple pulse templates from gate-inspired ansatzes with good performance. Such an approach retains the flexibility and fine-grained control of the pulse level while maintaining the simplicity and generality of gate-level protocols. Another approach is to use machine learning algorithms to optimize the pulse sequences for specific quantum tasks. This approach can reduce the complexity of programming the hardware and improve the overall performance of the quantum computation [8]. Overall, the advantages and limitations of parameterized quantum pulses highlight the need for continued research and development in this area to fully realize the potential of quantum computing. In this paper, we have evaluated parameterized quantum pulses from various perspectives, including expressivity, entanglement capability, effective parameter dimension, duration, and application-oriented benchmarking, which we hope will pave the way for future studies of quantum pulses. Figure 11. Binary classification of randomly generated datasets with 16 features. a) Encoding by 8 SQPs on two qubits. b) Encoding by 16 single-qubit rotation gates on two qubits.
2305.08264
MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling
We present MatSci-NLP, a natural language benchmark for evaluating the performance of natural language processing (NLP) models on materials science text. We construct the benchmark from publicly available materials science text data to encompass seven different NLP tasks, including conventional NLP tasks like named entity recognition and relation classification, as well as NLP tasks specific to materials science, such as synthesis action retrieval, which relates to creating synthesis procedures for materials. We study various BERT-based models pretrained on different scientific text corpora on MatSci-NLP to understand the impact of pretraining strategies on understanding materials science text. Given the scarcity of high-quality annotated data in the materials science domain, we perform our fine-tuning experiments with limited training data to encourage generalization across MatSci-NLP tasks. Our experiments in this low-resource training setting show that language models pretrained on scientific text outperform BERT trained on general text. MatBERT, a model pretrained specifically on materials science journals, generally performs best for most tasks. Moreover, we propose a unified text-to-schema approach for multitask learning on MatSci-NLP and compare its performance with traditional fine-tuning methods. In our analysis of different training methods, we find that our proposed text-to-schema methods inspired by question-answering consistently outperform single-task and multitask NLP fine-tuning methods. The code and datasets are publicly available at https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23.
Yu Song, Santiago Miret, Bang Liu
2023-05-14T22:01:24Z
http://arxiv.org/abs/2305.08264v1
MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling ###### Abstract We present MatSci-NLP, a natural language benchmark for evaluating the performance of natural language processing (NLP) models on materials science text. We construct the benchmark from publicly available materials science text data to encompass seven different NLP tasks, including conventional NLP tasks like named entity recognition and relation classification, as well as NLP tasks specific to materials science, such as synthesis action retrieval which relates to creating synthesis procedures for materials. We study various BERT-based models pretrained on different scientific text corpora on MatSci-NLP to understand the impact of pretraining strategies on understanding materials science text. Given the scarcity of high-quality annotated data in the materials science domain, we perform our fine-tuning experiments with limited training data to encourage the generalize across MatSci-NLP tasks. Our experiments in this low-resource training setting show that language models pretrained on scientific text outperform BERT trained on general text. MatBERT, a model pretrained specifically on materials science journals, generally performs best for most tasks. Moreover, we propose a unified text-to-schema for multitask learning on MatSci-NLP and compare its performance with traditional fine-tuning methods. In our analysis of different training methods, we find that our proposed text-to-schema methods inspired by question-answering consistently outperform single and multitask NLP fine-tuning methods. The code and datasets are publicly available1. Footnote 1: [https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23](https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23) ## 1 Introduction Materials science comprises an interdisciplinary scientific field that studies the behavior, properties and applications of matter that make up materials systems. As such, materials science often requires deep understanding of a diverse set of scientific disciplines to meaningfully further the state of the art. This interdisciplinary nature, along with the great technological impact of materials advances and growing research work at the intersection of machine learning and materials science (Miret et al., 2021; Choudhary et al., 2022), makes the challenge of developing and evaluating natural language processing (NLP) models on materials science text both interesting and exacting. The vast amount of materials science knowledge stored in textual format, such as journal articles, patents and technical reports, creates a tremendous opportunity to develop and build NLP tools to create and understand advanced materials. These tools could in turn enable faster discovery, synthesis and deployment of new materials into a wide variety of application, including clean energy, sustainable manufacturing and devices. Understanding, processing, and training language models for scientific text presents distinctive challenges that have given rise to the creation of specialized models and techniques that we review in Section 2. Additionally, evaluating models on scientific language understanding tasks, especially in materials science, often remains a laborious task given the shortness of high-quality annotated data and the lack of broad model benchmarks. 
As such, NLP research applied to materials science remains in the early stages with a plethora of ongoing research efforts focused on dataset creation, model training and domain specific applications. The broader goal of this work is to enable the development of pertinent language models that can be applied to further the discovery of new material systems, and thereby get a better sense of how well language models understand the properties and behavior of existing and new materials. As such, we propose MatSci-NLP, a benchmark of various NLP tasks spanning many applications in the materials science domain described in Section 3. We utilize this benchmark to analyze the performance of various BERT-based models for MatSci-NLP tasks under distinct textual input schemas described in Section 4. Concretely, through this work we make the following research contributions: * **MatSci-NLP Benchmark:** We construct the first broad benchmark for NLP in the materials science domain, spanning several different NLP tasks and materials applications. The benchmark contents are described in Section 3 with a general summary and data sources provided in Table 1. The processed datasets and code will be released after acceptance of the paper for reproducibility. * **Text-to-Schema Multitasking:** We develop a set of textual input schemas inspired by question-answering settings for fine-tuning language models. We analyze the models' performance on MatSci-NLP across those settings and conventional single and multitask fine-tuning methods. In conjunction with this analysis, we propose a new Task-Schema input format for joint multitask training that increases task performance for all fine-tuned language models. * **MatSci-NLP Analysis:** We analyze the performance of various BERT-based models pretrained on different scientific and non-scientific text corpora on the MatSci-NLP benchmark. This analysis help us better understand how different pretraining strategies affect downstream tasks and find that MatBERT Walker et al. (2021), a BERT model trained on materials science journals, generally performs best reinforcing the importance of curating high-quality pretraining corpora. We centered our MatSci-MLP analysis on exploring the following questions: * _How does in-domain pretraining of language models affect the downstream performance on MatSci-NLP tasks?_ We investigate the performance of various models pretrained on different kinds of domain-specific text including materials science, general science and general language (BERT Devlin et al. (2018)). We find that MatBERT generally performs best and that language models pretrained on diverse scientific texts outperform a general language BERT. Interestingly, SciBERT Beltagy et al. (2019) often outperforms materials science language models, such as MatSciBERT Gupta et al. (2022) and BatteryBERT Huang and Cole (2022). * _How do in-context data schema and multitasking affect the learning efficiency in low-resource training settings?_ We investigate how several input schemas shown in Figure 1 that contain different kinds of information affect various domain-specific language models and propose a new _Task-Schema_ method. Our experiments show that our proposed Task-Schema method mostly performs best across all models and that question-answering inspired schema outperform single task and multitask fine-tuning settings. ## 2 Background The advent of powerful NLP models has enabled the analysis and generation of text-based data across a variety of domains. BERT Devlin et al. 
(2018) was one of the first large-scale transformer-based models to substantially advance the state-of-the-art by training on large amounts of unlabeled text data in a self-supervised way. The pretraining procedure was followed by task-specific fine-tuning, leading to impressive results on a variety of NLP task, such as named entity recognition (NER), question and answering (QA), and relation classification Hakala and Pyysalo (2019); Qu et al. (2019); Wu and He (2019). A significant collection of large language models spanning millions to billions of parameters followed the success of BERT adopting a similar approach of pretraining on vast corpora of text with task-specific fine-tuning to push the state-of-the-art for in natural language processing and understanding Raffel et al. (2020); Brown et al. (2020); Scao et al. (2022). ### Scientific Language Models The success of large language models on general text motivated the development of domain-specific language models pretrained on custom text data, including text in the scientific domain: SciBERT Beltagy et al. (2019), ScholarBERT Hong et al. (2022) and Galactica Taylor et al. (2022) are pretrained on general corpus of scientific articles; BioBERT Lee et al. (2020), PubMedBERT Gu et al. (2021), BioMegatron Shin et al. (2020) and Sci-Five (Phan et al., 2021) are pretrained on various kinds of biomedical corpora; MatBERT (Walker et al., 2021), MatSciBERT (Gupta et al., 2022) are pretrained on materials science specific corpora; and BatteryBERT (Huang and Cole, 2022) is pretrained on a corpus focused on batteries. Concurrently, several domain-specific NLP benchmarks were established to assess language model performance on domain-specific tasks, such as QASPER (Dasigi et al., 2021) and BLURB (Gu et al., 2021) in the scientific domain, as well as PubMedQA (Jin et al., 2019), BioASQ (Balikas et al., 2015), and Biomedical Language Understanding Evaluation (BLUE) (Peng et al., 2019) in the biomedical domain. ### NLP in Materials Science The availability of openly accessible, high-quality corpora of materials science text data remains highly restricted in large part because data from peer-reviewed journals and scientific documents is usually subject to copyright restrictions, while open-domain data is often only available in difficult-to-process PDF formats (Olivetti et al., 2020; Kononova et al., 2021). Moreover, specialized scientific text, such as materials synthesis procedures containing chemical formulas and reaction notation, require advanced data mining techniques for effective processing (Kuniyoshi et al., 2020; Wang et al., 2022). Given the specificity, complexity, and diversity of specialized language in scientific text, effective extraction and processing remain an active area of research with the goal of building relevant and sizeable text corpora for pretraining scientific language models (Kononova et al., 2021). Nonetheless, materials science-specific language models, including MatBERT (Walker et al., 2021), MatSciBERT (Gupta et al., 2022), and BatteryBERT (Huang and Cole, 2022), have been trained on custom-built pretraining dataset curated by different academic research groups. The pretrained models and some of the associated fine-tuning data have been released to the public and have enabled further research, including this work. The nature of NLP research in materials science to date has also been highly fragmented with many research works focusing on distinct tasks motivated by a given application or methodology. 
Common ideas among many works include the prediction and construction of synthesis routes for a variety of materials (Mahbub et al., 2020; Karpovich et al., 2021; Kim et al., 2020), as well as the creation of novel materials for a given application (Huang and Cole, 2022; Georgescu et al., 2021; Jensen et al., 2021), both of which relate broader challenges in the field of materials science. ## 3 MatSci-NLP Benchmark Through the creation of MatSci-NLP, we aim to bring together some of the fragmented data across multiple research works for a wide-ranging materials science NLP benchmark. As described in Section 2, the availability of sizeable, high-quality and diverse datasets remain a major obstacle in applying modern NLP to advance materials science in meaningful ways. This is primarily driven by a high cost of data labeling and the heterogeneous nature of materials science. Given those challenges, we created MatSci-NLP by unifying various publicly available, high-quality, smaller-scale datasets to form a benchmark for fine-tuning and evaluating modern NLP models for materials science applications. MatSci-NLP consists of seven NLP tasks shown in Table 1, spanning a wide range of materials categories including fuel cells (Friedrich et al., 2020), glasses (Venugopal et al., 2021), inorganic materials (Weston et al., 2019; MatSciRE, 2022), superconductors (Yamaguchi et al., 2020), and synthesis procedures pertaining to various kinds of materials (Mysore et al., 2019; Wang et al., 2022). Some tasks in MatSci-NLP had multiple source components, meaning that the data was curated from multiple datasets (e.g. NER), while many were obtained from a single source dataset. The data in MatSci-NLP adheres to a standard JSON-based data format with each of the samples containing relevant text, task definitions, and annotations. These can in turn be refactored into different input schemas, such as the ones shown in Figure 1 consisting of 1) _Input_: primary text jointly with task descriptions and instructions, and 2) _Output_: query and label, which we perform in our text-to-schema modeling described in Section 4. Next, we describe the tasks in MatSci-NLP in greater detail: * **Named Entity Recognition (NER):** The NER task requires models to extract summary-level information from materials science text and recognize entities including materials, descriptors, material properties, and applications amongst others. The NER task predicts the best entity type label for a given text span \(s_{i}\) with a non-entity span containing a "null" label. MatSci-NLP contains NER task data adapted from Weston et al. (2019); Friedrich et al. (2020); Mysore et al. (2019); Yamaguchi et al. (2020). * **Relation Classification:** In the relation classification task, the model predicts the most relevant relation type for a given span pair \((s_{i},s_{j})\). MatSci-NLP contains relation classification task data adapted from Mysore et al. (2019); Yamaguchi et al. (2020); MatSciRE (2022). * **Event Argument Extraction:** The event argument extraction task involves extracting event arguments and relevant argument roles. As there may be more than a single event for a given text, we specify event triggers and require the language model to extract corresponding arguments and their roles. MatSci-NLP contains event argument extraction task data adapted from Mysore et al. (2019); Yamaguchi et al. (2020). * **Paragraph Classification:** In the paragraph classification task adapted from Venugopal et al. 
(2021), the model determines whether a given paragraph pertains to glass science. * **Synthesis Action Retrieval (SAR):** SAR is a materials science domain-specific task that defines eight action terms that unambiguously identify a type of synthesis action to describe a synthesis procedure. MatSci-NLP adapts SAR data from Wang et al. (2022) to ask language models to classify word tokens into pre-defined action categories. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Task**} & **Size** & **Meta-Dataset** \\ & (\# Samples) & **Components** \\ \hline \hline Named Entity & & \\ Recognition & 112,191 & 4 \\ \hline Relation & & \\ Classification & 25,674 & 3 \\ \hline Event Argument & & \\ Extraction & 6,566 & 2 \\ \hline Paragraph & & \\ Classification & 1,500 & 1 \\ \hline Synthesis & & \\ Action Retrieval & 5,547 & 1 \\ \hline Sentence & & \\ Classification & 9,466 & 1 \\ \hline Slot Filling & 8,253 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Collection of NLP tasks in the meta-dataset of the MatSci-NLP Benchmark drawn from Weston et al. (2019); Friedrich et al. (2020); Mysore et al. (2019); Yamaguchi et al. (2020); Venugopal et al. (2021); Wang et al. (2022); MatSciRE (2022). Figure 1: Example of different question-answering inspired textual input schemas (Task-Schema, Potential Choices, Example) applied on MatSci-NLP. The input of the language model includes the shared text (green) along with relevant task details (blue for NER and orange for event extraction). The shared text can contain relevant information for multiple tasks and be part of the language model input multiple times. * **Sentence Classification:** In the sentence classification task, models identify sentences that describe relevant experimental facts based on data adapted from Friedrich et al. (2020). * **Slot Filling:** In the slot-filling task, models extract slot fillers from particular sentences based on a predefined set of semantically meaningful entities. In the task data adapted from Friedrich et al. (2020), each sentence describes a single experiment frame for which the model predicts the slots in that frame. The tasks contained in MatSci-NLP were selected based on publicly available, high-quality annotated materials science textual data, as well as their relevance to applying NLP tools to materials science. Conventional NLP tasks (NER, Relation Classification, Event Argument Extraction, Paragraph Classification, Sentence Classification) enable materials science researchers to better process and understand relevant textual data. Domain specific tasks (SAR, Slot Filling) enable materials science research to solve concrete challenges, such as finding materials synthesis procedures and real-world experimental planning. In the future, we aim to augment to current set of tasks with additional data and introduce novel tasks that address materials science specific challenges with NLP tools. ## 4 Unified Text-to-Schema Language Modeling As shown in Figure 1, a given piece of text can include multiple labels across different tasks. Given this multitask nature of the MatSci-NLP benchmark, we propose a new and unified _Task-Schema_ multitask modeling method illustrated in Figure 2 that covers all the tasks in the MatSci-NLP dataset. Our approach centers on a unified text-to-schema modeling approach that can predict multiple tasks simultaneously through a unified format. The underlying language model architecture is made up of modular components, including a domain-specific encoder model (e.g. 
MatBERT, MatSciBERT, SciBERT), and a generic transformer-based decoder, each of which can be easily exchanged with different pretrained domain-specific NLP models. We fine-tune these pretrained language models and the decoder with the collected tasks in MatSci-NLP using the procedure described in Section 4.3. The unified text-to-schema provides a more structured format for training and evaluating language model outputs compared to seq2seq and text-to-text approaches (Raffel et al., 2020; Luong et al., 2015). This is particularly helpful for the tasks in MatSci-NLP given that many tasks can be reformulated as classification problems. NER and Slot Filling, for example, are classifications at the token level, while event argument extraction entails the classification of the roles of certain arguments. Without a predefined schema, the model relies entirely on unstructured natural language to provide the answer in a seq2seq manner, which significantly increases the complexity of the task and also makes it harder to evaluate performance. The structure imposed by the text-to-schema method also simplifies complex tasks, such as event extraction, by enabling the language model to leverage the structure of the schema to predict the correct answer. We utilize the structure of the schema in decoding and evaluating the output of the language models, as described further in Section 4.3. Moreover, our unified text-to-schema approach alleviates error propagation commonly found in multitask scenarios (Van Nguyen et al., 2022; Lu et al., 2021), enables knowledge sharing across multiple tasks and encourages the fine-tuned language model to generalize across a broader set of text-based instruction scenarios. This is supported by our results in Section 5.2 showing text-to-schema outperforming conventional methods. ### Language Model Formulation The general purpose of our model is to achieve multitask learning through a mapping function (\(f\)) between input (\(x\)), output (\(y\)), and schema (\(s\)), i.e., \(f(x,s)=y\). Due to the multitasking nature of our setting, both inputs and outputs can originate from \(n\) different tasks, i.e., \(x=[x_{t_{1}},x_{t_{2}},\ldots,x_{t_{n}}]\) and \(y=[y_{t_{1}},y_{t_{2}},\ldots,y_{t_{n}}]\), all of which fit under a common schema (\(s\)). Given the presence of domain-specific materials science language, our model architecture includes a domain-specific BERT encoder and a transformer decoder. All BERT encoders and transformer decoders share the same general architecture, which relies on a self-attention mechanism: given an input sequence of length \(N\), we compute a set of attention scores, \(A=\mathrm{softmax}(QK^{\top}/\sqrt{d_{k}})\). Next, we compute the weighted sum of the value vectors, \(O=AV\), where \(Q\), \(K\), and \(V\) are the query, key, and value matrices, and \(d_{k}\) is the dimensionality of the key vectors. Additionally, the transformer-based decoder differs from the domain-specific encoder by: 1) applying masking based on the applied schema to ensure that it does not attend to future positions in the output sequence; 2) applying both self-attention and encoder-decoder attention to compute attention scores that weigh the importance of different parts of the output sequence and input sequence. A minimal sketch of the basic attention computation is given below. 
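This PyTorch sketch is illustrative only; the tensor shapes and the optional mask argument are assumptions rather than the exact implementation used in this work.

```python
import torch

def scaled_dot_product_attention(Q, K, V, mask=None):
    """A = softmax(Q K^T / sqrt(d_k)); O = A V (Section 4.1)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        # decoder self-attention: block future positions according to the schema
        scores = scores.masked_fill(~mask, float("-inf"))
    A = torch.softmax(scores, dim=-1)
    return A @ V
```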
The output of the self-attention mechanism (\(O_{1}\)) and the output of the encoder-decoder attention mechanism (\(O_{2}\)) are concatenated and linearly transformed to obtain a new hidden state, \(H=\tanh(W_{o}[O_{1};O_{2}]+b_{o})\), with \(W_{o}\) and \(b_{o}\) being the weights and biases, respectively. The model then applies a _softmax_ to \(H\) to generate the next element in the output sequence, \(P=\mathrm{softmax}(W_{p}H+b_{p})\), where \(P\) is a probability distribution over the output vocabulary. ### Text-To-Schema Modeling As shown in Figure 1, our schema structures the text data based on four general components: text, description, instruction options, and the predefined answer schema. * **Text** specifies raw text from the literature that is given as input to the language model. * **Description** describes the task for a given text according to a predefined schema containing the task name and the task arguments. * **Instruction Options** contains the core explanation related to the task with emphasis on three different types: 1) Potential choices of answers; 2) Example of an input/output pair corresponding to the task; 3) Task-Schema: our predefined answer schema illustrated in Figure 2. * **Answer** describes the correct label of each task formatted as a predefined answer schema that can be automatically generated based on the data structure of the task. ### Language Decoding & Evaluation Evaluating the performance of the language model on MatSci-NLP requires determining if the text generated by the decoder is valid and meaningful in the context of a given task. To ensure consistency in evaluation, we apply a constrained decoding procedure consisting of two steps: 1) Filtering out invalid answers through the predefined answer schema shown in Figure 2 based on the structure of the model's output; 2) Matching the model's prediction with the most similar valid class given by the annotation for the particular task. For example, if for the NER task shown in Figure 1 the model's predicted token is "BaCl2 2H2O materials", it will be matched with the NER label of "material", which is then used as the final prediction for computing losses and evaluating performance. This approach essentially reformulates each task as a classification problem where the classes are provided based on the labels from the tasks in MatSci-NLP. We then apply a cross-entropy loss for model fine-tuning based on the matched label from the model output. The matching procedure simplifies the language modeling challenge by not requiring an exact match of the predicted tokens with the task labels, which in turn leads to a more comprehensible signal in the fine-tuning loss function (a minimal sketch of this matching step is given below). ## 5 Evaluation and Results Our analysis focuses on the questions outlined in Section 1: 1) Studying the effectiveness of domain-specific language models as encoders, and 2) Analyzing the effect of different input schemas in resolving MatSci-NLP tasks. Concretely, we study the performance of the language models and language schema in a _low resource_ setting where we perform fine-tuning on different pretrained BERT models with limited data from the MatSci-NLP benchmark. Figure 2: Unified text-to-schema method for MatSci-NLP text understanding applied across the seven tasks. The language model includes a domain-specific encoder, which can be exchanged in a modular manner, as well as a general language pretrained transformer decoder. 
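As a concrete illustration of the constrained decoding step described in Section 4.3, the sketch below matches a free-form prediction to the closest valid label before the cross-entropy loss is computed; the character-level similarity measure and the label list are illustrative assumptions, not the exact implementation.

```python
from difflib import SequenceMatcher

def match_to_valid_label(prediction: str, valid_labels: list) -> str:
    """Map a free-form model prediction onto the most similar valid class label."""
    scores = [SequenceMatcher(None, prediction.lower(), label.lower()).ratio()
              for label in valid_labels]
    return valid_labels[scores.index(max(scores))]

ner_labels = ["material", "descriptor", "property", "application", "null"]
print(match_to_valid_label("BaCl2 2H2O materials", ner_labels))  # -> "material"
```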
This low-resource setting makes the learning problem harder given that the model has to generalize on little amount of data. Moreover, this setting approximates model training with very limited annotated data, which is commonly found in materials science as discussed in Section 2. In our experiments, we split the data in MatSci-NLP into 1% training subset and a 99% testing subset for evaluation. None of the evaluated encoder models were exposed to the fine-tuning data in advance of our experiments and therefore have to rely on the knowledge acquired during their respective pretraining processes. We evaluate the results of our experiments using micro-F1 and macro-F1 scores of the language model predictions on the test split of the MatSci-NLP that were not exposed during fine-tuning. How does in-domain pretraining of language models affect the downstream performance on MatSci-NLP tasks? (Q1) Based on the results shown in Table 2, we can gather the following insights: _First, domain-specific pretraining affects model performance._ We perform fine-tuning on various models pretrained on domain-specific corpora in a low-resource setting and observe that: i) MatBert, which was pretrained on textual data from materials science journals, generally performs best for most tasks in the MatSci-NLP benchmark with SciBERT generally performing second best. The high performance of MatBERT suggests that materials science specific pretraining does help the language models acquire relevant materials science knowledge. Yet, the underperformance of MatSciBERT compared to MatBERT and SciBERT indicates that the curation of pretraining data does significantly affect performance. ii) The importance of the pretraining corpus is further reinforced by the difference in performance between SciBERT and ScholarBERT, both of which were trained on corpora of general scientific text, but show vastly different results. In fact, ScholarBERT underperforms all other models, including the general language BERT, for all tasks except event argument extraction where ScholarBERT performs best compared to all other models. iii) The fact that most scientific BERT models outperform BERT pretrained on general language suggests that pretraining on high-quality scientific text is beneficial for resolving tasks involving materials science text and potentially scientific texts from other domains. This notion of enhanced performance on MatSci-NLP when pretraining on scientific text is further reinforced by the performance of BioBERT by Wada et al. (2020). BioBERT outperforms BERT on most tasks even though it was trained on text from the biomedical domain that has minor overlap with the materials science domain. This strongly indicates that scientific language, regardless of the domain, has a significant distribution shift from general language that is used to pretrain common language models. \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \multirow{2}{*}{**NLP Model**} & **Named Entity** & **Relation** & **Event Argument** & **Paragraph** & **Synthesis** & **Sentence** & **Slot** & **Overall** \\ & **Recognition** & **Classification** & **Extraction** & **Classification** & **Action Retrieval** & **Classification** & **Filling** & **(All Tasks)** \\ \hline MatSciBERT & 0.707,0.028 & 0.791\({}_{\pm 0.008}\) & 0.436\({}_{\pm 0.006}\) & 0.719\({}_{\pm 0.116}\) & 0.692\({}_{\pm 0.179}\) & 0.914\({}_{\pm 0.008}\) & 0.436\({}_{\pm 0.12}\) & 0.671\({}_{\pm 0.000}\) \\ Gupta et al. 
(2022) & 0.470,0.029 & 0.507\({}_{\pm 0.017}\) & 0.251\({}_{\pm 0.075}\) & 0.623\({}_{\pm 0.183}\) & 0.481\({}_{\pm 0.254}\) & 0.660\({}_{\pm 0.009}\) & 0.194\({}_{\pm 0.002}\) & 0.436\({}_{\pm 0.002}\) \\ \hline MatBERT & 0.875,0.018 & 0.814\({}_{\pm 0.011}\) & 0.451\({}_{\pm 0.001}\) & 0.756\({}_{\pm 0.007}\) & 0.717\({}_{\pm 0.004}\) & 0.590\({}_{\pm 0.009}\) & 0.548\({}_{\pm 0.008}\) & 0.722\({}_{\pm 0.002}\) \\ Walker et al. (2021) & 0.630,0.077 & 0.513\({}_{\pm 0.183}\) & 0.288\({}_{\pm 0.006}\) & 0.691\({}_{\pm 0.188}\) & 0.549\({}_{\pm 0.001}\) & 0.614\({}_{\pm 0.134}\) & 0.273\({}_{\pm 0.001}\) & 0.517\({}_{\pm 0.001}\) \\ \hline BatteryBERT & 0.786,0.113 & 0.801\({}_{\pm 0.003}\) & 0.457\({}_{\pm 0.004}\) & 0.633\({}_{\pm 0.075}\) & 0.614\({}_{\pm 0.128}\) & 0.912\({}_{\pm 0.015}\) & 0.520\({}_{\pm 0.037}\) & 0.663\({}_{\pm 0.038}\) \\ Huang and Cole (2022) & 0.472\({}_{\pm 0.103}\) & 0.466\({}_{\pm 0.111}\) & 0.277\({}_{\pm 0.004}\) & 0.610\({}_{\pm 0.004}\) & 0.419\({}_{\pm 0.109}\) & 0.684\({}_{\pm 0.005}\) & 0.224\({}_{\pm 0.007}\) & 0.456\({}_{\pm 0.004}\) \\ \hline SciBERT & 0.734,0.079 & 0.819\({}_{\pm 0.007}\) & 0.451\({}_{\pm 0.007}\) & 0.696\({}_{\pm 0.004}\) & 0.701\({}_{\pm 0.138}\) & 0.911\({}_{\pm 0.007}\) & 0.481\({}_{\pm 0.144}\) & 0.685\({}_{\pm 0.006}\) \\ Beltay et al. (2019) & 0.497\({}_{\pm 0.009}\) & **0.545**\({}_{\pm 0.103}\) & 0.276\({}_{\pm 0.000}\) & 0.546\({}_{\pm 0.023}\) & 0.516\({}_{\pm 0.271}\) & 0.617\({}_{\pm 0.143}\) & 0.224\({}_{\pm 0.010}\) & 0.460\({}_{\pm 0.004}\) \\ \hline ScholarBERT & 0.168,0.025 & 0.428,0.143 & 0.489\({}_{\pm 0.003}\) & 0.663\({}_{\pm 0.002}\) & 0.322\({}_{\pm 0.200}\) & 0.906\({}_{\pm 0.007}\) & 0.296\({}_{\pm 0.005}\) & 0.468\({}_{\pm 0.002}\) \\ Hong et al. (2022) & 0.101,0.043 & 0.274\({}_{\pm 0.104}\) & 0.3566\({}_{\pm 0.003}\) & 0.433\({}_{\pm 0.127}\) & 0.178\({}_{\pm 0.015}\) & 0.478\({}_{\pm 0.008}\) & 0.190\({}_{\pm 0.004}\) & 0.476\({}_{\pm 0.003}\) \\ \hline BioBERT & 0.715,0.031 & 0.797\({}_{\pm 0.002}\) & 0.488\({}_{\pm 0.006}\) & 0.675\({}_{\pm 0.144}\) & 0.647\({}_{\pm 0.140}\) & 0.915\({}_{\pm 0.001}\) & 0.452\({}_{\pm 0.114}\) & 0.670\({}_{\pm 0.006}\) \\ Wada et al. (2020) & 0.459\({}_{\pm 0.035}\) & 0.465\({}_{\pm 0.134}\) & 0.274\({}_{\pm 0.009}\) & 0.578\({}_{\pm 0.132}\) & 0.446\({}_{\pm 0.211}\) & 0.686\({}_{\pm 0.008}\) & 0.191\({}_{\pm 0.045}\) & 0.442\({}_{\pm 0.057}\) \\ \hline BERT & 0.657\({}_{\pm 0.077}\) & 0.782\({}_{\pm 0.066}\) & 0.418\({}_{\pm 0.053}\) & 0.665\({}_{\pm 0.037}\) & 0.656\({}_{\pm 0.009}\) & 0.910\({}_{\pm 0.017}\) & 0.520\({}_{\pm 0.019}\) & 0.658\({}_{\pm 0.000}\) \\ Devlin et al. (2018) & 0.461\({}_{\pm 0.058}\) & 0.494\({}_{\pm 0.001}\) & 0.225\({}_{\pm 0.001}\) & 0.532\({}_{\pm 0.194}\) & 0.515\({}_{\pm 0.007}\) & 0.633\({}_{\pm 0.133}\) & 0.257\({}_{\pm 0.022}\) & 0.439\({}_{\pm 0.021}\) \\ \hline \end _Second, imbalanced datasets in MatSci-NLP skew performance metrics:_ We can see from Table 2 that the micro-F1 scores are significantly higher than the macro-f1 across all tasks. This indicates that the datasets used in the MatSci-NLP are consistently imbalanced, including in the binary classification tasks, and thereby push the micro-F1 higher compared to the macro-F1 score. In the case of paragraph classification, for example, the number of positive examples is 492 compared with the total number of 1500 samples. As such, only models with a micro-F1 score above 0.66 and macro-F1 above 0.5 can be considered to have semantically meaningful understanding of the task. 
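To make the effect of class imbalance on the two metrics concrete, the short sketch below scores a trivial majority-class predictor on the paragraph classification split quoted above (492 positive out of 1,500 samples); scikit-learn is assumed for the metric computation.

```python
from sklearn.metrics import f1_score

# Paragraph classification: 492 positive vs. 1008 negative samples.
y_true = [1] * 492 + [0] * 1008
y_pred = [0] * 1500  # trivial predictor that always outputs the majority class

print(f1_score(y_true, y_pred, average="micro"))  # ~0.67, deceptively high
print(f1_score(y_true, y_pred, average="macro"))  # ~0.40, exposes the imbalance
```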
This is even more pronounced for sentence classification where only \(876/9466\approx 10\%\) corresponds to one label. All models except ScholarBERT outperform a default guess of the dominant class for cases. While imbalanced datasets may approximate some real-world use cases of materials science text analysis, such as extracting specialized materials information, a highly imbalanced can be misguiding in evaluating model performance. To alleviate the potentially negative effects of imbalanced data, we suggest three simple yet effective methods: 1) Weighted loss functions: This involves weighting the loss function to give higher weights to minority classes. Focal loss Lin et al. (2017), for example, is a loss function that dynamically modulates the loss based on the prediction confidence, with greater emphasis on more difficult examples. As such, Focal loss handles class imbalance well due to the additional attention given to hard examples of the minority classes. 2) Class-balanced samplers: Deep learning frameworks, such as Pytorch, have class-balanced batch samplers that can be used to oversample minority classes within each batch during training, which can help indirectly address class imbalance. 3) Model architecture tweaks: The model architecture and its hyper-parameters can be adjusted to place greater emphasis on minority classes. For example, one can apply separate prediction heads for minority classes or tweak L2 regularization and dropout to behave differently for minority and majority classes. How do in-context data schema and multitasking affect the learning efficiency in low-resource training settings? (Q2) To assess the efficacy of the proposed textual schemas shown in Figure 1, we evaluate four different QA-inspired schemas: 1) _No Explanations_ - here the model receives only the task description; 2) _Potential Choices_ - here the model receives the class labels given by the task; 3) _Examples_ - here the model receives an example of a correct answer, 4) _Task-Schema_ - here the model receives our pro \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline **NLP Model** & **Single Task** & **Single Task Prompt** & **MMOE** & **No Explanations** & **Potential Choices** & **Examples** & **Task-Schema** \\ \hline MatSciBERT & 0.501\({}_{\pm 0.007}\) & 0.485\({}_{\pm 0.003}\) & 0.457\({}_{\pm 0.002}\) & 0.651\({}_{\pm 0.006}\) & 0.670\({}_{\pm 0.006}\) & 0.688\({}_{\pm 0.004}\) & 0.671\({}_{\pm 0.000}\) \\ Gupta et al. (2022) & 0.320\({}_{\pm 0.078}\) & 0.238\({}_{\pm 0.012}\) & 0.228\({}_{\pm 0.008}\) & 0.438\({}_{\pm 0.052}\) & 0.435\({}_{\pm 0.001}\) & 0.463\({}_{\pm 0.000}\) & 0.456\({}_{\pm 0.012}\) \\ \hline MatBERT & 0.537\({}_{\pm 0.036}\) & 0.523\({}_{\pm 0.071}\) & 0.557\({}_{\pm 0.010}\) & 0.721\({}_{\pm 0.003}\) & 0.699\({}_{\pm 0.003}\) & 0.705\({}_{\pm 0.025}\) & 0.722\({}_{\pm 0.023}\) \\ Walker et al. 
(2021) & 0.330\({}_{\pm 0.003}\) & 0.267\({}_{\pm 0.04}\) & 0.301\({}_{\pm 0.006}\) & 0.514\({}_{\pm 0.006}\) & 0.478\({}_{\pm 0.032}\) & 0.470\({}_{\pm 0.029}\) & 0.517\({}_{\pm 0.011}\) \\ \hline BatteryBERT & 0.469\({}_{\pm 0.000}\) & 0.488\({}_{\pm 0.011}\) & 0.431\({}_{\pm 0.044}\) & 0.660\({}_{\pm 0.013}\) & 0.622\({}_{\pm 0.009}\) & 0.660\({}_{\pm 0.033}\) & 0.663\({}_{\pm 0.038}\) \\ Huang and Cole (2022) & 0.288\({}_{\pm 0.055}\) & 0.241\({}_{\pm 0.009}\) & 0.200\({}_{\pm 0.002}\) & 0.450\({}_{\pm 0.003}\) & 0.423\({}_{\pm 0.039}\) & 0.416\({}_{\pm 0.054}\) & 0.456\({}_{\pm 0.018}\) \\ \hline SciBERT & 0.500\({}_{\pm 0.055}\) & 0.502\({}_{\pm 0.003}\) & 0.504\({}_{\pm 0.052}\) & 0.680\({}_{\pm 0.006}\) & 0.660\({}_{\pm 0.042}\) & 0.686\({}_{\pm 0.009}\) & 0.685\({}_{\pm 0.056}\) \\ Belarsy et al. (2019) & 0.300\({}_{\pm 0.009}\) & 0.248\({}_{\pm 0.015}\) & 0.275\({}_{\pm 0.031}\) & 0.458\({}_{\pm 0.000}\) & 0.435\({}_{\pm 0.001}\) & 0.460\({}_{\pm 0.002}\) & 0.460\({}_{\pm 0.014}\) \\ \hline ScholarBERT & 0.472\({}_{\pm 0.137}\) & 0.429\({}_{\pm 0.28}\) & 0.367\({}_{\pm 0.079}\) & 0.464\({}_{\pm 0.016}\) & 0.513\({}_{\pm 0.011}\) & 0.467\({}_{\pm 0.019}\) & 0.468\({}_{\pm 0.028}\) \\ Hong et al. (2022) & 0.234\({}_{\pm 0.094}\) & 0.250\({}_{\pm 0.122}\) & 0.165\({}_{\pm 0.044}\) & 0.271\({}_{\pm 0.022}\) & 0.295\({}_{\pm 0.005}\) & 0.260\({}_{\pm 0.018}\) & 0.276\({}_{\pm 0.024}\) \\ \hline BioBERT & 0.487\({}_{\pm 0.059}\) & 0.488\({}_{\pm 0.032}\) & 0.360\({}_{\pm 0.047}\) & 0.663\({}_{\pm 0.044}\) & 0.587\({}_{\pm 0.022}\) & 0.632\({}_{\pm 0.040}\) & 0.670\({}_{\pm 0.001}\) \\ Wada et al. (2020) & 0.281\({}_{\pm 0.026}\) & 0.238\({}_{\pm 0.017}\) & 0.151\({}_{\pm 0.002}\) & 0.442\({}_{\pm 0.075}\) & 0.365\({}_{\pm 0.018}\) & 0.404\({}_{\pm 0.046}\) & 0.442\({}_{\pm 0.027}\) \\ \hline BERT & 0.498\({}_{\pm 0.051}\) & 0.488\({}_{\pm 0.043}\) & 0.394\({}_{\pm 0.009}\) & 0.670\({}_{\pm 0.020}\) & 0.601\({}_{\pm 0.066}\) & 0.636\({}_{\pm 0.002}\) & 0.658\({}_{\pm 0.030}\) \\ Devlin et al. (2018) & 0.266\({}_{\pm 0.044}\) & 0.239\({}_{\pm 0.011}\) & 0.166\({}_{\pm 0.008}\) & 0.440\({}_{\pm 0.052}\) & 0.382\({}_{\pm 0.003}\) & 0.394\({}_{\pm 0.051}\) & 0.439\({}_{\pm 0.021}\) \\ \hline Overall & 0.493\({}_{\pm 0.064}\) & 0.486\({}_{\pm 0.002}\) & 0.439\({}_{\pm 0.03}\) & 0.644\({}_{\pm 0.03}\) & 0.622\({}_{\pm 0.035}\) & 0.639\({}_{\pm 0.044}\) & 0.688\({}_{\pm 0.066}\) \\ All Models & 0.288\({}_{\pm 0.053}\) & 0.246\({}_{\pm 0.032}\) & 0.212\({}_{\pm 0.022}\) & 0.430\({}_{\pm 0.089}\) & 0.402\({}_{\pm 0.09 posed textual schema. We compare the schemas to three conventional fine-tuning methods: 1) _Single Task_ - the traditional method to solve each task separately using the language model and a classification head; 2) _Single Task Prompt_ - here we change the format of the task to the same QA-format as "No Explanations", but train each task separately; 3) _MMOE_ by Ma et al. (2018) uses multiple encoders to learn multiple hidden embeddings, which are then weighed by a task-specific gate unit and aggregated to the final hidden embedding using a weighted sum for each task. Next, a task-specific classification head outputs the label probability distribution for each task. Based on the results shown in Table 3, we gather the following insights: _First, Text-to-Schema methods perform better for all language models._ Overall, the Task-Schema method we proposed performs best across all tasks in the MatSci-NLP benchmark. 
The question-answering inspired schema ("No Explanations", "Potential Choices", "Examples", "Task-Schema ") perform better than fine-tuning in a traditional single task setting, single task prompting, as well as fine-tuning using the MMOE multitask method. This holds across all models for all the tasks in MatSci-NLP showing the efficacy of structured language modeling inspired by question-answering. _Second, schema design affects model performance._ The results show that both the pretrained model and the input format affect performance. This can be seen by the fact that while all scientific models outperform general language BERT using the Task-Schema method, BERT outperforms some models, mainly ScholarBERT and BioBERT, in the other text-to-schema settings and the conventional training settings. Nevertheless, BERT underperforms the stronger models (MatBERT, SciBERT, MatSciBERT) across all schema settings for all tasks in MatSci-NLP, further emphasizing the importance of domain-specific model pretraining for materials science language understanding. ## 6 Conclusion and Future Works We proposed MatSci-NLP, the first broad benchmark on materials science language understanding tasks constructed from publicly available data. We further proposed text-to-schema multitask modeling to improve the model performance in low-resource settings. Leveraging MatSci-NLP and text-to-schema modeling, we performed an in-depth analysis of the performance of various scientific language models and compare text-to-schema language modeling methods with other input schemas, guided by (Q1) addressing the pretrained models and (Q2) addressing the textual schema. Overall, we found that the choice of pretrained models matters significantly for downstream performance on MatSci-NLP tasks and that pretrained language models on scientific text of any kind often perform better than pretrained language models on general text. MatBERT generally performed best, highlighting the benefits of pretraining with high-quality domain-specific language data. With regards to the textual schema outlined in (Q2), we found that significant improvements can be made by improving textual schema showcasing the potential of fine-tuning using structured language modeling. The proposed encoder-decoder architecture, as well as the proposed multitask schema, could also be useful for additional domains in NLP, including both scientific and non-scientific domains. The potential for open-domain transferability of our method is due to: 1) Our multitask training method and associated schemas do not depend on any domain-specific knowledge, allowing them to be easily transferred to other domains. 2) The encoder of our proposed model architecture can be exchanged in a modular manner, which enables our model structure to be applied across multiple domains. 3) If the fine-tuning data is diverse across a wide range of domains, our method is likely to learn general language representations for open-domain multitask problems. Future work could build upon this paper by applying the model and proposed schema to different scientific domains where fine-tuning data might be sparse, such as biology, physics and chemistry. Moreover, future work can build upon the proposed schema by suggesting novel ways of modeling domain-specific or general language that lead to improvements in unified multi-task learning. 
## Limitations One of the primary limitations of NLP modeling in materials science, including this work, is the low quantity of available data as discussed in Section 2. This analysis is affected by this limitation as well given that our evaluations were performed in a low-data setting within a dataset that was already limited in size. We believe that future work can improve upon this study by applying larger datasets, both in the number of samples and in the scope of tasks, to similar problem settings. The small nature of the datasets applied in this study also presents the danger that some of the models may have memorized certain answers instead of achieving a broader understanding, which could be mitigated by enlarging the datasets and making the tasks more complex. Moreover, we did not study the generalization of NLP models beyond the materials science domain, including adjacent domains such as chemistry and physics. This targeted focus was intentional but imposes limitations on whether the proposed techniques and insights we gained from our analysis are transferable to other domains, including applying NLP models for scientific tasks outside of materials science. Another limitation of our study is the fact that we focused on BERT-based models exclusively and did not study autoregressive models, including large language models with billions of parameters highlighted in the introduction. The primary reason for focusing on BERT-based models was the diversity of available models trained on different scientific text corpora. Large autoregressive models, on the other hand, are mostly trained on general text corpora with some notable exceptions, such as Galactica Taylor et al. (2022). We believe that future work analyzing a greater diversity of language models, including large autoregressive models pre-trained on different kinds of text, would significantly strengthen the understanding surrounding the ability of NLP models to perform text-based tasks in materials science. While the results presented in this study indicate that domain-specific pretraining can lead to noticeable advantages in downstream performance on text-based materials science tasks, we would like to highlight the associated risks and costs of pretraining a larger set of customized language models for different domains. The heavy financial and environmental costs associated with these pretraining procedures merit careful consideration of what conditions may warrant expensive pretraining and which ones may not. When possible, we encourage future researchers to build upon existing large models to mitigate the pretraining costs. ## Broader Impacts and Ethics Statement Our MatSci-NLP benchmark can help promote the research on NLP for material science, an important and growing research field. We expect that the experience we gained from the material science domain can be transferred to other domains, such as biology, health, and chemistry. Our Text-to-Schema also helps with improving NLP tasks' performance in low-resource situations, which is a common challenge in many fields. Our research does not raise major ethical concerns. ## Acknowlegments This work is supported by the Mila internal funding - Program P2-V1: Industry Sponsored Academic Labs (project number: 10379), the Canada CIFAR AI Chair Program, and the Canada NSERC Discovery Grant (RGPIN-2021-03115).
2310.03187
Synthesis of Data-Driven Nonlinear State Observers using Lipschitz-Bounded Neural Networks
This paper focuses on the model-free synthesis of state observers for nonlinear autonomous systems without knowing the governing equations. Specifically, the Kazantzis-Kravaris/Luenberger (KKL) observer structure is leveraged, where the outputs are fed into a linear time-invariant (LTI) system to obtain the observer states, which can be viewed as the states nonlinearly transformed by an immersion mapping, and a neural network is used to approximate the inverse of the nonlinear immersion and estimate the states. In view of the possible existence of noises in output measurements, this work proposes to impose an upper bound on the Lipschitz constant of the neural network for robust and safe observation. A relation that bounds the generalization loss of state observation according to the Lipschitz constant, as well as the $H_2$-norm of the LTI part in the KKL observer, is established, thus reducing the model-free observer synthesis problem to that of Lipschitz-bounded neural network training, for which a direct parameterization technique is used. The proposed approach is demonstrated on a chaotic Lorenz system.
Wentao Tang
2023-10-04T22:19:53Z
http://arxiv.org/abs/2310.03187v1
# Synthesis of Data-Driven Nonlinear State Observers using Lipschitz-Bounded Neural Networks ###### Abstract This paper focuses on the _model-free_ synthesis of state observers for nonlinear autonomous systems without knowing the governing equations. Specifically, the Kazantzis-Kravaris/Luenberger (KKL) observer structure is leveraged, where the outputs are fed into a linear time-invariant (LTI) system to obtain the observer states, which can be viewed as the states nonlinearly transformed by an immersion mapping, and a neural network is used to approximate the inverse of the nonlinear immersion and estimate the states. In view of the possible existence of noises in output measurements, this work proposes to impose an upper bound on the Lipschitz constant of the neural network for robust and safe observation. A relation that bounds the generalization loss of state observation according to the Lipschitz constant, as well as the \(H_{2}\)-norm of the LTI part in the KKL observer, is established, thus reducing the model-free observer synthesis problem to that of Lipschitz-bounded neural network training, for which a direct parameterization technique is used. The proposed approach is demonstrated on a chaotic Lorenz system. ## I Introduction For nonlinear systems that arise from realistic engineering applications such as transport-reaction processes, modern control theory relies on _state-space representations_ for their modeling, analysis, and control [1, 2, 3]. Recent advances in nonlinear control have highlighted the role of data-driven (machine learning) techniques in identifying governing equations or underlying dynamical structures [4, 5, 6], analyzing system and control-theoretic properties [7, 8], and synthesizing model-free controllers [9, 10, 11]. In these efforts, it is often assumed that the _state_ information is available for analysis or control; for example, in reinforcement learning (RL) literature, it is common to apply stochastic first-order optimization to learn a value (cost) function or \(Q\) function based on temporal actions and state measurements. In many (if not most) control engineering applications, such as in chemical processes, however, it is more likely that the states are not measurable. Hence, for nonlinear control in a state-space framework, a _state observer_ is necessary, whereby the states are estimated based on input and output history [12]. A recent review on model-based approaches to synthesize state observers is found in Bernard, Andrieu, and Astolfi [13]. A classical form of state observer for linear systems is known as Luenberger observer [14], which an auxiliary linear time-invariant (LTI) system that uses the plant outputs as inputs and returns state estimates. The observer states are in fact a linear transform of the plant states [15]. The idea was extended to nonlinear systems in the seminal work of Kazantzis and Kravaris [16]. In their Kazantzis-Kravaris/Luenberger (KKL) observer (as named in Andrieu and Praly [17]) still uses an LTI system to convert plant outputs to observer states, which turn out to be the plant states transformed via a nonlinear immersion. Thus, the observer synthesis problem reduces to the determination of this nonlinear immersion and its inverse, via solving (model-based) partial differential equations (PDEs). Such a KKL observer was extended from autonomous to actuated systems in [18], where the LTI part is replaced by an input-affine one with an additional nonlinear drift term associated with the actuated inputs. 
This paper focuses on the _synthesis of KKL observer_ in a _model-free_ manner, without assuming prior knowledge on the plant dynamics. This is motivated by two reasons: (i) many nonlinear systems that involve complex kinetic or kinematic mechanisms are often hard to model accurately, and (ii) it can be challenging to solve the associated PDEs, especially in high-dimensional state space (in fact, there may not be well-posed boundary conditions). In the recent years, there have been several works that pioneered the use of neural networks in the observer problem. For example, Ramos et al. [19] first trained neural networks to approximate the inverse immersion map to reconstruct the actual states from observer states. Then, the optimization of pole placement was considered along with the training of inverse immersion in [20]. Niazi et al. [21] used physics-informed neural networks (PINNs) to approach a surrogate solution to solve the PDEs. Miao and Gatsis [22] formulated a dynamic optimization problem to minimize the accumulated squared state observation error, whereby the optimality condition, through calculus of variations results in neural ODEs. It is commonly known that neural networks, when over-parameterized with large widths and depths, may cause a deteriorated capability of generalization. It has also been argued that neural networks can be fragile to adversarial attacks to the training data and thus must be equipped with a self-defense mechanisms that warranty robustness [23, 24]. In particular, controlling the Lipschitz constant of the mapping specified by the neural network has been studied as a promising approach [25, 26, 27]. However, in these works, estimating and minimizing the Lipschitz constant requires the use of semidefinite programming routines, which has a high complexity when the number of neurons is large. An alternative way, called _direct paramterizaton_, as recently proposed in Wang and Manchester [28], is to translate the Lipschitz bound constraint into a special architecture of the neural layers, thus allowing the use of typical back-propagation (BP) to train the network in an unconstrained way. Hence, in this work, the Wang-Manchester direct parameterization is adopted to train Lipschitz-bounded neural networks in a KKL state observer for any unknown nonlinear autonomous system. The paper establishes a relation between the generalized observation error and the Lipschitz bound of the neural network as well as the \(H_{2}\)-norm of the LTI observer dynamics, under a typical white noise assumption on the plant outputs. Hence, by varying the Lipschitz bound, the optimal observer can be synthesized. ## II Preliminaries We consider a nonlinear autonomous system: \[\dot{x}(t)=f(x(t)),\quad y(t)=h(x(t)) \tag{1}\] where \(x(t)\in\mathcal{X}\subseteq\mathbb{R}^{n}\) is the vector of states and \(y(t)\in\mathbb{R}^{m}\) represents the outputs. For simplicity, we will consider \(m=1\). It is assumed that \(f\) and \(h\) are smooth on \(\mathcal{X}\) to guarantee existence and uniqueness of solution but unknown for model-based synthesis. ### _KKL Observer_ For nonlinear systems, KKL observer generalizes the notion of Luenberger observers that were restricted to linear systems [14], providing a generic method for state observation with mild assumptions to guarantee existence. Specifically, the KKL observer for (1) is expressed as \[\dot{z}(t)=Az(t)+By(t),\quad\hat{x}(t)=T^{\dagger}(z(t)). \tag{2}\] Here the observer states \(z\in\mathbb{R}^{n_{z}}\) has an LTI dynamics. 
The matrices \(A\) and \(B\) are chosen under the requirements of (i) controllability of \((A,B)\) should be controllable, (ii) Hurwitz property of \(A\), and (iii) sufficiently high dimension of \(z\) (\(n_{z}\)), which should be at least \(n+1\) if \((A,B)\) is complex [17] and at least \(2n+1\) if \((A,B)\) is real [29]. The mapping from the observer states \(z\) to the state estimates \(\hat{x}\) is a static one, \(T^{\dagger}\), which is the left-pseudoinverse of a nonlinear immersion \(T\) (i.e., a differentiable injection satisfying \(T^{\dagger}\circ T=\mathsf{id}\)). This immersion \(T\) should satisfy the following PDE: \[\frac{\partial T}{\partial x}(x)f(x)=AT(x)+Bh(x),\quad\forall x\in\mathcal{X}, \tag{3}\] where \(\partial T/\partial x\) denotes the Jacobian matrix of \(T\). It can be easily verified that under the above PDE, \(dT(x)/dt=AT(x)+By\), and thus \(z-T(x)\) has an exponentially decaying dynamics, as \(A\) is Hurwitz. The conditions for the existence of a KKL observer, namely the solution to its defining PDE (3), have been established based on the condition of backward distinguishability. In below, we denote the solution to the ODEs \(\dot{x}=f(x)\) at time \(t\) with initial condition \(x(0)=\xi\) as \(\Phi_{t}(\xi)\). For any open set \(\mathcal{O}\) in \(\mathcal{X}\), denote the backward time instant after which the solution does not escape this region by \(\varsigma_{\mathcal{O}}(\xi)=\inf\{t|\Phi_{t}(\xi)\in\mathcal{O}\}\). Also denote \(\mathcal{O}+\epsilon:=\{\xi+\eta|\xi\in\mathcal{O},\|\eta\|<\epsilon\}\). **Definition 1** (Backward distinguishability).: _The system (1) is \((\mathcal{O},\epsilon)\)-backward distinguishable if for any distinct \(\xi,\xi^{\prime}\in\mathcal{X}\) there exists a negative \(t>\varsigma_{\mathcal{O}+\epsilon}(\xi)\wedge\varsigma_{\mathcal{O}+\epsilon} (\xi^{\prime})\) such that \(h(\Phi_{t}(\xi))\neq h(\Phi_{t}(\xi^{\prime}))\)._ **Fact 1** (Existence of KKL observer, cf. Brivadis et al. [29]).: _Assume that there is an open \(\mathcal{O}\subseteq\bar{\mathcal{X}}\) and a positive constant \(\epsilon\) such that the system (1) is \((\mathcal{O},\epsilon)\)-backward distinguishable. Then there exists a constant \(\rho>0\) such that for all but a Lebesgue-zero-measure set of \((A,B)\in\mathbb{R}^{(2n+1)\times(2n+1)}\times\mathbb{R}^{(2n+1)}\), if \(A+\rho I\) Hurwitz, then there exists an immersion \(T:\mathcal{O}\rightarrow\mathbb{R}^{(2n+1)}\) solving the PDEs (3)._ The above theorem clarifies that as long as the spectrum of \(A\) is restricted to the left of \(-\rho+i\mathbb{R}\), the LTI dynamics in the KKL observer can be almost arbitrarily assigned. Once \((A,B)\) are chosen, the remaining question for synthesis a KKL observer (2) is to numerically determine the solution. In view of the computational challenge in directly solving the PDEs (3) and the recent trend of handling the problem by neural approaches [19, 20, 21], this work will seek to approximate \(T^{\dagger}\) by a neural network. Yet, instead of using a vanilla multi-layer perceptron architecture, a Lipschitz-bounded neural network will be adopted, which safeguards the generalization performance of state observation, which will be discussed in SSIII. This overall idea is illustrated in Fig. 1. ### _Lipschitz-Bounded Neural Networks_ Consider a \(\nu\)-layer neural network \(\hat{x}=S(z,\theta)\) with all parameters denoted as a single vector \(\theta\). 
Without loss of generality, assume that the activation function (element-wise applied to vectors) is \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), with slope bounded in \([0,1]\) (in this work, rectified linear units (ReLU) are used to prevent gradient decay in BP training). The neural network can then be expressed as \[\begin{split}& z^{\ell+1}=\sigma(W^{\ell}z^{\ell}+b^{\ell}),\ \ \ell=0,\ldots,\nu-1\\ & z^{0}=z,\quad\hat{x}=W^{\nu}z^{\nu}+b^{\nu},\end{split} \tag{4}\] where \(W^{0},\ldots,W^{\nu}\) are the weight matrices and \(b^{0},\ldots,b^{\nu}\) are the biases. In total there are \(\nu\) activation layers inserted between \(\nu+1\) fully connected layers. Here \(z\) represents the inputs to the neural network and \(\hat{x}\) is the output vector, as we will use such a neural network to approximate the \(T^{\dagger}\) mapping in the KKL observer. Fig. 1: KKL observer with a Lipschitz-bounded neural network to be trained. Given a neural network with fixed parameters \(\theta=(W^{0},b^{0},\ldots,W^{\nu},b^{\nu})\), a rough estimate of the Lipschitz constant of \(S\) can be obtained as \[L_{S}(\theta)=\prod_{\ell=0}^{\nu}\|W^{\ell}\|_{2}, \tag{5}\] where \(\|\cdot\|_{2}\) for a matrix refers to its operator norm induced by the \(\ell_{2}\)-norm of vectors, i.e., its largest singular value. To reduce the conservativeness, Fazlyab et al. [25] leverage the control-theoretic tool of integral quadratic constraints to formulate the Lipschitz bound condition as a linear matrix inequality, thus estimating the Lipschitz constants and training Lipschitz-bounded neural networks through solving semidefinite programming problems [27]. The pertinent matrix size, however, scales proportionally with the total number of neurons, which results in high computational complexity unless the neural network is very small. The recent work of Wang and Manchester [28] proposed a _direct parameterization_ approach that accommodates a Lipschitz bound through a special design of the neural network architecture instead of imposing extra parameter constraints. By this approach, the training of neural networks is an unconstrained optimization problem and is thus amenable to the typical, computationally lightweight back-propagation (BP) routine. Wang-Manchester direct parameterization is conceptually related to, and arguably motivated by, the theory of controller parameterization [30, 31]. **Definition 2** (\(1\)-Lipschitz sandwich layer, cf. [28]).: _Given parameters \(X\in\mathbb{R}^{d\times d}\), \(Y\in\mathbb{R}^{c\times d}\), \(s\in\mathbb{R}^{d}\), and \(b\in\mathbb{R}^{d}\), a \(1\)-Lipschitz sandwich layer is defined as such a mapping \(\Xi:\mathbb{R}^{c}\rightarrow\mathbb{R}^{d}\) that maps any \(h\in\mathbb{R}^{c}\) into a \(\Xi(h;X,Y,s,b)\in\mathbb{R}^{d}\) according to the following formulas:_ \[\begin{split} Z&=X-X^{\top}+Y^{\top}Y,\quad\Psi_{s}=\mathrm{diag}(e^{s}),\\ M_{X,Y}&=\left[(I+Z)^{-1}(I-Z)\right]^{\top},\\ N_{X,Y}&=\left[-2Y(I+Z)^{-1}\right]^{\top},\\ \Xi(h)&=\sqrt{2}M_{X,Y}^{\top}\Psi_{s}\sigma(\sqrt{2}\Psi_{s}^{-1}N_{X,Y}h+b).\end{split} \tag{6}\] It turns out that the Lipschitz constant of the above-defined sandwich layer is guaranteed to be upper bounded by \(1\) [28, Theorem 3.3]. The mapping from the input \(h\) to the output \(\Xi(h)\) can be regarded as comprising an activation layer in the midst of two fully connected layers with related parameters. The operation from \((X,Y)\) to \((M,N)\) is known as the _Cayley transform_. The structure and the parameters of a sandwich layer are shown in Fig. 2. 
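As a concrete illustration, a minimal NumPy sketch of a single sandwich layer from Definition 2 is given below (ReLU activation assumed); it is meant only to make the Cayley-transform construction explicit and is not the training code used in this work.

```python
import numpy as np

def sandwich_layer(h, X, Y, s, b):
    """1-Lipschitz sandwich layer of Eq. (6): maps h in R^c to Xi(h) in R^d."""
    d = X.shape[0]
    Z = X - X.T + Y.T @ Y                    # skew-symmetric part plus Gram term
    Psi, Psi_inv = np.diag(np.exp(s)), np.diag(np.exp(-s))
    inv = np.linalg.inv(np.eye(d) + Z)       # (I + Z)^{-1}
    M = (inv @ (np.eye(d) - Z)).T            # Cayley transform
    N = (-2.0 * Y @ inv).T
    relu = lambda u: np.maximum(u, 0.0)      # slope bounded in [0, 1]
    return np.sqrt(2.0) * M.T @ (Psi @ relu(np.sqrt(2.0) * Psi_inv @ (N @ h) + b))
```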
Thus, by stacking a number of such sandwich layers after a scaling by \(\sqrt{\gamma}\) and before a non-activated half-sandwich layer (meaning a layer containing only the terms in the parentheses of \(\Xi\) as in Equation (6)), a neural network with Lipschitz bound \(\gamma\) can be obtained, for any provided \(\gamma>0\). **Definition 3** (Wang-Manchester network).: _In this work, we refer to Wang-Manchester network, \(S(\cdot|\theta)\), by a neural network in the following architecture:_ \[h^{0} =\sqrt{\gamma}z; \tag{7}\] \[h^{\ell+1} =\Xi(h^{\ell};X^{\ell},Y^{\ell},s^{\ell},b^{\ell}),\ \ell=0,1,\ldots,\nu-1;\] \[\hat{x} =\sqrt{\gamma}N_{X^{\nu},Y^{\nu}}h^{\nu}+b^{\nu}.\] _Here the parameters include_ \[\theta=\{X^{\ell},Y^{\ell},s^{\ell},b^{\ell}\}_{\ell=0}^{\nu-1}\cup\{X^{\nu}, Y^{\nu},b^{\nu}\}\] _which can be trained in an unconstrained way using back-propagation. The inputs and outputs of the network are \(z\) and \(\hat{x}\), respectively._ The above-defined Wang-Manchester network satisfies \(\|S(\cdot|\theta)\|_{\mathrm{Lip}}\leq\gamma\). In this work, the network is defined and trained with data using PyTorch (version 2.0.1) on Google Colaboratory, which allows the auto-differentiation of a user-defined loss function with respect to the neural network parameters for the parameters to be iteratively updated. ## III Analysis on the Generalized Loss Here we shall provide a justification for requiring a Lipschitz bound on the neural network. We will make the following standing assumptions on the training data collection procedure for subsequent analysis. **Assumption 1** (Ergodicity).: _Assume that a sample trajectory is collected from the system, whose initial state is sampled from a probability distribution \(\mathcal{F}\) on \(\mathcal{X}\). The distribution \(\mathcal{F}\) is time-invariant (i.e., an eigenmeasure of the Perron-Frobenius operator), so that any point of the trajectory comes from \(\mathcal{F}\)._ Suppose that The LTI dynamics of the KKL observer, \((A,B)\), is fixed. Then the observer states can be simulated from this linear dynamics. **Assumption 2** (Noisy measurements).: _Assume that the input signal for this LTI system is not noise-free measurements \(y=h(x)\), but instead containing a white noise of unknown covariance \(\sigma^{2}\). In other words, the simulation from \(y\) to \(z\) is_ \[\begin{split}&\dot{z}=Ax+By+w,\quad\mathbb{E}[w(t)]=0,\,\forall t \in\mathbb{R}\\ &\mathbb{E}[w(t)w(s)]=\delta(t-s)\sigma^{2},\,\forall t,s\in \mathbb{R}.\end{split} \tag{8}\] In this way, the collected sample, denoted as \(\{(x(t_{i}),z(t_{i}))\}_{i=1}^{m}=\{(x_{i},z_{i})\}_{i=1}^{m}\), in fact satisfies the following relation: \[z_{i}=\bar{z}_{i}+v_{i},\quad\delta_{i}=\int_{-\infty}^{t_{i}}g(\tau)w(t_{i}- \tau)d\tau. \tag{9}\] Here \(g(\tau)\) is the impulse response of LTI system \((A,B)\); \(\bar{z}\) is the value of \(z(t_{i})\) that would be otherwise obtained if there were no white noises in the output measurements. **Assumption 3** (Sufficient decay).: _After a significantly long time \(t_{\epsilon}\), \(\|z-T(x)\|\leq\epsilon\) for a small enough \(z\). Here \(T\) is the nonlinear immersion map specified by (3)._ Fig. 2: A sandwich layer and its parameters. Then, \(\|\bar{z}_{i}-T(x_{i})\|\leq\epsilon\). Thus, we may write \[z_{i}=T(x_{i})+v_{i}+v_{i}^{\prime},\quad\|v_{i}^{\prime}\|\leq\epsilon. 
\tag{10}\] Now we suppose that the sample \(\{(x_{i},z_{i})\}_{i=1}^{m}\) is used to train a neural network \(S(\cdot|\theta)\), which gives the state observations \(\hat{x}_{i}=S(z_{i}|\theta)\), and that the resulting empirical loss, if defined as the average squared observation error, is \[\hat{R}_{S}(\theta):=\frac{1}{m}\sum_{i=1}^{m}\|\hat{x}_{i}-x_{i}\|^{2}. \tag{11}\] Then we get \[\hat{R}_{S}(\theta)=\frac{1}{m}\sum_{i=1}^{m}\left\|S\left(T(x_{i})+v_{i}+v_{i }^{\prime}|\theta\right)-x_{i}\right\|^{2}. \tag{12}\] **Assumption 4**.: _Assume that the probability distribution \(\mathcal{F}\) is supported by a compact set, i.e., if \(x\sim\mathcal{F}\), then \(x\) should be almost surely bounded._ It follows that both \(S(\cdot|\theta)\) and \(T\) should be Lipschitz continuous. Denote their Lipschitz constants as \(L_{S}(\theta)\) and \(L_{T}\), respectively. We have \[\|S\left(T(x_{i})+\delta_{i}+\delta_{i}^{\prime}|\theta\right)-S\left(T(x_{i}) |\theta\right)\|\leq L_{S}(\theta)L_{T}(\|v_{i}\|+\epsilon). \tag{13}\] Denote \(D\) as the essential upper bound of \(\|x\|\) on the distribution \(\mathcal{F}\). As such, without loss of generality, let \(S(T(0))=0\). Then \(\|x-S(T(x))\|\leq(L_{S}(\theta)L_{T}+1)D\) almost surely. Combining the above two equations, we further get \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq \hat{R}_{S}(\theta) \tag{14}\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}L_{S}(\theta)L_{T}(L_{S}(\theta)L _{T}+1)D(\|v_{i}\|+\epsilon)\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}L_{S}^{2}(\theta)L_{T}^{2}(\|v_{i} \|+\epsilon)^{2}.\] That is, \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq \hat{R}_{S}(\theta) \tag{15}\] \[\quad+\frac{1}{m}\sum_{i=1}^{m}(L_{S}(\theta)L_{T}+1)^{2}\left(D+ \|v_{i}\|+\epsilon\right)(\|v_{i}\|+\epsilon).\] The left-hand side gives an estimation of the empirical loss when observing the states from perfect output measurements (namely when \(z_{i}=T(x_{i})\), \(i=1,\ldots,m\)). Further expanding the last term and applying Cauchy-Schwarz inequality, we have \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq\hat {R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\times \tag{16}\] \[\quad\left[\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}+(D+2\epsilon) \left(\sum_{i=1}^{m}\|v_{i}\|^{2}\right)^{1/2}+(D+\epsilon)\epsilon\right].\] Given that \(v_{i}\) is the response of LTI system \((A,B)\) to a white noise of covariance \(\sigma^{2}\), \(\mathbb{E}(\|v_{i}\|^{2})=h^{2}\sigma^{2}\) where \(h\) is the \(H_{2}\)-norm of the system \((A,B)\) where \(A\) is Hurwitz. Therefore, \[\mathbb{E}\left(\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}\right)=h^{2}\sigma^{2}. \tag{17}\] Let \(\alpha\) be a small positive number. With confidence \(1-\alpha/2\), a conservative estimation for its upper bound can be found according to Markov inequality: \[\frac{1}{m}\sum_{i=1}^{m}\|v_{i}\|^{2}\leq\frac{1}{1-\alpha/2}h^{2}\sigma^{2}. 
\tag{18}\] Therefore, \[\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}\leq\hat {R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\times \tag{19}\] \[\quad\left[\frac{h^{2}\sigma^{2}}{1-\alpha/2}+(D+2\epsilon)\frac{ h\sigma}{\sqrt{1-\alpha/2}}+(D+\epsilon)\epsilon\right].\] Finally, we note that for \(x\sim\mathcal{F}\), now that \(\|x-S(T(x)|\theta)\|\leq(L_{S}(\theta)L_{T}+1)D\) almost surely, by Hoeffding's inequality, for any \(\varepsilon>0\), \[\mathbb{P}\bigg{(}\bigg{|}\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{ i})|\theta)\|^{2}-\mathbb{E}\left(\|x-S(T(x))\|^{2}\right)\bigg{|} \tag{20}\] \[\geq(L_{S}(\theta)L_{T}+1)^{2}D^{2}\varepsilon\bigg{)}\leq 2\exp \left(-2m\varepsilon^{2}\right).\] Thus, with confidence \(1-\alpha/2\), we have \[\left|\frac{1}{m}\sum_{i=1}^{m}\|x_{i}-S(T(x_{i})|\theta)\|^{2}- \mathbb{E}\left(\|x-S(T(x)|\theta)\|^{2}\right)\bigg{|}\right. \tag{21}\] \[\quad<(L_{S}(\theta)L_{T}+1)^{2}D^{2}\sqrt{\frac{\ln(4/\alpha)}{2m }}.\] Combining (19) and (21), we have the following theorem. **Theorem 1**.: _Under the afore-mentioned assumptions, the generalization loss, defined as_ \[R_{S}(\theta)=\mathbb{E}\left(\|x-S(T(x)|\theta)\|^{2}\right), \tag{22}\] _is related to the empirical loss as defined in (11) by_ \[R_{S}(\theta)<\hat{R}_{S}(\theta)+(L_{S}(\theta)L_{T}+1)^{2}\Delta(h,\sigma, \alpha,\epsilon). \tag{23}\] _with confidence \(1-\alpha\) (\(\alpha\in(0,1)\)). Here_ \[\Delta(h,\sigma,\alpha,\epsilon)= D^{2}\sqrt{\frac{\ln(4/\alpha)}{2m}}+\frac{h^{2}\sigma^{2}}{1- \alpha/2} \tag{24}\] \[+(D+2\epsilon)\frac{h\sigma}{\sqrt{1-\alpha/2}}+(D+\epsilon)\epsilon.\] The theorem shows that the Lipschitz constant of the neural network trained plays an important role in the generalized performance of the resulting state observer. The effect of \(L_{S}(\theta)\) is mainly that of amplifying the first and third terms defined on the right-hand side of (24), supposing that \(\sigma\) and \(\epsilon\) are small enough. These two terms respectively arise from (i) the overall upper bound of the observation error \(\|x-S(T(x)|\theta)\|\), which acts as a coefficient before the Hoeffding term \(\sqrt{\ln(4/\alpha)/2m}\), and (ii) the effect of noisy measurements on the observer states. **Remark 1**.: _It is noted that the performance bound stated in the above theorem can be conservative. The conclusion that \(L_{S}(\theta)\) amplifies the generalization error and measurement noise should be considered as qualitative. The theorem also does not suggest a tractable algorithm to optimize the selection of \((A,B)\) along with the neural network \(S(\cdot|\theta)\), as the dependence of \(L_{T}\) on \((A,B)\) is highly implicit. Hence, this paper does not consider the problem of simultaneously training \((A,B)\) and the neural network._ ## IV Case Study Let us consider a Lorenz system in a 3-dimensional state space with chaotic behavior. The equation is written as: \[\dot{x}_{1} =10(x_{2}-x_{1}), \tag{25}\] \[\dot{x}_{2} =x_{1}(28-10x_{3})-x_{2},\] \[\dot{x}_{3} =10x_{1}x_{2}-(8/3)x_{3}.\] Suppose that the measurement used for state observation is \(y=x_{2}\), where a white noise exists. We assign different values to the variance of the measurement noise and investigate how the resulting neural network should be chosen differently. To simulate the process we will use a sampling time of \(0.01\). The LTI part of the KKL observer, \(A=-\mathrm{diag}(8,4,2,1)\) and \(B=[1,1,1,1]^{\top}\) are chosen. 
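As a worked illustration of Theorem 1, the snippet below evaluates \(\Delta(h,\sigma,\alpha,\epsilon)\) from (24) for the LTI pair used in the case study, \(A=-\mathrm{diag}(8,4,2,1)\), \(B=[1,1,1,1]^{\top}\). The \(H_{2}\)-norm \(h\) is obtained from the controllability Gramian; the values of \(D\), \(\sigma\), \(\alpha\) and \(\epsilon\) are placeholders chosen for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# LTI part of the KKL observer from the case study.
A = -np.diag([8.0, 4.0, 2.0, 1.0])
B = np.ones((4, 1))

# H2 norm of (A, B) with identity output: h^2 = trace(P),
# where P solves the Lyapunov equation A P + P A^T + B B^T = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
h = np.sqrt(np.trace(P))          # about 0.968 for this (A, B)


def delta_bound(h, sigma, alpha, eps, D, m):
    """Delta(h, sigma, alpha, eps) as defined in Equation (24)."""
    return (D**2 * np.sqrt(np.log(4.0 / alpha) / (2.0 * m))
            + h**2 * sigma**2 / (1.0 - alpha / 2.0)
            + (D + 2.0 * eps) * h * sigma / np.sqrt(1.0 - alpha / 2.0)
            + (D + eps) * eps)


# Illustrative (assumed) constants: state bound D, noise level sigma,
# confidence parameter alpha, decay tolerance eps; m = 2000 as in Section IV.
D, m, sigma, alpha, eps = 30.0, 2000, 1.0, 0.05, 1e-3
print("h =", h, " Delta =", delta_bound(h, sigma, alpha, eps, D, m))
```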
At the beginning of the observer simulation, \(z(0)=0\) is set as the initial condition; we simulate the dynamics until \(t=500\) and randomly collect \(m=2000\) time instants between \(t=20\) and \(t=500\) as the training data. Consider first the case with noiseless measurement (\(\sigma=0\)). The sample \(\{(x_{i},z_{i})\}_{i=1}^{2000}\) is plotted in Fig. 3, which shows that the data points are representative on the forward invariant set of the system, and that the observer states \(z_{i}\) indeed captures the structure of such a Lorenz attractor in a \(4\)-dimensional space. Hence, we train the Wang-Manchester network using a randomly selected \(80\%\) of the sample under the mean-squares loss metric, and validate using the remaining \(20\%\) sample points. Stochastic gradient descent (SGD) algorithm with a learning rate of \(10^{-3}\) is used for optimization. The number of epochs is empirically tuned to \(300\). The neural network has \(2\) hidden layers, each containing \(8\) neurons, resulting in \(292\) parameters to train in total. After training, the Lipschitz constant is evaluated a posteriori via the semidefinite programming approach of Fazlyab et al. [25] using cvxpy, which costs approximately \(1.5\) seconds (for a randomly initialized network). Varying the prior bound on the Lipschitz constant, the resulting training loss, validation loss, and the posterior Lipschitz bound obtained from the same training conditions are illustrated in Fig. 4. The following observations can be made from these results. * As anticipated, as the set bound on the Lipschitz bound increases, the Lipschitz constant of the trained neural network becomes higher. The Lipschitz constants estimated a posteriori are lower than the prior bound on the Wang-Manchester network, validating the direct parameterization approach on constraining the slope. On the other hand, the actually posterior Lipschitz constant has an increasingly large lag behind the prior bound; for example, when the prior bound is \(1000\), the \(L_{S}\) after training does not exceed \(300\). This indicates that even for the training objective alone, there is a "resistance" to pursue the maximally possible Lipschitz constant. * When the Lipschitz bound is small, relaxing the restriction on \(L_{S}\) is beneficial for decreasing the training loss as well as the validation loss, showing that the Lipschitz bound is a bottleneck causing underfitting. When \(L_{S}\) is high enough, such underfitting no longer exists; instead, overfitting will appear, with rising training and Fig. 3: Sample collected from the Lorenz system. validation losses. The overfitting phenomenon is more significant when the noise is large. Thus, there should be optimal values to be set as the Lipschitz bound. * Depending on the noise magnitude, the deviation of posterior Lipschitz constant from the prior bound and the emergence of overfitting phenomenon occur at different threshold values of the Lipschitz bound. Thus, the Lipschitz bound to be used for neural network training should be tuned differently as the noise intensity varies. For example, at \(\sigma=1\), a suitable choice can be \(\gamma=100\), whereas at \(\sigma=5\) and \(\sigma=10\), \(\gamma\) can be chosen as \(30\) and \(10\), respectively. 
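For reference, the data-collection step described above (simulate the Lorenz system (25) jointly with the observer filter \(\dot{z}=Az+By\), then subsample \(m=2000\) training pairs from \(t\in[20,500]\)) can be reproduced with a short script. The sketch below assumes a noiseless output \(y=x_{2}\) (\(\sigma=0\)); the Lorenz initial condition and the random seed are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = -np.diag([8.0, 4.0, 2.0, 1.0])
B = np.ones(4)


def augmented_dynamics(t, s):
    """Joint dynamics of the Lorenz state x (first 3 entries) and
    the KKL observer state z (last 4 entries), with output y = x2."""
    x, z = s[:3], s[3:]
    dx = np.array([10.0 * (x[1] - x[0]),
                   x[0] * (28.0 - 10.0 * x[2]) - x[1],
                   10.0 * x[0] * x[1] - (8.0 / 3.0) * x[2]])
    y = x[1]                        # measured output
    dz = A @ z + B * y              # observer filter  z_dot = A z + B y
    return np.concatenate([dx, dz])


dt, t_end = 0.01, 500.0
t_eval = np.arange(0.0, t_end, dt)
s0 = np.concatenate([[1.0, 1.0, 1.0], np.zeros(4)])   # z(0) = 0; x(0) assumed
sol = solve_ivp(augmented_dynamics, (0.0, t_end), s0, t_eval=t_eval, rtol=1e-8)

# Keep only t >= 20 (past the transient of Assumption 3) and draw m = 2000 pairs.
rng = np.random.default_rng(0)
idx = rng.choice(np.flatnonzero(sol.t >= 20.0), size=2000, replace=False)
x_train = sol.y[:3, idx].T          # states x_i (regression targets)
z_train = sol.y[3:, idx].T          # observer states z_i (network inputs)
```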
Now suppose that at the observer design stage, the Wang-Manchester network is trained by the simulated data from a perfect digital twin of the true dynamics, i.e., \(\sigma=0\); yet, when applying the network trained to observe the states of the physical system, the environment is noisy. In Fig. 5, the resulting loss (mean squared state observation error) is plotted against varying prior Lipschitz bounds under multiple values of the environment noise magnitude. It is seen that when the noise is low, roughly speaking, increasing \(L_{S}\) leads to monotonic decrease in the observation error within a large range. On the other hand, when the environment is highly noisy (e.g., when \(\sigma\geq 3\)), the Lipschitz bound has a severe effect on the generalization loss, and since the achievable performance is restrictive, the fine-tuning of Lipschitz bound as a hyperparameter becomes critical. Finally, the performance of the state observer is examined. Consider using the network trained with noiseless simulation data under the prior Lipschitz bound \(L_{S}=10\), and applying it to environments with noise \(\sigma=0.1\), \(0.3\), \(1.0\), \(3.0\). The trajectories of the three components of estimated states by the observer are plotted against the true states in Fig. 6, within a time horizon of \(10\) time units. Naturally, when \(\sigma\) is low, the state estimates can well track the true states and capture the trends in the correct directions; as \(\sigma\) increases, the accuracy is lowered and the signals constructed by the observer are more noisy, occasionally yielding incorrect directions of evolution (e.g., on \(3<t<4\) or \(8<t<9\), where the states swing between the two foils of the Lorenz attractor). Overall, the state estimates mollifies the true state trajectories, which is due to the structure of our KKL observer - a linear filter (LTI system) as the state dynamics and a Lipschitz-bounded neural network as the static output map. ## V Conclusions and Discussions This work leverages the recent tools of Lipschitz-bounded neural networks for the synthesis of nonlinear state observers in a model-free setting. The observer, which has a Kazantzis-Kravaris structure, turns out to have a provable generalization performance that is related to the Lipschitz constant of the trained neural network (which represents the mapping from the observer states to the plant states). As such, by varying the Lipschitz bound and re-training the neural network, the optimal training result can yield the minimum generalized state observation error. The importance of bounding the Lipschitz constant has been demonstrated by a numerical case study on the Lorenz system. Fig. 4: Loss and Lipschitz constants under different prior Lipschitz bounds. (Blue wedges: training loss, blue circles: validation loss, green circles: prior Lipschitz bound; green wedges: posterior Lipschitz bound.) Fig. 5: Errors of noiselessly trained observers in noisy environments. We implicitly assumed here that a simulator of the dynamics is available, so that the true states' trajectories can be used to train the neural network. However, such ground truth for supervised learning may not actually exist in real applications, i.e., only inputs and outputs are recorded, yet a state observation mechanism is still needed or desired for feedback control. 
To this end, the author's recent work [32] proposed a data-driven KKL observer by appending a kernel dimensionality reduction scheme to the LTI dynamics, thus obtaining estimates that are diffeomorphic to the states. Also, the current approach is yet restricted to autonomous systems. For control purposes, it should be further extended to non-autonomous ones, where the Bernard-Andrieu observer structure [18] is anticipated. Also, the application of such data-driven state observers to learning control-relevant properties of nonlinear dynamical systems and controller synthesis [33, 34] is undergoing active research.
2305.07157
Exploring Zero and Few-shot Techniques for Intent Classification
Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-Few recipe (Liu et al., 2022) on Flan-T5 (Chung et al., 2022) yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is also very competitive.
Soham Parikh, Quaizar Vohra, Prashil Tumbade, Mitul Tiwari
2023-05-11T22:07:27Z
http://arxiv.org/abs/2305.07157v1
# Exploring Zero and Few-shot Techniques for Intent Classification ###### Abstract Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-Few recipe Liu et al. (2022) on Flan-T5 Chung et al. (2022) yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is also very competitive. ## 1 Introduction Intent classification is the primary natural language understanding task for a virtual agent or a chatbot. Providing intent-utterances for training intent classification models is a laborious process. In this paper, we address this problem by exploring zero and few-shot intent identification using Large Language Models (LLMs) as well as instruction fine-tuned models. Zero-shot and few-shot intent prediction completely remove or substantially reduce the work to provide intent-utterances, respectively. We demonstrate that the following four approaches work well in practice for zero/few-shot intent classification. * **Domain adaptation**: We use a sentence encoder that is pre-trained with our domain knowledge and show that it performs well in a few-shot setting compared to off-the-shelf sentence encoders. * **Data Augmentation**: supplementing human-curated training data with LLM-generated data. * **Zero-shot intent classification**: high-capacity LLMs can be prompted creatively with intent descriptions to do zero-shot classification. * **Parameter-efficient fine-tuning (PEFT)**: finetuning a small number of parameters added to instruction-finetuned LMs using only a few examples. Here is the outline of the rest of the paper. In Section 2 we describe the related work. In Section 3 we detail the datasets used. In Section 4 we describe the four approaches covered in this work for zero/few-shot intent classification. Finally, we conclude with observations in Sections 5 and 6. ## 2 Related Work Recent papers have used domain adaptation Yu et al. (2021) and contrastive learning Zhang et al. (2021) successfully for few-shot intent classification. Yu et al. (2021) use embeddings from a BERT model Devlin et al. (2019) pretrained on domain data to search for utterances belonging to new intents in the domain. Zhang et al. (2021) finetune a BERT model on few-shot data using contrastive learning which learns to discriminate between semantically similar sentences. Our work on domain adaptation differs from these mainly due to our setting which involves serving thousands of customers. For legal reasons, we cannot co-mingle data from these customers to pre-train a single model. Instead, we pre-train a sentence encoder based on an intent taxonomy and out-of-the-box intents, which consist of human-generated synthetic data. In this setting, we can only train very lightweight models for each customer, e.g. 
a dense layer on top of a pre-trained sentence encoder. Data Augmentation is another widely used technique to solve the problem of data scarcity. Recent work on data augmentation has focused on using multiple methods to improve model performance Chen and Yin (2022). Sahu et al. (2022) use a prompt-based approach to generate labeled training data for intent classification using LLMs like GPT-3 Brown et al. (2020). The quality of generated training data using LLMs is highly dependent on the prompts. In this work, we show various prompt-based approaches that generate diverse data for training and boost the performance of intent classifiers. As the usage of conversational agents grows, it is important for them to generalize to new intents. Recent work has focused on performing zero-shot intent detection on unseen intents and domains. Ferreira et al. (2015); Yazdani and Henderson (2015) use additional knowledge from ontologies or attributes whereas Liu et al. (2019) make modifications to capsule networks to generalize to unseen domains. Ma et al. (2019) use embeddings of intent descriptions to perform zero-shot detection of new intents and services. While these methods are effective, they all require training on an initial set of intents. Large Language Models (LLMs) like GPT-3 Brown et al. (2020) and more recently instruction finetuned models like Chung et al. (2022) have shown good zero-shot performance on newly seen tasks without any prior training data on those tasks. In this work, we show that these models are also effective for zero-shot intent classification using just intent descriptions. ## 3 Datasets We use public and private intent classification datasets to benchmark different approaches. For evaluation on public dataset, we use the English train and test sets from MASSIVE for intent classification. MASSIVE contains utterances directed at a physical device spanning 60 intents and 18 domains. For more details on the MASSIVE dataset FitzGerald et al. (2022), we encourage readers to refer to their paper. We also use private benchmarking datasets internal to our company. These datasets contain various intents and utterances in the enterprise setting spanning 3 different domains: IT Service Management (ITSM), HR and Customer Service Management (CSM). The utterances are inspired by interactions between humans and chatbots and are typically queries from goal-oriented conversations where the user needs to resolve an issue. Additionally, some of these datasets also contain out-of-scope (OOS) utterances in their test set i.e., utterances that do not belong to any intent, in order to benchmark irrerelevance detection of intent classification models. Table 1 shows statistics for different datasets used in our benchmarking. ## 4 Methodology In this section, we describe the various methods we test for zero and few-shot learning. ### Domain Adaptation Domain and task-specific pre-training of language model Gururangan et al. (2020) has shown to significantly improve classification accuracy in both low and high resource settings. Techniques like contrastive learning Gao et al. (2021) Feng et al. (2022) are effective for improving the quality of sentence encoders, specifically in low-resource settings. Inspired by these ideas, we use a sentence encoder trained on our domain-specific data along with public datasets. Starting with the LaBSE checkpoint Feng et al. (2022) we train it further by converting intent classification, paraphrasing, etc, as sentence similarity tasks. 
We will refer to this model as ELMSE (enterprise language model based sentence encoder). For training intent-classification models, we freeze ELMSE weights and use its sentence embeddings as features for a trainable non-linear dense layer for classification. We compare ELMSE against other publicly available sentence encoders, \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Dataset** & **Intents** & **Train Size** & **Test Size** & **OOS Samples in Test** \\ \hline MASSIVE & 60 & 11514 & 2974 & No \\ OOTB-dataset* & 27 & 1363 & 3099 & No \\ Benchmark1* & 9 & 270 & 300 & Yes \\ Benchmark2* & 13 & 390 & 420 & Yes \\ Benchmark03* & 31 & 930 & 960 & Yes \\ \hline \end{tabular} \end{table} Table 1: Statistics for intent classification datasets used in this paper. Datasets marked with an asterisk (*) are private, internal benchmarking datasets. Train and Test Sizes correspond to the number of utterances in the respective splits. OOS samples in test set indicates whether there are any out-of-scope samples in the test set. namely LaBSE, Multilingual Universal Sentence Encoder (MUSE) (Yang et al., 2020) and GTR-3B. ELMSE is comparable in size to LaBSE and MUSE while almost 30 times smaller than GTR-3b. We simulate few-shot setting by randomly selecting K utterances per intent from full datasets. We use K=3,5,8,10,15,20 as well as the full dataset. We report results on 4 datasets from Table 1. Since OOTB-dataset was used for pretraining ELMSE, we exclude it from few-shot evaluation. #### 4.1.1 Results for Domain Adaptation Table 2 reports in-scope accuracy and standard deviation averaged of 5 random seeds for 3-shot and 5-shot classification. The results demonstrate that domain adaptation is a very effective approach with improvements of greater than 5 percent in most cases when compared with models of similar size. These results carry over as we increase the number of few-shot utterances to more than 5 as shown in Figure 1. The plots also show that the gap between ELMSE and LaBSE is much larger in a few-shot setting and reduces as K increases. Moreover, ELMSE is only 2-3% worse than GTR-3b which is 30 times larger model. ### Data Augmentation We use data augmentation to generate labeled data for training starting with a seed set of 5 utterances per intent. In this section, we explore different ways of prompting GPT-3 and T5 (Raffel et al., 2020). For evaluating the generated utterances, we use them for training the same type of lightweight classifier as described in 4.1 using ELMSE as the sentence encoder. This section describes different prompt-based approaches for data generation. GPT-3 + ParaphraseFollowing (Sahu et al., 2022), we ask GPT-3 to generate 20 paraphrases of utterances from the same intent taken from the seed set. To encourage diverse generations, we set high temperature and top_p values. GPT-3 + Intent DescriptionsWe describe intents in the prompt and ask GPT-3 to generate 20 utterances for a particular intent. We find that describing all intents prevents hallucinations in the generations. Parrot T5 ParaphrasingWe use the Parrot Paraphrase approach based on T5 (Damodaran, 2021) to generate 20 diverse paraphrased utterances given seed set. Table 3 shows a few generations from our prompt-based approaches. #### 4.2.1 Experimental Setup and Results To evaluate the quality of generated utterances, we use them to train intent classifiers. 
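Since ELMSE is an internal model, the sketch below illustrates the same recipe with the public LaBSE checkpoint from sentence-transformers standing in for it: the encoder is frozen and only a small non-linear dense head is trained on the few-shot (or augmented) utterances. The model id, hidden size, and training loop are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

# Frozen sentence encoder (LaBSE stands in for the internal ELMSE model).
encoder = SentenceTransformer("sentence-transformers/LaBSE")


class IntentHead(nn.Module):
    """Trainable non-linear dense layer on top of frozen sentence embeddings."""

    def __init__(self, emb_dim: int, n_intents: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_intents))

    def forward(self, emb):
        return self.net(emb)


def train_head(utterances, labels, n_intents, epochs=30, lr=1e-3):
    emb = torch.tensor(encoder.encode(utterances))   # no gradients through the encoder
    y = torch.tensor(labels)
    head = IntentHead(emb.shape[1], n_intents)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(emb), y)
        loss.backward()
        opt.step()
    return head

# Example (5-shot): train_head(["reset my password", ...], [0, ...], n_intents=9)
```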
We evaluate the performance of augmented dataset from each approach as mentioned in Section 4.2.1 by training ELMSE classifier model for intent classifica \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline **Few-shot K** & **model** & **Massive** & **Benchmark01** & **Benchmark02** & **Benchmark03** \\ \hline \multirow{4}{*}{3} & LaBSE & 46 (1.7) & 59 (2.9) & 52 (2.7) & 58 (3.1) \\ & MUSE3 & 53 (2.8) & 64 (3.8) & 62 (2.7) & 64 (1.3) \\ & GTR-3b & **59 (1.4)** & 76 (1.4) & **70 (3.3)** & **78 (2.2)** \\ & ELMSE & 57 (2.3) & **77 (2.4)** & 63 (4.6) & 74 (1.7) \\ \hline \multirow{4}{*}{5} & LaBSE & 58 (1.7) & 65 (3.3) & 59 (1.7) & 67 (1.8) \\ & MUSE3 & 61 (0.9) & 70 (2.2) & 66 (1.4) & 70 (1.7) \\ \cline{1-1} & GTR-3b & **66 (1.2)** & 78 (1.0) & **73 (1.7)** & **84 (1.0)** \\ \cline{1-1} & ELMSE & 63 (1.1) & **80 (1.7)** & 67 (2.6) & 79 (1.2) \\ \hline \end{tabular} \end{table} Table 2: Results for domain adaptation on 3 internal datasets along with MASSIVE comparing LaBSE, MUSE, ELMSE, and GTR-3B models. The metric reported here is in-scope accuracy averaged over 5 different selections of few shot data. Numbers inside parenthesis indicate standard deviation across the 5 selections Figure 1: Comparison of ELMSE which is domain adapted with sentence encoders which are not domain adapted tion task. We evaluate on 4 datasets and compared against ELMSE few-shot baseline where K is set to 5. We report the in-scope accuracy and standard deviation averaged over 3 different random seeds. Table 4 shows the result for all approaches using the data augmentation. Unless mentioned explicitly, we do not add the seed set to the training mix. We find that using paraphrases from GPT-3 and Parrot T5 Paraphraser give better results compared to ELMSE Baseline even without the seed set. GPT-3 Augmentations using Intent Descriptions does not perform well but when combined with ELMSE Baseline seed set gives better results. Moreover, given a good quality seed-set, we see that data augmentation using LLMs can boost the performance of intent classification in few-shot setting. ### Prompting Zero-shot Prediction The given sentence needs to be mapped to exactly one of the intents described below: **alarm_set**: user wants to set an alarm **tot_cleaning**: user wants to do some cleaning : **play_podcasts**: user wants to play a podcast or rewind/repeat a particular episode in a podcast **none_of_the_above**: if the user sentence is not about any of the intents above **Sentence**: wake me up at 7am **Intent**: **blarm_set** We use intent names and descriptions for prompting language models to perform zero-shot prediction. The intent descriptions are prefaced with instructions to predict the correct intent and the test utterance is specified at the end. The output is expected to be the correct intent label. Figure 4.3 shows an example prompt from the MASSIVE dataset and the output from LLMs. For evaluation we check for the presence of intent names in the LM completion text as opposed to an exact match and report the first intent predicted in the completion. This is done to account for hallucinations. If no intent names are present in the completion text, we mark it as an "out-of-scope prediction". We create intent descriptions for 4 datasets: 3 internal benchmarking datasets and the open-source MAS \begin{table} \begin{tabular}{l|c|c|c|c} \hline **Approach** & **MASSIVE** & **Benchmark01** & **Benchmark02** & **Benchmark03** \\ \hline ELMSE Baseline & 63 (1.1) & 80 (1.7) & 67 (2.6) & 79 (1.2) \\ GPT-3 w/ Paraphrase Aug. 
& **63 (0.5)** & **84 (0.4)** & 71 (0.3) & **81 (0.5)** \\ GPT-3 w/ Intent Descriptions & 51 (0.5) & 76 (0.4) & 69 (0.5) & 76 (0.2) \\ Parrot T5 & 58 (0.4) & 81 (0.2) & **73 (0.4)** & 81 (0.4) \\ Seed Set Set + GPT-3 w/ Intent Descriptions & 63 (0.8) & 84 (0.4) & 71 (0.3) & 78 (0.9) \\ Seed Set Set + Parrot T5 & 63 (0.6) & 79 (0.4) & 68 (2.2) & 76 (0.6) \\ \hline \end{tabular} \end{table} Table 4: Results for Data Augmentation on 3 internal datasets along with MASSIVE comparing the performance on multiple prompt-based approaches. We report the average in-scope accuracy and standard deviation averaged over 3 different random seeds \begin{table} \begin{tabular}{|c|c|c|} \hline & Prompt & Generated Text \\ \hline GPT3 Paraphrase Aug. & **Task**: Create diverse utterances & Set an alarm for 10 \%/clock. \\ & by paraphrasing the following utterances: & \\ & _scholar alarm to wake me up after 3 hours_ & Wake me up on Friday at 5am \\ GPT3 Paraphrase Aug. & _after for ten_ & so I can make the train. \\ & _wake me up off ridaty at five in the morning i need to catch the train_ & : \\ & _alarm at eight am_ & : \\ & _pleuse at alarm for today_ & Set a timer to wake me up \\ & **Create 20 utterances**: & in three hours \\ \hline & A virtual assistant assistant serves multiple intents. & Can you set an alarm for next week? \\ & Below are the description of the intents: & I need to set an alarm for a specific time \\ & **alarm_set**: user wants to set an alarm & 1 want to set an alarm for a specific time \\ & **iot_cleaning**: user wants to do some cleaning & I want to set an alarm for a certain day \\ GPT3 Aug. Using Intent Deser. & : & : \\ & **play_podcasts**: user wants to play a podcast or rewind/repeat a particular & \begin{tabular}{l} \(\Gamma\) f like to set an alarm \\ for a certain hour \\ for a certain hour \\ \end{tabular} \\ & **place 20 utterances for alarm_set** intent: & : \\ \hline \end{tabular} \end{table} Table 3: Example prompts used in generating text for the corresponding approaches SIVE dataset. We benchmark 2 language models using this type of prompt: GPT-3 (175B parameters) and Flan-T5-XXL (11B parameters), an instruction fine-tuned model. Filtering Intents for LLMsMany conversational agents have a lot of intents, sometimes more than 50. As the number of intents increases, the prompt size increases which incurs higher latency and cost (in case of token-based pricing models). To restrict the length of the prompt, we use sentence similarity to retrieve the top-5 intents and only use those 5 intents in the language model prompt. Using sentence similarity needs a few training examples which makes this a few-shot approach. With just 5 examples per intent, we get more than 0.85 recall for Top-5. To restrict the size of the prompt, we do not pass the training examples to the LMs. SetupFor GPT-3, we set the temperature to 0 and max tokens for generation to 20. We use the default setting generation settings for the Flan-T5-XXL model and do not restrict the number of tokens to be generated. The results with filtering are averaged over 3 runs using different random seeds for sampling the 5 samples per intent. ResultsTable 5 reports the accuracy for in-scope intents and the recall for out-of-scope samples where applicable (samples that do not belong to any of the intents in the dataset). We find that prompting language models with intent descriptions for zero-shot intent classification performs better than few-shot learning using a classifier (Tables 2 and 4). 
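A minimal sketch of the zero-shot setup just described: a handful of example utterances per intent is used to retrieve the top-5 candidate intents by embedding similarity, and the surviving intent descriptions are assembled into the prompt shown earlier. The embedding model, description strings, and exact prompt wording are illustrative assumptions, not the benchmarked configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

retriever = SentenceTransformer("sentence-transformers/LaBSE")


def top_k_intents(query, intent_examples, k=5):
    """Rank intents by cosine similarity between the query and the mean
    embedding of a few example utterances per intent."""
    q = retriever.encode([query], normalize_embeddings=True)[0]
    scores = {}
    for intent, examples in intent_examples.items():
        e = retriever.encode(examples, normalize_embeddings=True).mean(axis=0)
        scores[intent] = float(np.dot(q, e))
    return sorted(scores, key=scores.get, reverse=True)[:k]


def build_prompt(query, descriptions, candidates):
    lines = ["The given sentence needs to be mapped to exactly one of the intents described below:"]
    for intent in candidates:
        lines.append(f"{intent}: {descriptions[intent]}")
    lines.append("none_of_the_above: if the user sentence is not about any of the intents above")
    lines.append(f"Sentence: {query}")
    lines.append("Intent:")
    return "\n".join(lines)

# descriptions = {"alarm_set": "user wants to set an alarm", ...}
# intent_examples = {"alarm_set": ["wake me up at 7am", ...], ...}
# prompt = build_prompt("set an alarm for 6", descriptions,
#                       top_k_intents("set an alarm for 6", intent_examples))
# The prompt can then be sent to GPT-3 or Flan-T5-XXL for completion.
```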
Since this only needs intent descriptions, this approach can generalize to new intents as well. Using the same prompt, Flan-T5-XXL is competitive with GPT-3 in terms of in-scope accuracy and is often better when presented a smaller number of intents in the prompt. While the in-scope accuracy is comparable, GPT-3 clearly outperforms Flan-T5-XXL in terms of the out-of-scope recall, indicating that it is better at detecting irrelevant samples. We attribute the strong performance of Flan-T5-XXL (even though it is 16x smaller) to the multi-task instruction finetuning on over 1800 datasets. For the 3 internal datasets, we also find that using more intents in the prompt works better only up to a certain extent but have excluded the results for the brevity of this paper. While the intent retrieval method does not give perfect Top-5 recall, it helps in keeping the prompt short and hence provides lesser chances for the language models to give a output a wrong label name. Moreover, filtering can also improve the out-of-scope recall as in the case of Benchmark02 dataset. ### Parameter-Efficient FineTuning (PEFT) Taking inspiration from the T-Few recipe (Liu et al., 2022), we add and finetune IA3 adapters from scratch in Flan-T5 models in a few-shot setting which is similar to 4.1. We pick K=1,3,5 utterances per intent. Since the Flan-T5 models are instruction fine-tuned, we use the same prompt from 4.3 and provide the intent name as the target string. For MASSIVE and OOTB-dataset, we restrict the number of intents in the prompt to 15 at training time to prevent out-of-memory exceptions. At in \begin{table} \begin{tabular}{l|c|c|c|c} \hline **Dataset** & **LLM Intents** & **Model** & **In-Scope Accuracy** & **Out-of-scope Recall** \\ \hline \multirow{4}{*}{MASSIVE (60 intents)} & \multirow{2}{*}{5} & Flan-T5-XXL & 68.6 & - \\ & & GPT-3 & **69.2** & - \\ \cline{2-5} & \multirow{2}{*}{60} & Flan-T5-XXL & 73.3 & - \\ & & GPT-3 & **73.9** & - \\ \hline \multirow{4}{*}{OOTB-dataset (27 intents)} & \multirow{2}{*}{5} & Flan-T5-XXL & **83.7** & - \\ & & GPT-3 & 83.4 & - \\ \cline{2-5} & \multirow{2}{*}{27} & Flan-T5-XXL & **86.3** & - \\ & & GPT-3 & 84.9 & - \\ \hline \multirow{4}{*}{Benchmark01 (9 intents)} & \multirow{2}{*}{5} & Flan-T5-XXL & **86.5** & 0.43 \\ & & GPT-3 & 84.6 & **0.97** \\ \cline{1-1} \cline{2-5} & \multirow{2}{*}{9} & Flan-T5-XXL & 86.5 & 0.48 \\ & & GPT-3 & **89.3** & **0.67** \\ \hline \multirow{4}{*}{Benchmark02 (13 intents)} & \multirow{2}{*}{5} & Flan-T5-XXL & **69.7** & 0.65 \\ & & GPT-3 & 60.6 & **0.87** \\ \cline{1-1} \cline{2-5} & \multirow{2}{*}{13} & Flan-T5-XXL & **69** & **0.7** \\ \cline{1-1} & & GPT-3 & 61.3 & 0.67 \\ \hline \end{tabular} \end{table} Table 5: Results for zero-shot prediction on 3 internal datasets along with MASSIVE with GPT-3 and Flan-T5-XXL. In-scope accuracy is the accuracy computed for test samples that belong to the intents in the dataset. Out-of-scope recall is the fraction of out-of-scope test samples which were correctly identified as irrelevant by the model i.e., not belonging to any of the intents ference time, we provide all intents in the prompt. We use all 3 loss functions (language modeling, unlikelihood and length normalized losses) and the same hyperparameters as mentioned in the T-Few paper. For more details about the T-Few recipe, we encourage readers to refer to their paper. Figure 2 compares the results of PEFT against the best results from previously described methods. 
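Before turning to the comparison in Figure 2, the sketch below shows how IA3 adapters can be attached to a Flan-T5 checkpoint with the Hugging Face peft library and finetuned on (prompt, intent-name) pairs. The target-module names, checkpoint, and optimizer settings are assumptions for T5-style models, and only the standard language-modeling loss is shown; the T-Few recipe additionally uses the unlikelihood and length-normalized losses.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import IA3Config, TaskType, get_peft_model

base = "google/flan-t5-large"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Attach IA3 adapters; the module names below are the usual T5 attention/FFN
# projections (an assumption -- adjust to the checkpoint if needed).
config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM,
                   target_modules=["k", "v", "wo"],
                   feedforward_modules=["wo"])
model = get_peft_model(model, config)
model.print_trainable_parameters()   # only a tiny fraction of weights train


def training_step(prompt: str, intent_name: str, optimizer):
    """One language-modeling step on a (prompt, target intent name) pair."""
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    labels = tok(intent_name, return_tensors="pt").input_ids
    out = model(**inputs, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# optimizer = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=3e-3)
# loss = training_step(prompt_with_intent_descriptions, "alarm_set", optimizer)
```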
Flan-T5-XL (3B parameters) consistently outperforms all other methods with just 1 training example per intent. With a few more examples, Flan-T5-Large (770M parameters) also outperforms all other methods except Flan-T5-XXL on the OOTB dataset. This shows that we can train significantly smaller models which are easier to deploy and also outperform LLMs like GPT-3 with just a few parameters using intent descriptions and a handful of examples. ## 5 Observations Comparing results across the 4 approaches, we notice that all 4 approaches are effective in low resource settings. We find that domain adaptation is a cheap option in terms of size of the models but it still requires 5-10 training utterances per intent for getting accuracy above 70%. Data Augmentation using paraphrasing further helps in most cases by 2-4 percentage points. However, expanding to new domains requires sentence-pairs data for training the sentence encoder which can involve days of human labeling. Zero shot classification using intent descriptions with LLMs and instruction finetuned models performs even better than domain adaptation with data augmentation and doesn't require any utterances to be configured per intent. However a good description for each intent is required. Additionally, these models can be expensive to operationalize. Inference on Flan-T5-XXL requires using A100 GPUs. GPT-3 is not open-source and based on a pricing model which can be expensive to scale to thousands of customers. Parameter efficient fine-tuning (PEFT) of instruction finetuned models like Flan-T5-XL and Flan-T5-Large offers the best performance across all methods and often by a large margin. Moreover, these models are only a fraction of the size of GPT-3 and Flan-T5-XXL and much easier to operationalize at scale with far lesser compute resources. ## 6 Conclusion In this paper, we addressed the task of zero/few-shot intent identification with Large Language Models (LLMs). We presented four approaches, namely domain adaptation, data augmentation, zero-shot prediction with prompting, and parameter-efficient fine-tuning. Our experimental results demonstrate that LLMs and larger instruction fine-tuned language models are very effec Figure 2: Plots comparing in-scope accuracy of different Flan-T5 models using Parameter-efficient FineTuning (PEFT) with the T-Few recipe. The dotted lines show the best results on each dataset from previously described methods. The shaded regions show the standard deviation tive in zero-shot setting with in-context prompting. Smaller instruction finetuned models with adapters are even better when adapter-finetuned on just 1 or 3 examples per intent. We hope these results are useful for practical deployment of conversational agents in low-resource settings as well as aiding non-practitioners in building their intent classification models. In the future, we plan to extend this work by domain adapting smaller instruction fine-tuned models in a multi-task setting and exploring their zero-shot capabiltiies.
2310.13394
POSQA: Probe the World Models of LLMs with Size Comparisons
Embodied language comprehension emphasizes that language understanding is not solely a matter of mental processing in the brain but also involves interactions with the physical and social environment. With the explosive growth of Large Language Models (LLMs) and their already ubiquitous presence in our daily lives, it is becoming increasingly necessary to verify their real-world understanding. Inspired by cognitive theories, we propose POSQA: a Physical Object Size Question Answering dataset with simple size comparison questions to examine the extremity and analyze the potential mechanisms of the embodied comprehension of the latest LLMs. We show that even the largest LLMs today perform poorly under the zero-shot setting. We then push their limits with advanced prompting techniques and external knowledge augmentation. Furthermore, we investigate whether their real-world comprehension primarily derives from contextual information or internal weights and analyse the impact of prompt formats and report bias of different objects. Our results show that real-world understanding that LLMs shaped from textual data can be vulnerable to deception and confusion by the surface form of prompts, which makes it less aligned with human behaviours.
Chang Shu, Jiuzhou Han, Fangyu Liu, Ehsan Shareghi, Nigel Collier
2023-10-20T10:05:01Z
http://arxiv.org/abs/2310.13394v1
# POSQA: Probe the World Models of LLMs with Size Comparisons ###### Abstract Embodied language comprehension emphasises that language understanding is not only a matter of mental processing in the brain, but also involves interactions with the physical and social environment. With the explosive growth of Large Language Models (LLMs) and their already ubiquitous presence in our daily lives, it is becoming increasingly necessary to verify their real-world understanding. Inspired by cognitive theories, we propose **POSQA**: a **P**hysical **O**bject **S**ize **Q**uestion-**A**nswering dataset with simple size comparison questions to examine the extremity and analyse the potential mechanisms of the embodied comprehension of the latest LLM. We show that even the largest LLMs today perform poorly under the zero-shot setting. We then push their limits with advanced prompting techniques and external knowledge augmentation. Furthermore, we investigate whether their real-world comprehension primarily derives from contextual information or internal weights and analyse the impact of prompt formats and report bias of different objects. Our results show that real-world understanding that LLMs shaped from textual data can be vulnerable to deception and confusion by the surface form of prompts, which makes it less aligned with human behaviours. + Footnote †: Code: [https://github.com/cambridgeltl/POSQA](https://github.com/cambridgeltl/POSQA) + Footnote †: Code: [https://github.com/cambridgeltl/POSQA](https://github.com/cambridgeltl/POSQA) + Footnote †: Code: [https://github.com/cambridgeltl/POSQA](https://github.com/cambridgeltl/POSQA) ## 1 Introduction The rapid growth of recent Large Language Models (LLMs) such as ChatGPT has led to their increased use in various applications (Gozalo-Brizuela and Garrido-Merchan, 2023; Sobania et al., 2023; Lehnert, 2023; Guo et al., 2023; Nov et al., 2023; Jiao et al., 2023). With the rapid growth of interest in developing Embodied Language Models (ELM) (Dasgupta et al., 2023; Driess et al., 2023; Vemprala et al., 2023), recently there has been increasing interest in investigating whether LLM have an aligned understanding of the real world as our human from cognitive and physiological perspectives (Prystawski et al., 2022; Binz and Schulz, 2022; Hagendorff et al., 2022; Mahowald et al., 2023). Embodied language comprehension (Hochak et al., 2014; Buccino et al., 2016; Fischer and Zwaan, 2008; Barsalou, 1999), a possible explanation for human cognition, suggests that the human develops an understanding of the physical world related by language by our physical experiences and sensory perceptions of the world around us. When we process languages, we reemulate or recreate the experiences mentioned in the language to understand and interact with those languages more meaningfully. Although common sense physical reasoning has been widely explored previously with various benchmarks, such as PIQA (Bisk et al., 2020), MMLU-Physics (Hendrycks et al., 2021), UTOPIA (Liu et al., 2022), and PROST (Aroca-Ouellette et al., 2021), few studies analyse the understanding of LLMs about object size, which is actually central to various fundamental aspects of cognition such as implicit memory, object recognition, conceptual processing, and perception-action coordination (Biederman and Cooper, 1992; Barsalou, 2008). 
Therefore, inspired by cognitive experiments (de Koning et al., 2017), we proposed **POSQA**: a **P**hysical **O**bject **S**ize **Q**uestion-**A**nswering dataset containing 12,000 questions of size comparisons between pairs of objects to investigate whether the latest LLMs have aligned cognition with our human and identify the limits of their real-world understanding with various prompt-based experiments. Empirical findings suggest that under the zero-shot setting, the performance of popular LLMs such as GPT-3 is slightly better than random guessing. However, increasing the types and amount of external knowledge presented in the prompt about objects has a significant impact on the behaviour of LLMs. In particular, LLMs tend to develop their mental representation of objects referred to based on the given context in prompts rather than relying on their internal weights, even if the given context information is incorrect. To conclude, our contributions can be summarized into three folds: * We propose a simple but effective size comparison dataset to probe the real-world understanding of LLMs. * We analyse the limits of the real-world understanding of LLMs with comprehensive prompt-based probing experiments. * We discuss the vulnerability and the alignment of the world knowledge of LLMs. ## 2 Background ### World Models and World Knowledge There has been a wide and long-lasting debate about whether LLMs really have their internal world models and to what extent their world knowledge aligns with humans. Mind's Eye Liu et al. (2022) proposed to augment language models with an external physical simulation engine for better understanding the physical phenomena. RAP Hao et al. (2023) suggests LLMs as both a world model and a reasoning agent and includes a principled planning algorithm for strategic exploration in a vast reasoning space. Xiang et al. (2023) deploys an embodied agent in a world model to endow LLMs with a diverse set of embodied experiences by fine-tuning. Although the world model of LLM can be effectively augmented and they indeed display a certain level of real-world understanding, there is still a lack of sufficient study on the boundary of the world understanding of LLMs. Similarly to other research on LLMs inspired by cognitive science and psychology Binz and Schulz (2022); Bisk et al. (2020); Mahowald et al. (2023); Prystawski et al. (2022), we propose to audit the real-world understanding of LLM with questions as simple as size comparison. ### Physical World Understanding Datasets PIQA Bisk et al. (2020) is a popular data set for physical commonsense reasoning to benchmark progress in physical commonsense understanding. PIQA dataset consists of more than 16,000 training QA pairs, with additional 2K and 3K held for development and testing. The task is multiple choice question answering: Given a question and two possible solutions, a model or a human must choose the most appropriate solution, of which exactly one is correct. MMLU-Physics Hendrycks et al. (2021) contains 206 samples of physics consisting of multiple choice questions at the college and high school level to evaluate the academic and professional understanding of the model in the physics domain. UTOPIA Liu et al. (2022) is a new multi-task physics alignment dataset that aims to benchmark how well current LMs can understand and areas over some basic laws of physics. It leverages a physics engine to generate data for 39 subtasks covering six common scenes that involve understanding basic principles of physics. 
PROST Aroca-Ouellette et al. (2021) is a new probing dataset to evaluate the ability of pre-trained LMs to understand and reason about the physical world. It contains 18,736 multiple-choice questions made from 14 manually curated templates, covering 10 physical reasoning concepts. The existing datasets contain questions from different dimensions, but they fail to effectively evaluate some particular aspect of the understanding of the physical world of LLMs. Since we want to probe the effect of context for in-context learning, it is necessary to have a content-controllable and dimension-specific dataset. Based on the requirements, we propose **POSQA**: a **P**hysical **O**bject **S**ize **Q**uestion-Answering dataset which is also designed to test the size understanding ability of LLMs on physical world objects. ## 3 Posqa POSQA consists of 12,000 multiple choice questions designed to probe the physical world understanding ability of the language model in the size dimension. We design two types of questions, each of them containing 6,000 questions. Table 1 shows the statistics of POSQA. The size comparison covers 92 entities, ranging from proton to universe. The entity and size information are obtained from Nikon Universcale, which aims to allow people to see and understand the relative size of the full range of known objects in our universe. We design four manually written templates to construct the two types of size questions. We show the templates in detail below. General QuestionA general question requires the answer "yes" or "no". We use two templates to generate general questions. Template 1: Is _Entity A_ bigger than _Entity B_? Template 2: Is _Entity A_ smaller than _Entity B_? We replace _Entity A_ and _Entity B_ with different entity names. For each template, we use the same Entity A - Entity B pair to generate a question, which is to avoid introducing bias. We generate 3,000 questions for each template, so there are 6,000 general questions in total. Based on the actual size of each entity, we label each question with "yes" or "no". The general questions aim to evaluate the size knowledge of objects contained in the LMs and the understanding of LMs on yes/no labels. Special QuestionA special question begins with an interrogative word "which". We design two templates for special questions. Template 3: Which one is bigger between _Entity A_ and _Entity B_? Template 4: Which one is smaller between _Entity A_ and _Entity B_? Similarly, we also use the same _Entity A - Entity B_ pair to generate a question on each template. We generated a total of 6,000 questions, 3,000 questions for each template. The label of a special question is different from the general question. In the special question, we label each question with the actual entity name, which is exactly the same as _Entity A_ or _Entity B_. The special questions are intended to test the understanding of the size of the LMs and the understanding of the LMs about the interrogative word 'which' of the question. Entity FeatureWe collect the features of the 92 entities, including scale, size, magnitude, and text. The scale feature stores the size information of the entity in a specific size unit. For example, the scale of the Solar System is 9 billion km and the scale of an Atom is 100 pm. The size feature stores the absolute value of the size of the entity representing in scientific notation. The magnitude feature represents the exponent of size which is stored in the size feature. 
The text feature contains a textual description of the entity. ## 4 Methodology In this section, we cover the details of the proposed approach, first describing the designed prompt in Section 4.1, followed by the models used in the experiments in Section 4.2 and the introduction of the evaluation methods in Section 4.3. ### Prompt Design We construct different prompts that are used in our experiments. #### 4.1.1 Plain Question Prompt Plain question prompt is to query the model with a single plain question without any hint or knowledge. We aim to test how the model performs on POSQA without any external auxiliary information. The model answers the query purely based on the knowledge stored within its weights. #### 4.1.2 Relevant Knowledge-augmented Prompt External knowledge has been shown to be helpful for various NLP tasks, including common sense reasoning Liu et al. (2022). We consider two kinds of knowledge in our experiments: (1) Exact Size Information from POSQA (2) Generated Knowledge from GPT-3. Knowledge is considered as the context that is concatenated with a question. We use the knowledge-augmented prompt to query LLMs to see how the context would affect the model's prediction. Exact Size InformationThe exact size information of each object in POSQA is stored as an entity feature. For a size comparison question, first we retrieve the exact size the two entities respectively, then we rewrite the original prompt to a knowledge-augmented prompt. In particular, we add a sentence to describe the size of the two objects before the question. The sentence is: The size of _Entity A_ is _Exact Size of Entity A_. The size of _Entity B_ is _Exact Size of Entity B_. Generated KnowledgeWe generate the entity-related knowledge statement by querying an LLM. We consider two types of knowledge of entities: (1) general knowledge, which describes the general information of an entity; (2) size knowledge, which describes the size information of an entity. The purpose is to investigate which knowledge is more \begin{table} \begin{tabular}{l c c c} \hline \hline Question Type & Bigger & Smaller & Total \\ \hline General Question & 3,000 & 3,000 & 6,000 \\ Special Question & 3,000 & 3,000 & 6,000 \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of POSQA. useful to the model when answering the size-related questions. We query GPT-3 using the prompt 'Generate knowledge about _entity_ in one sentence.' for extraction of general knowledge and 'Generate size knowledge of _entity_ in one sentence.' for size knowledge extraction. The knowledge generated from GPT-3 is stored and used as a context in the knowledge-augmented prompt. Then we concatenate the knowledge generated with the size comparison question to query LLMs. #### 4.1.3 Adversarial Prompt with Knowledge Perturbation In addition to the above useful prompt, we also design some adversarial prompts with knowledge perturbation. We aim to test how the model behaves when the given context is not useful or against the knowledge stored in its internal weights. We consider three settings: (1) Partial Information Provided (2) Masking Particular Information (3) Counterfactual Size Information. Partial Information ProvidedIn Exact Size Information Prompt, we provide the exact size information of two entities as the context. To investigate to what extent the model would utilise the context information, instead of giving two entity size information, we only provide one of them. 
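To make the prompt designs concrete, the sketch below generates a size-comparison question from the templates of Section 3 and wraps it in the plain, exact-size, counterfactual, and partial-information variants described above. Entity names and size strings are illustrative; the exact wording of POSQA prompts may differ.

```python
def general_question(entity_a, entity_b, bigger=True):
    """Template 1 / Template 2 general (yes/no) questions."""
    relation = "bigger" if bigger else "smaller"
    return f"Is {entity_a} {relation} than {entity_b}?"


def special_question(entity_a, entity_b, bigger=True):
    """Template 3 / Template 4 special (which) questions."""
    relation = "bigger" if bigger else "smaller"
    return f"Which one is {relation} between {entity_a} and {entity_b}?"


def exact_size_prompt(question, entity_a, size_a, entity_b, size_b):
    """Knowledge-augmented prompt with the gold sizes prepended as context."""
    context = (f"The size of {entity_a} is {size_a}. "
               f"The size of {entity_b} is {size_b}.")
    return f"{context} {question}"


def counterfactual_prompt(question, entity_a, size_a, entity_b, size_b):
    """Adversarial variant: the two size values are swapped to mislead the model."""
    return exact_size_prompt(question, entity_a, size_b, entity_b, size_a)


def partial_size_prompt(question, entity, size):
    """Adversarial variant: the context gives the size of only one entity."""
    return f"The size of {entity} is {size}. {question}"


q = general_question("an atom", "the Solar System")
ctx_q = exact_size_prompt(q, "an atom", "100 pm", "the Solar System", "9 billion km")
adv_q = counterfactual_prompt(q, "an atom", "100 pm", "the Solar System", "9 billion km")
partial_q = partial_size_prompt(q, "an atom", "100 pm")
```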
This is to test whether the model could extract useful information from its internal weights to use together with the context information. Masking Particular InformationTo further investigate whether the exact size helps the model when answering size comparison questions, we manually mask important information in the context. For example, in Exact Size Information Prompt, we mask the exact size or entities, respectively, to see the performance gap between using the masked prompt and the unmasked prompt. We replace the exact size of the entities with the mask token _[MASK]_. Counterfactual Size InformationInstead of providing the true size information in the context, we replace it with the wrong size information to investigate what predictions the model would make when the external context knowledge contradicts the knowledge stored in its weights. If the model will fully utilise the counterfactual size information when answering the size comparison questions. In particular, we swap the size information of the two entities in the prompt to mislead the model. ### Models Previous work (Wei et al., 2022)(Sanh et al., 2022) has shown that instruction-tuned language models on a collection of NLP tasks formatted with instructions substantially improve the ability of language models to perform an unseen task from an instruction, especially zero-shot performance. In this work, we do experiments on three kinds of instruction-tuned model: Flan-T5 (from 80M to 3B) (Chung et al., 2022), InstructGPT (175B) (Ouyang et al., 2022), and recent ChatGPT. Flan-T5 is instruction-tuned on 1,836 NLP tasks that initialise from prior public checkpoints of T5 (Raffel et al., 2020). InstructGPT uses reinforcement learning from human feedback (Christiano et al., 2017) (Stiennon et al., 2020) to fine-tune GPT-3 (Brown et al., 2020) to follow a broad class of written instructions. ChatGPT uses the same training methods as InstructGPT, but with slight differences in data collection setup. It can interact in the form of a conversational dialogue and provide human-like responses. ### Evaluation In this part, we describe the evaluation process and the evaluation metrics we use. #### 4.3.1 Answer Mapping Since we query the LLMs to generate the answer to the question, it cannot be guaranteed that all the generated answers are exactly the labels. We use an answer mapping process to map the generated answer to the answer label. For general questions, the labels are yes or no. If the predicted answer contains 'yes'/'YES', we assume its predicted label is yes. If the predicted answer contains the 'no'/'NO', we assume that its predicted label is no. For special questions, the labels are entity names. We calculate the Levenshtein distance (Li and Liu, 2007) between the predicted entity and the two candidate entities, respectively. The Levenshtein distance is a string metric for measuring the difference between two sequences, and a smaller distance means the two strings are more similar. We choose the candidate entity with smaller Levenshtein distance as the predicted label. #### 4.3.2 Metrics We consider four metrics in our experiments. **Accuracy** and **Macro-F1** scores are two commonly used metrics for the evaluation of the performance of the model in classification tasks. Accuracy considers global precision and recall of the categories, while Macro-F1 computes the average of the F1 scores obtained by individual categories. 
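A minimal sketch of the answer-mapping step of Section 4.3.1, assuming the raw model completions are plain strings: yes/no detection by substring matching for general questions, and Levenshtein distance (implemented inline here rather than via a package) to pick the closer candidate entity for special questions; the helper names are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def map_general_answer(completion: str):
    """Map a free-form completion to the yes/no label space."""
    text = completion.lower()
    if "yes" in text:
        return "yes"
    if "no" in text:
        return "no"
    return None   # completion could not be mapped


def map_special_answer(completion: str, entity_a: str, entity_b: str):
    """Return the candidate entity closer (in edit distance) to the completion."""
    da = levenshtein(completion.lower(), entity_a.lower())
    db = levenshtein(completion.lower(), entity_b.lower())
    return entity_a if da <= db else entity_b

# map_general_answer("Yes, it is bigger.")                        -> "yes"
# map_special_answer("The Solar System", "atom", "Solar System")  -> "Solar System"
```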
To explore the influence of context on the prompt, we propose two quantitative evaluation metrics: **Context Effective Rate (CER)** and **Context Misleading Rate (CMR)**. We calculate CER and CMR by comparing the output of a model using a prompt that contains context information with the output of a model using a prompt that does not contain any context information. Specifically, CER evaluates how many incorrectly answered questions can be correctly answered after adding the context in the prompt. CMR evaluates how many correctly answered questions can be incorrectly answered after adding the context to the prompt. ## 5 Results and Analysis ### Baseline The baseline is to query LLMs using the Plain Question prompt and the results are presented in Table 2 for general questions and Table 3 for special questions. When the number of parameters of the model exceeds 250M (Flan-T5-Base), the ability to answer the special size comparison questions begins to emerge. GPT3.5-Turbo achieves the best accuracy score and Macro-F1 score on both types of questions. It is surprising that significantly smaller models, such as Flan-T5-Large (780M) and Flan-T5-XL (3B), exhibit superior performance. For example, Flan-T5-XL outperforms GPT3-Davinci (175B) by 0.06 precision in answering general questions and 0.03 precision in answering special questions. In general, LLMs perform better at special questions. For example, the accuracy increases by 0.14 on Flan-T5-XL and 0.04 on GPT3.5-Turbo, respectively, from answering general questions to special questions. Our empirical investigation also reveals that the GPT3-Davinci model tends to provide an initial incorrect answer, despite subsequently offering a correct explanation for the given question. This phenomenon occurs especially when directly querying GPT3-Davinci with general questions to get the "yes" or "no" answers. We speculate that it could be attributed to the use of first-word sampling techniques during the decoding process. However, this phenomenon does not occur in GPT3.5-Turbo which has been optimised for dialogue scenario. Even GPT3.5-Turbo, one of the most powerful LLMs, can only achieve an average 0.71 accuracy score on these two types of questions. This suggests that although larger LMs may possess certain advantages in most situations, they may still lack real-world understanding when it comes to answering basic size comparison questions. 
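Before moving to the knowledge-augmented results, here is a small sketch of one way to operationalize CER and CMR from per-question correctness flags of the no-context and with-context runs; the variable names and the normalization over the respective subsets are illustrative assumptions.

```python
def cer_cmr(correct_without_context, correct_with_context):
    """Context Effective Rate and Context Misleading Rate.

    Both arguments are parallel lists of booleans: whether each question
    was answered correctly without / with the context in the prompt.
    CER: fraction of previously wrong answers that the context fixes.
    CMR: fraction of previously right answers that the context breaks.
    """
    wrong_before = [i for i, ok in enumerate(correct_without_context) if not ok]
    right_before = [i for i, ok in enumerate(correct_without_context) if ok]
    cer = (sum(correct_with_context[i] for i in wrong_before) / len(wrong_before)
           if wrong_before else 0.0)
    cmr = (sum(not correct_with_context[i] for i in right_before) / len(right_before)
           if right_before else 0.0)
    return cer, cmr

# Example: 6 questions, 3 answered correctly without context.
# without = [True, False, True, False, False, True]
# with_ctx = [True, True, False, True, False, True]
# cer_cmr(without, with_ctx)  -> (0.666..., 0.333...)
```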
\begin{table}
\begin{tabular}{l c c|c c|c c|c c|c c|c c} \hline **Prompt** & \multicolumn{2}{c|}{FT5-Small} & \multicolumn{2}{c|}{FT5-Base} & \multicolumn{2}{c|}{FT5-Large} & \multicolumn{2}{c|}{FT5-XL} & \multicolumn{2}{c|}{GPT3-Davinci} & \multicolumn{2}{c}{GPT3.5-Turbo} \\ **Metric** & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 \\ \hline Plain Question & 0.51 & 0.34 & 0.50 & 0.35 & 0.56 & 0.54 & 0.58 & 0.51 & 0.52 & 0.38 & 0.69 & 0.69 \\ + General Knowledge Information & 0.50 & 0.34 & 0.52 & 0.52 & 0.64 & 0.63 & 0.67 & 0.67 & 0.51 & 0.35 & 0.79 & 0.79 \\ + Size Knowledge Information & 0.50 & 0.40 & 0.55 & 0.53 & 0.63 & 0.61 & 0.76 & 0.76 & 0.56 & 0.48 & 0.85 & 0.85 \\ + Exact Size Information & 0.50 & 0.50 & 0.50 & 0.54 & 0.62 & 0.77 & 0.87 & 0.88 & 0.73 & 0.89 & 0.90 & 0.89 \\ \hline + Only Head Entity Gold Size & 0.50 & 0.34 & 0.54 & 0.50 & 0.63 & 0.60 & 0.64 & 0.64 & 0.51 & 0.35 & 0.69 & 0.69 \\ + Only Tail Entity Gold Size & 0.50 & 0.34 & 0.49 & 0.34 & 0.55 & 0.54 & 0.66 & 0.63 & 0.51 & 0.34 & 0.75 & 0.74 \\ + Masking Size Information & 0.50 & 0.49 & 0.49 & 0.34 & 0.53 & 0.41 & 0.74 & 0.74 & 0.53 & 0.39 & 0.80 & 0.80 \\ + Masking Entity Information & 0.50 & 0.33 & 0.50 & 0.37 & 0.49 & 0.42 & 0.63 & 0.58 & 0.52 & 0.46 & 0.70 & 0.69 \\ + Counterfactual Size Information & 0.50 & 0.47 & 0.49 & 0.33 & 0.47 & 0.33 & 0.23 & 0.23 & 0.42 & 0.41 & 0.38 & 0.38 \\ \hline \end{tabular}
\end{table} Table 2: The results of using different prompt settings on various models on the general questions of POSQA.

Figure 1: The results of CER and CMR using different knowledge-augmented prompts (except for Counterfactual Size Information) on various models on the general questions of POSQA.

### Prompts with Augmented Knowledge

The performance of using the Relevant Knowledge-Enhanced Prompt as contextual information to query LLMs is shown in Table 2 and Table 3. According to the presented results, augmenting the original size comparison questions with supplementary knowledge about the mentioned objects can significantly enhance an LLM's performance, suggesting that these models can effectively use contextual information to improve their real-world understanding. In particular, as the model size scales beyond 780M (Flan-T5-Large), its ability to utilise prompt information increases greatly. Among these three knowledge-augmented prompts, adding the exact size information of the two compared objects to the context is the most effective, which matches intuition. With this prompt, GPT3.5-Turbo achieves a 0.9 accuracy score on both general and special questions, showing a strong ability to utilise the exact size information in the context. GPT3-Davinci also obtains 0.89 accuracy and Macro-F1 scores on special questions. Even FT5-XL achieves a 0.87 accuracy score and a 0.88 Macro-F1 score on general and special questions, respectively, which is quite close to GPT3.5-Turbo with regard to size comparison ability. In addition to the Gold Size Information, the generated knowledge information and size information are also helpful to LLMs in answering the size comparison questions. The results also show that size knowledge is more effective than general knowledge as contextual information. Specifically, on GPT3.5-Turbo, using size knowledge information increases the accuracy and Macro-F1 scores by 0.06 on general questions and by 0.03 on special questions compared to using general knowledge information.
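To make the different prompt settings concrete, the following sketch shows how the variants compared above could be assembled. The template wording and argument names are illustrative only and do not reproduce the exact POSQA prompt templates.

```python
def build_prompt(head, tail, head_size=None, tail_size=None,
                 mask_sizes=False, counterfactual=False):
    """Assemble a general (yes/no) size-comparison query with optional context.
    Templates here are hypothetical stand-ins for the actual POSQA wording."""
    question = f"Is {head} bigger than {tail}? Answer yes or no."
    if head_size is None or tail_size is None:
        return question                      # Plain Question prompt
    if counterfactual:                       # swap the two sizes to mislead the model
        head_size, tail_size = tail_size, head_size
    if mask_sizes:                           # hide the numbers but keep the sentence frame
        head_size = tail_size = "[MASK]"
    context = (f"The typical size of {head} is {head_size}. "
               f"The typical size of {tail} is {tail_size}.")
    return context + " " + question

print(build_prompt("a horse", "a cat", "2 m", "0.4 m", counterfactual=True))
```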
Although there is a significant improvement after providing LLMs with useful contextual information, the results still fall short of human-level understanding of the real world, especially when the ground-truth exact size information is added. Furthermore, the LLMs' performance differs noticeably between the two types of questions, despite the two types being semantically equivalent for humans. This observation indicates that LLMs are more sensitive to question formats than humans. In summary, the findings suggest that supplementing original questions with additional information can enhance an LLM's real-world understanding. However, even with this augmentation, LLMs' ability to achieve human-level understanding of the real world is still limited, and their sensitivity to question formats remains a challenge.

### Context vs. Weights

Figure 2: The results of CER and CMR using different knowledge-augmented prompts (except for Counterfactual Size Information) on various models on the special questions of POSQA.

\begin{table}
\begin{tabular}{l c c|c c|c c|c c|c c|c c} \hline **Models\(\rightarrow\)** & \multicolumn{2}{c|}{FT5-Small} & \multicolumn{2}{c|}{FT5-Base} & \multicolumn{2}{c|}{FT5-Large} & \multicolumn{2}{c|}{FT5-XL} & \multicolumn{2}{c|}{GPT3-Davinci} & \multicolumn{2}{c}{GPT3.5-Turbo} \\ **Prompt\(\downarrow\), Metric\(\rightarrow\)** & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 & Acc & Macro-F1 \\ \hline Plain Question & 0.51 & 0.49 & 0.51 & 0.50 & 0.63 & 0.62 & 0.72 & 0.71 & 0.69 & 0.69 & 0.73 & 0.72 \\ + Size Knowledge Information & 0.52 & 0.50 & 0.55 & 0.53 & 0.74 & 0.73 & 0.79 & 0.78 & 0.76 & 0.76 & 0.83 & 0.83 \\ + General Knowledge Information & 0.52 & 0.50 & 0.54 & 0.52 & 0.67 & 0.66 & 0.72 & 0.71 & 0.71 & 0.71 & 0.80 & 0.80 \\ + Exact Size Information & 0.52 & 0.50 & 0.56 & 0.54 & 0.77 & 0.77 & 0.88 & 0.88 & 0.89 & 0.89 & 0.90 & 0.89 \\ \hline + Only Head Entity Gold Size & 0.50 & 0.48 & 0.52 & 0.51 & 0.68 & 0.67 & 0.77 & 0.77 & 0.58 & 0.57 & 0.74 & 0.73 \\ + Only Tail Entity Gold Size & 0.51 & 0.49 & 0.52 & 0.51 & 0.64 & 0.63 & 0.77 & 0.76 & 0.69 & 0.68 & 0.76 & 0.76 \\ + Masking Size Information & 0.52 & 0.49 & 0.51 & 0.49 & 0.68 & 0.68 & 0.76 & 0.76 & 0.67 & 0.67 & 0.64 & 0.63 \\ + Masking Entity Information & 0.51 & 0.47 & 0.54 & 0.52 & 0.71 & 0.69 & 0.84 & 0.84 & 0.84 & 0.83 & 0.82 & 0.82 \\ + Counterfactual Size Information & 0.52 & 0.50 & 0.49 & 0.47 & 0.32 & 0.32 & 0.18 & 0.18 & 0.25 & 0.25 & 0.29 & 0.29 \\ \hline \end{tabular}
\end{table} Table 3: The results of using different prompt settings on various models on the special questions of POSQA.

By using the Adversarial Prompt with Knowledge Perturbation, we further explore the influence of the contextual information. The results in Table 2 and Table 3 provide valuable insights into how providing additional information can help improve LLMs' real-world understanding. When we only provide LLMs with partial information, the performance drops significantly compared to jointly providing information about both objects. For instance, on GPT3.5-Turbo, the accuracy score decreases by 0.21 on general questions and by 0.16 on special questions when only providing the exact size information of the head entity. The results also reveal an imbalance in the utilisation of information, as the LMs tend to benefit more from extra information about the tail objects than about the head objects. Masking either the size or the entity information affects the performance of the LLMs.
Interestingly, masking size information decreases the performance more on the special questions, while masking entity information decreases the performance more on the general questions. For example, on GPT3.5-Turbo, the accuracy score is 0.8 on general questions but only 0.64 on special questions when masking size information. When masking entity information, GPT3.5-Turbo only gets a 0.7 accuracy score on general questions but 0.82 on special questions. Even when key information is masked, the context still provides some useful information which LLMs can utilise when answering one type of question. Figure 1 and Figure 2 show the CER and CMR results using different knowledge-augmented prompts (except for Counterfactual Size Information) on various models on general and special questions, respectively. As the scale of LMs grows, their ability to utilise contextual information in the prompt is also enhanced. For example, the results of GPT3.5-Turbo are scattered in the lower right of the diagram, consistent with the expectation that a powerful LLM should have a high CER and a low CMR. When LLMs are provided with counterfactual size information that is not helpful for answering size comparison questions, the accuracy and Macro-F1 scores decrease to a large extent. It should be noted that FT5-XL has the lowest accuracy and Macro-F1 scores on both types of questions, with only 0.23 and 0.18 respectively. Figure 3 illustrates the results of CER and CMR using the Counterfactual Size Information prompt on various models for the two types of questions. The CER is low on all models, while the CMR increases as the scale of the models grows. Larger LMs (e.g. GPT3-Davinci) readily utilise the counterfactual contextual information, as revealed by their low CER and high CMR on both general and special questions. The results also highlight the importance of contextual information in LLMs' real-world understanding, as providing counterfactual information significantly decreases their performance, with high CMR in answering both general and special questions. This indicates that the LLMs rely more on contextual information than on the internal weights learnt during pre-training, which reflects their in-context learning capabilities.

Figure 3: The results of CER and CMR using the Counterfactual Size Information prompt on various models on the two types of questions of POSQA.

In summary, the findings provide valuable insights into the strengths and limitations of LLMs in real-world understanding. They demonstrate that providing additional information can enhance LLMs' performance, but also that LLMs are highly sensitive to contextual information. This raises concerns about the robustness of LLMs' real-world understanding, as they may be easily induced to perform harmful actions in real-world scenarios.

## 6 Discussion

**LLMs' ability to understand the size of physical objects in the real world remains a challenge.** Our experiments underscore that even the most advanced LLMs at our disposal struggle to consistently grasp the sizes of physical objects. Specifically, GPT3.5-Turbo registers an average accuracy score of 0.71 when directly addressing the two question types present in POSQA. This performance reveals a pronounced disparity compared to human comprehension of size, particularly when object size information is explicitly provided.
Humans, when confronted with size comparison tasks, often engage in mental simulations, drawing upon their accumulated knowledge to envisage the sizes of objects [14]. For a human, the ability to tackle size comparison effectively hinges on possessing adequate size-related information about the objects in question. Similarly, LLMs should leverage the knowledge encoded in their internal weights to adeptly respond to size comparison queries.

**LLMs prefer to utilise the information in the given context rather than the knowledge stored in their internal weights.** Our experiments demonstrate that giving useful information in a prompt can enhance the performance of LLMs. For instance, GPT3.5-Turbo achieves a 0.9 accuracy score on both types of questions. However, LLMs cannot make good use of the external context information when they are only given partial information. It is noteworthy that adding erroneous information to the prompt largely decreases the performance of LLMs. For example, GPT3.5-Turbo only gets 0.38 accuracy and Macro-F1 scores on general questions and 0.29 accuracy and Macro-F1 scores on special questions. Research [20] has shown that, even though adding an instruction to ignore irrelevant information brings performance gains, a single piece of irrelevant information can distract the models and substantially degrade their performance. These results indicate that the context in the prompt is extremely important for LLMs and that LLMs will utilise the information in the context.

**LLMs are sensitive to the format of the query, even when the queries are semantically equivalent.** In our experiments, we query LLMs with different formats of size comparison questions from POSQA. The results show that LLMs are not robust when faced with queries that share the same semantics but take different forms. For example, FT5-XL achieves a 0.58 accuracy score on general questions versus a 0.72 accuracy score on special questions. Based on the behaviour of the LLMs when answering size comparison questions, it is not certain which form of question the models are better at solving, and the performance is also influenced by the context added to the prompt. Although the performance gap between the two types of questions narrows for GPT3.5-Turbo, this sensitivity remains a noteworthy problem for training robust LLMs in the future.

**Alignment of World Models** To further investigate the alignment of LLM world models, we randomly sampled 100 examples and annotated them with four human annotators in two settings: (1) Online: annotators are free to access any external or online sources of knowledge, and (2) Offline: annotators are prohibited from relying on external resources during annotation. Surprisingly, online annotations lag behind offline annotations in terms of accuracy, with online accuracy at 0.86 and offline accuracy at 0.88. ChatGPT is slightly behind human performance, reaching an accuracy of 0.77. As shown in Table 4, we compute the Krippendorff Alpha [16] to assess the internal and mutual agreement between the online and offline annotators and ChatGPT. The scores indicate that ChatGPT is more consistent with the human annotators who did not access an external knowledge source, which raises the intriguing possibility that ChatGPT shares characteristics and biases with human intuition or fast thinking; this could be further investigated from a psycholinguistic point of view.
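For reference, Krippendorff's Alpha for nominal labels can be computed via the standard coincidence-matrix formulation. The Python sketch below is a minimal illustration under the assumption of complete annotations (no missing labels); it is not the evaluation code used for Table 4.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of annotator labels per annotated example,
    e.g. [['yes', 'yes'], ['yes', 'no'], ...]. Assumes every unit is fully labelled."""
    coincidence = Counter()
    for labels in units:
        m = len(labels)
        for a, b in permutations(labels, 2):      # ordered pairs of labels within a unit
            coincidence[(a, b)] += 1.0 / (m - 1)
    totals = Counter()                            # marginal frequency of each label
    for (a, _b), w in coincidence.items():
        totals[a] += w
    n = sum(totals.values())
    d_obs = sum(w for (a, b), w in coincidence.items() if a != b)
    d_exp = sum(totals[a] * totals[b] for a in totals for b in totals if a != b) / (n - 1)
    return 1.0 if d_exp == 0 else 1.0 - d_obs / d_exp

# Toy check: two annotators agreeing on 3 of 4 items gives an alpha of roughly 0.53.
print(krippendorff_alpha_nominal([['a', 'a'], ['a', 'a'], ['b', 'b'], ['b', 'a']]))
```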
## 7 Conclusion

We propose POSQA: a Physical Object Size Question-Answering dataset with two types of size comparison questions to probe the ability of LLMs to understand the size of physical world objects. We design different knowledge-augmented prompt settings to investigate the effect of the context in the prompt. Our experiments show that LLMs still fail to demonstrate a robust understanding of the size of physical objects. The ability of LLMs to understand the size of physical objects in the real world remains a challenge for the future. The results also show that LLMs prefer to utilise the information in the given context rather than to use the knowledge stored in their internal weights. This also raises concerns about the robustness of LLMs' understanding and their ability to identify useful and correct contextual information in the prompt.

\begin{table}
\begin{tabular}{l l l} \hline \hline Krippendorff's Alpha & Human (Online) & Human (Offline) \\ \hline Human (Online) & 0.740 & 0.780 \\ Human (Offline) & 0.780 & 0.791 \\ ChatGPT & 0.644 & 0.687 \\ \hline \hline \end{tabular}
\end{table} Table 4: The internal and mutual consistency among human annotators in different settings and ChatGPT is measured by Krippendorff’s Alpha.

## Acknowledgement

We gratefully acknowledge that the work of the joint first author, CS, has been jointly supported by a donation from Toshiba Europe and the Engineering and Physical Sciences Research Council of UKRI (grant number 2752931).

## Limitations

Our datasets comprise a modest total of 92 objects. Consequently, rather than serving as comprehensive evaluation toolkits that encapsulate the breadth of LLM world models, they may be best suited for probing or auditing LLM performance in real-world understanding, preferably in tandem with broader benchmarks. Additionally, the dataset only captures rudimentary relationships between two objects--specifically, size comparisons. The incorporation of more intricate interactions and dynamics among multiple objects might provide a deeper insight into the LLM world model. Moreover, due to resource constraints, our experiments were limited to Flan-T5, GPT3-Davinci, and GPT3.5-Turbo.

## Ethics Statement

The purpose of this research project is to evaluate the world understanding capabilities of Large Language Models (LLMs) by synthesizing new datasets from existing knowledge bases, with the aim of advancing Natural Language Processing (NLP) research and improving LLM performance. We are committed to conducting this research with the highest ethical standards, ensuring privacy and ethical considerations. No personally identifiable information or sensitive data is collected, stored, or processed, as the datasets are solely derived from publicly available knowledge bases and are anonymized. We actively mitigate biases in the underlying data by carefully selecting and preprocessing the knowledge base. The datasets created will be used exclusively for evaluating LLM world understanding and will not be used for any commercial, discriminatory, or unethical purposes. We prioritize responsible data usage, securely storing the data and making it accessible only to authorized researchers. Transparency and reproducibility are key, as we document the dataset synthesis process for others to reproduce and validate the results. Our research adheres to ethical guidelines, institutional policies, industry standards, and relevant regulations.
Additionally, we foster collaboration and knowledge sharing within the research community, seeking to develop LLMs that better engage with the world and benefit society. Continuous evaluation and improvement are integral to our approach, and we welcome feedback from the research community and the wider public, as it contributes to the responsible development and application of LLMs, aligning with principles of fairness, privacy protection, transparency, and public benefit.
2307.13983
Pleijel nodal domain theorem in non-smooth setting
We prove the Pleijel theorem in non-collapsed RCD spaces, providing an asymptotic upper bound on the number of nodal domains of Laplacian eigenfunctions. As a consequence, we obtain that the Courant nodal domain theorem holds except at most for a finite number of eigenvalues. More in general, we show that the same result is valid for Neumann (resp. Dirichlet) eigenfunctions on uniform domains (resp. bounded open sets). This is new even in the Euclidean space, where the Pleijel theorem in the Neumann case was open under low boundary-regularity.
Nicolò De Ponti, Sara Farinelli, Ivan Yuri Violo
2023-07-26T06:46:46Z
http://arxiv.org/abs/2307.13983v2
# Pleijel nodal domain theorem in non-smooth setting ###### Abstract We prove the Pleijel theorem in non-collapsed RCD spaces, providing an asymptotic upper bound on the number of nodal domains of Laplacian eigenfunctions. As a consequence, we obtain that the Courant nodal domain theorem holds except at most for a finite number of eigenvalues. More in general, we show that the same result is valid for Neumann (resp. Dirichlet) eigenfunctions on uniform domains (resp. bounded open sets). This is new even in the Euclidean space, where the Pleijel theorem in the Neumann case was open under low boundary-regularity. ###### Contents * 1 Introduction * 2 Preliminaries * 3 Sobolev spaces and Neumann eigenfunctions in uniform domains * 4 From local to global isoperimetric inequality * 5 Almost Euclidean Faber-Krahn inequality for small volumes * 6 Proof of main theorem ## 1 Introduction Given a continuous eigenfunction \(u\) of a linear operator \(L\), there is a lot of interest in studying the properties of its nodal domains, the latter being defined as the connected components of the set \(\{u\neq 0\}\). In the usual setting \(L\) is an operator of differential nature, with discrete spectrum \(\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{k}\leq\ldots\), and a classical problem is to bound the number of nodal domains of \(u_{k}\) in terms of \(k\). Here \(u_{k}\) is an eigenfunction of eigenvalue \(\lambda_{k}\). There are two main results known in this direction. The first one, due to Courant [35] (see also [36]), provides a pointwise bound: _for every_\(k\) the number of nodal domains of \(u_{k}\) is less than or equal to \(k\). The second one is due to Pleijel [78] and provides an asymptotic upper bound, which implies that _for sufficiently large_\(k\) the number of nodal domains of \(u_{k}\) is strictly less than \(k\). The theorems of Courant and Pleijel have been deeply investigated in a different number of situations, see [10, 14, 16, 17, 25, 30, 37, 40, 45, 56, 66, 69, 79, 86] for a non-exhaustive list. The main goal of the present paper is to obtain an asymptotic upper bound on the number of nodal domains for Dirichlet and Neumann Laplacian eigenfunctions in the setting of possibly non-smooth metric measure spaces. In particular, we focus on the class of \(\mathrm{RCD}(K,N)\) spaces consisting of metric measure spaces satisfying a synthetic notion of Ricci curvature bounded from below by \(K\) and dimension bounded from above by \(N\) (see [3] and Section 2.5 for more details). Nevertheless, our analysis is of interest already in the Euclidean case since we prove the Pleijel theorem for Lipschitz and even more rough domains (see Corollary 1.3 below and the subsequent discussion). Indeed, the validity of a Pleijel result in the Neumann case with boundary regularity below \(C^{1,1}\) was an open question in the field (see the comments after Remark 1.2 in [56]). About the Courant nodal domain theorem, let us just briefly mention that its validity is open for \(\mathrm{RCD}\) spaces. This is mainly due to the fact that the _weak unique continuation property_ for the Laplacian in this setting is currently not known. We refer to [43, 44] for more on this problem, where also the failure of the _strong_ unique continuation property in the \(\mathrm{RCD}\) setting is shown. We remark that a worse, but still pointwise, upper bound on the number of nodal domains can be easily deduced from the variational characterization of the eigenvalues (see [66]). 
Before stating our main result, let us first introduce the setting and some notation, referring to Section 2 for the precise definitions. Our investigation deals with eigenfunctions of the Dirichlet or Neumann Laplacian \(\Delta_{\mathcal{D}}\), \(\Delta_{\mathcal{N}}\) in a bounded domain \(\Omega\subset\mathrm{X}\) in an \(\mathrm{RCD}(K,N)\) space \((\mathrm{X},\mathsf{d},\mathcal{H}^{N})\), where \(\mathcal{H}^{N}\) denotes the \(N\)-dimensional Hausdorff measure in \((\mathrm{X},\mathsf{d})\). As usual in this kind of problem, some additional assumptions are required to deal with the Neumann case and we will demand that \(\Omega\) is a _uniform domain_ (see Definition 3.1). We will clarify below why we need to restrict our attention to \(\mathrm{RCD}\) spaces endowed with the Hausdorff measure, called _non-collapsed_ in the literature, instead of considering the full \(\mathrm{RCD}\) class. Here we limit ourselves to mention that these assumptions are sufficient for the Dirichlet and Neumann Laplacian in \(\Omega\) to have discrete spectrum and for the eigenfunctions to be continuous. We list the Dirichlet and Neumann eigenvalues respectively by \[0\leq\lambda_{1}^{\mathcal{D}}(\Omega)\leq\lambda_{2}^{\mathcal{D}}(\Omega)\leq\cdots\leq\lambda_{k}^{\mathcal{D}}(\Omega)\leq\cdots\to+\infty,\] \[0=\lambda_{1}^{\mathcal{N}}(\Omega)\leq\lambda_{2}^{\mathcal{N}}(\Omega)\leq\ldots\leq\lambda_{k}^{\mathcal{N}}(\Omega)\leq\ldots\to+\infty,\] counted with multiplicity. Thanks to the continuity of a Laplacian eigenfunction \(u\) in our setting, it makes sense to define its _nodal domains_, which are the connected components of \(\Omega\setminus\{u=0\}\). For any \(k\in\mathbb{N}\) we can now define \[M_{\Omega}^{\mathcal{D}}(k):=\sup\left\{\#\text{ of nodal domains of }u:u\text{ Dirichlet eigenfunction of eigenvalue }\lambda_{k}^{\mathcal{D}}(\Omega)\right\}\] and analogously \(M_{\Omega}^{\mathcal{N}}\) in the Neumann case (see Def. 6.3 for more detailed definitions of \(M_{\Omega}^{\mathcal{D}},M_{\Omega}^{\mathcal{N}}\)). We finally denote by \(j_{\alpha}\) the first positive zero of the Bessel function of index \(\alpha>0\) and by \(\omega_{N}\) the volume of the unit ball in the \(N\)-dimensional Euclidean space. **Theorem 1.1** (Pleijel theorem in \(\mathrm{RCD}\) setting - Neumann and Dirichlet cases).: _Let \((\mathrm{X},\mathsf{d},\mathcal{H}^{N})\) be an \(\mathrm{RCD}(K,N)\) space, with \(K\in\mathbb{R}\) and \(N\geq 2\), and let \(\Omega\subset\mathrm{X}\) be an open and bounded set. Then_ \[\varlimsup_{k\to+\infty}\frac{M_{\Omega}^{\mathcal{D}}(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}^{2}j_{\frac{N-2}{2}}^{N}}<1\,. \tag{1.1}\] _If moreover \(\Omega\subset\mathrm{X}\) is a uniform domain, then_ \[\varlimsup_{k\to+\infty}\frac{M_{\Omega}^{\mathcal{N}}(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}^{2}j_{\frac{N-2}{2}}^{N}}<1\,. \tag{1.2}\] _In particular, for every \(k\in\mathbb{N}\) large enough every Dirichlet (resp. Neumann) eigenfunction of eigenvalue \(\lambda_{k}^{\mathcal{D}}(\Omega)\) (resp. \(\lambda_{k}^{\mathcal{N}}(\Omega)\)) in any bounded open set (resp. uniform domain) \(\Omega\) has less than \(k\) nodal domains._ There has recently been a growing interest in the study of eigenvalues and eigenfunctions of the Laplacian and their zero set in the setting of \(\mathrm{RCD}\) spaces (see [6, 7, 8, 18, 42, 43, 44, 61, 62, 92]). However, to the best of our knowledge, Theorem 1.1 is the first non-trivial result related to nodal domains.
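For orientation only, we note that the constant appearing in (1.1)-(1.2) can be evaluated explicitly in low dimensions: using \(j_{0}\approx 2.4048\) and \(j_{1/2}=\pi\), \[\frac{(2\pi)^{2}}{\omega_{2}^{2}\,j_{0}^{2}}=\frac{4}{j_{0}^{2}}\approx 0.6917,\qquad\frac{(2\pi)^{3}}{\omega_{3}^{2}\,j_{1/2}^{3}}=\frac{9}{2\pi^{2}}\approx 0.4559,\] which in the planar case recovers the classical constant of Pleijel [78].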
The class of non-collapsed \(\mathrm{RCD}(K,N)\) spaces includes non-collapsed Ricci limit spaces [32, 33] and finite dimensional Alexandrov spaces [77, 91], and our result is new also for these classes of spaces, where Courant's nodal domain theorem is not known. Additionally, thanks to the recent [83], we know that every \(\mathrm{RCD}(K,N)\) space contains a rich class of non-trivial uniform domains, hence it is possible to find many sets that satisfy the assumptions of our result also in the Neumann case. We recall that, in the somewhat easier Dirichlet case, Theorem 1.1 goes back to the work of Pleijel [78] in the Euclidean plane and to Berard and Meyer [17] for smooth Riemannian manifolds. When \(\mathrm{X}\) is bounded it is allowed to take \(\Omega=\mathrm{X}\) in Theorem 1.1. In this case Neumann eigenfunctions coincide with the usual Laplacian eigenfunctions on \(\mathrm{X}\) and we have the following. **Corollary 1.2**.: _Let \((\mathrm{X},\mathsf{d},\mathcal{H}^{N})\) be a compact \(\mathrm{RCD}(K,N)\) space, with \(K\in\mathbb{R}\) and \(N\geq 2\). Denote by \(\{\lambda_{k}\}_{k\in\mathbb{N}}\) the eigenvalues of the Laplacian in \(\mathrm{X}\) and by \(M(k)\) the maximal number of nodal domains of any Laplacian eigenfunction of eigenvalue \(\lambda_{k}.\) Then_ \[\varlimsup_{k\to+\infty}\frac{M(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}^{2}j_{\frac{N-2}{2}}^{N}}<1\,. \tag{1.3}\] The result of Theorem 1.1 in the case of Neumann eigenfunctions is interesting already when taking \((\mathrm{X},\mathsf{d},\mathcal{H}^{N})\) to be the \(N\)-dimensional Euclidean space. We extract this version below in a self-contained statement, for the convenience of the reader. **Corollary 1.3**.: _Let \(\Omega\subset\mathbb{R}^{N}\), \(N\geq 2,\) be a uniform domain. For every \(k\in\mathbb{N}\) denote by \(\lambda_{k}^{\mathcal{N}}(\Omega)\) the Neumann Laplacian eigenvalues in \(\Omega\) and by \(M_{\Omega}^{\mathcal{N}}(k)\) the maximal number of nodal domains of a Neumann eigenfunction of eigenvalue \(\lambda_{k}^{\mathcal{N}}(\Omega).\) Then_ \[\varlimsup_{k\to+\infty}\frac{M_{\Omega}^{\mathcal{N}}(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}^{2}j_{\frac{N-2}{2}}^{N}}<1. \tag{1.4}\] Recall that the class of uniform domains in the Euclidean space includes bounded _Lipschitz domains_, but also more irregular domains such as _quasi disks_ and in particular the interior of a _Koch Snowflake_ (see Section 3.1 for more details and references). A Pleijel theorem for Neumann eigenfunctions of Euclidean domains was first proved by Polterovich [79], who considered planar domains with piecewise real analytic boundary. The general \(N\)-dimensional case was obtained in [69] for domains \(\Omega\) with \(C^{1,1}\) boundary, where the regularity assumption is required in order to apply to eigenfunctions a reflection procedure across the boundary of \(\Omega\). The same limitation on the regularity of the boundary appears in [56] (in the context of more general Robin problems), where the problem of the validity of the Pleijel theorem under weaker boundary regularity is explicitly stated. Very recently, the techniques introduced in [69] were employed and refined in [15] in the planar case, where the authors were able to treat 2-dimensional domains with smooth boundaries except for a finite number of vertices. Our work introduces a different strategy and avoids any reflection argument, allowing us to handle more general domains without imposing any restriction on the dimension.
To explain the basic idea of our method we recall that a key step in the original proof of the Pleijel theorem is to exploit the fact that an eigenfunction \(u\) in \(\Omega\), when restricted to one of its nodal domains \(U\subset\Omega\), satisfies a zero-Dirichlet boundary condition in \(U\) itself, thus allowing to apply the Faber-Krahn inequality and get a lower bound for the volume of \(U\). While this is true for a Dirichlet-eigenfunction and for all its nodal domains, it is in general _false_ for a Neumann eigenfunction and a nodal domain that touches the boundary. The reflection procedure in [69] is needed precisely to handle this issue, but requires smoothness of the boundary. Instead our observation is that, by the very definition of nodal domain, an eigenfunction \(u\) (even in the Neumann case) has indeed zero-Dirichlet boundary conditions in \(U\) but _relative to the ambient domain_\(\Omega\), i.e. ignoring the portion of \(\partial U\) which is contained in \(\partial\Omega\). The key point is then to view \(\Omega\) as a metric space in its own right and prove that it is regular enough to satisfy a version of the Faber-Krahn inequality, which then allows to carry out the rest of the argument. This is where the uniform condition will enter into play ensuring the required analytical properties of \(\Omega\). Even if we use mostly techniques coming from the metric setting, we also develop some purely-Euclidean technical tools that we believe could be useful to show other Pleijel-type results in \(\mathbb{R}^{N}\) under low boundary-regularity. In particular, we prove a Faber-Krahn-type inequality and a Green's formula for eigenfunctions of uniform domains (see Corollary 5.2 and Corollary 6.2 respectively). Both the results were previously available only assuming \(C^{1,1}\)-boundary. We now comment further on the assumptions and the proof of Theorem 1.1. The uniformity hypothesis on the domain guarantees the discreteness of the spectrum of the Neumann Laplacian, a fact even needed to state the theorem. Additionally we will make a crucial use of analytical properties of uniform domains in metric measure spaces, such as Sobolev extension properties, the validity of a Poincare inequality and a Sobolev embedding (see Section 3 for more details). The non-collapsed assumption is more technical in nature, and we leave for future investigations the general case of possibly collapsed \(\mathrm{RCD}(K,N)\) spaces. Let us notice that in collapsed \(\mathrm{RCD}\) spaces the spectrum of the Laplacian can produce a singular and in some sense unexpected behaviour in the asymptotic regime (see the recent [38]), and thus this generalization seems non-trivial as we are going to further clarify in the next lines commenting the proof. The main scheme of the proof of Theorem 1.1 is similar to the one usually employed in the smooth setting, e.g. in [17, 45, 56, 69, 78, 86]. The two primary ingredients are the Weyl law and an almost-Euclidean Faber-Krahn inequality for small volumes. The Weyl law has already been investigated in the setting of \(\mathrm{RCD}(K,N)\) spaces (see [8] and [92]). 
In the non-collapsed case it takes the usual formulation \[\lim_{\lambda\to+\infty}\frac{N(\lambda)}{\lambda^{N/2}}=\frac{\omega_{N}}{(2\pi)^{N}}\mathcal{H}^{N}(\Omega)\,,\] where \(N(\lambda):=\#\{k\in\mathbb{N}:\lambda_{k}^{\mathcal{D}}(\Omega)\leq\lambda\}\) is the eigenvalue counting function and \(\{\lambda_{k}^{\mathcal{D}}(\Omega)\}_{k\in\mathbb{N}}\) are the Dirichlet eigenvalues of the domain \(\Omega\) (see Definition 2.5). We stress that the Weyl law in the Dirichlet case is sufficient for our purposes, even if in our main statement we consider both Dirichlet and Neumann eigenfunctions. This is thanks to the elementary inequality \(\lambda_{k}^{\mathcal{N}}(\Omega)\leq\lambda_{k}^{\mathcal{D}}(\Omega)\) between Neumann and Dirichlet eigenvalues (see Lemma 2.8). We remark that suitable forms of the Weyl law on the whole space have been studied under slightly more general assumptions than non-collapsing, but the situation is more intricate and there exist compact \(\mathrm{RCD}(K,N)\) spaces for which \(N(\lambda)\) is not asymptotic to \(\lambda^{\beta}\) for any \(\beta\geq 0\). We refer to [8, 38] for the details. Concerning the almost-Euclidean Faber-Krahn inequality, it roughly states that the first Dirichlet eigenvalue of an open set \(U\subset\mathrm{X}\) of sufficiently small volume is bounded below by the first Dirichlet eigenvalue of the Euclidean ball having the same volume, up to a small error. This will be obtained starting from an almost-Euclidean isoperimetric inequality for small volumes (similar to the one obtained in [17] in the smooth setting) and rearrangement methods. In contrast with the proof in the smooth case, our situation requires us to deal with a set \(C\) of possibly "bad" points, and to work with sets \(U\) that stay sufficiently far from \(C\). We refer to Theorem 5.3 and Theorem 3.2 for the precise statements, and we suggest comparing them with [17, Lemme 16, 15]. For both these results the non-collapsed assumption also plays a key role to ensure a more regular infinitesimal behaviour of the ambient space.

## Acknowledgements

The authors thank Asma Hassannezhad for helpful discussions. The third author was supported by the Academy of Finland project Incidences on Fractals, Grant No. 321896.

## 2 Preliminaries

### Calculus in metric measure spaces

The triple \((\mathrm{X},\mathsf{d},\mathfrak{m})\) will denote a metric measure space, where \((\mathrm{X},\mathsf{d})\) is a complete and separable metric space and \(\mathfrak{m}\) is a non-negative Borel measure, finite on bounded sets. We will also always assume \(\mathrm{supp}(\mathfrak{m})=\mathrm{X}\). For every set \(A\subset\mathrm{X}\) we will denote by \(\overline{A}\) its topological closure, by \(A^{c}\coloneqq\mathrm{X}\backslash A\) its complement and by \(\partial A\) its topological boundary. We denote by \(B_{r}(x):=\{y\in\mathrm{X}:\mathsf{d}(x,y)<r\}\) the ball of radius \(r\) and center \(x\). The same set is also denoted by \(B_{r}^{X}(x)\) whenever we want to emphasize the role of the space \(\mathrm{X}\). By \(\mathsf{d}(A,B):=\inf\{\mathsf{d}(x,y):x\in A,y\in B\}\) we denote the distance between two sets \(A,B\subset\mathrm{X}\), so that \(\mathsf{d}(A,\emptyset)=+\infty\). The open \(\varepsilon\)-enlargement of a set \(A\subset\mathrm{X}\) is denoted by \(A^{\varepsilon}:=\{x\in\mathrm{X}:\mathsf{d}(A,x)<\varepsilon\}\). Given a set \(C\subset\mathrm{X}\), we denote by \(\mathsf{d}\big{|}_{C}:=\mathsf{d}\big{|}_{C\times C}\) the restriction of the distance to the set \(C\).
We will say that \((\mathrm{X},\mathsf{d})\) is proper if closed and bounded subsets of \(\mathrm{X}\) are compact. Given a metric space \((\mathrm{X},\mathsf{d})\) and a rectifiable curve \(\gamma:[a,b]\to\mathrm{X}\), we denote by \(l(\gamma)\) its length (see e.g. [57, Chapter 5.1]). We say that \(\gamma\) joins \(x\in\mathrm{X}\) and \(y\in\mathrm{X}\) if \(\gamma(a)=x\) and \(\gamma(b)=y\). **Definition 2.1** (Nodal domain).: Let \((\mathrm{X},\mathsf{d})\) be a metric space, \(A\subset\mathrm{X}\) be any subset and \(f:A\to\mathbb{R}\) be a continuous function. The _nodal domains_ of \(f\) (in \(A\)) are the connected components of \(A\setminus\{x\in A~{}:~{}f(x)=0\}\). In the next result we recall some elementary properties of nodal sets. **Lemma 2.2**.: _Let \((\mathrm{X},\mathsf{d})\) be a metric space, \(A\subset\mathrm{X}\) be any subset and \(f:A\to\mathbb{R}\) be a continuous function. Let \(U\subset A\) be a nodal domain of \(f.\) Then either \(f>0\) or \(f<0\) in \(U.\) Moreover if \(A\) is open and \((\mathrm{X},\mathsf{d})\) is locally connected then \(U\) is also open._ Proof.: The set \(U\) is connected by definition, hence \(f(U)\subset\mathbb{R}\) is also connected and does not contain zero. It follows that \(f(U)\subset(0,\infty)\) or \(f(U)\subset(-\infty,0).\) For a proof that \((\mathrm{X},\mathsf{d})\) locally connected implies that \(U\) is open whenever \(A\) is open see e.g. [75, Theorem 25.3]. For every open set \(\Omega\subset\mathrm{X}\) we denote by \(\mathsf{LIP}(\Omega)\), \(\mathsf{LIP}_{\mathsf{loc}}(\Omega)\) and \(\mathsf{LIP}_{c}(\Omega)\) respectively the space of Lipschitz functions, locally Lipschitz functions and Lipschitz functions with compact support in \(\Omega.\) We also denote by \(\mathsf{LIP}_{bs}(\Omega)\) the subset of \(\mathsf{LIP}(\mathrm{X})\) of functions having support bounded and contained in \(\Omega.\) The slope \(\mathrm{lip}(f)(x)\) of a locally Lipschitz function \(f\in\mathsf{LIP}_{\mathsf{loc}}(\Omega)\) at a point \(x\in\Omega\) is defined as \[\mathrm{lip}(f)(x):=\varlimsup_{y\to x}\frac{|f(y)-f(x)|}{\mathsf{d}(y,x)},\] taken to be \(0\) when \(x\) is isolated. The slope satisfies the following Leibniz rule: \(\mathrm{lip}(fg)\leq f\mathrm{lip}(g)+g\mathrm{lip}(f),\) for every \(f,g\in\mathsf{LIP}_{\mathsf{loc}}(\Omega).\) Given \(p\in[1,\infty]\), we use the notation \(L^{p}(\mathrm{X},\mathfrak{m})\) (resp. \(L^{p}_{\mathsf{loc}}(\mathrm{X},\mathfrak{m})\)) for the space of Lebesgue \(p\)-integrable (resp. \(p\)-locally integrable) real functions on \(X\) endowed with the Borel \(\sigma\)-algebra. For brevity, the same function space is also denoted by \(L^{p}(\mathfrak{m})\). When \(\Omega\subset\mathrm{X}\) is an open set, we set \(L^{p}(\Omega):=L^{p}(\Omega,\mathfrak{m}_{|_{\Omega}})\) where \(\mathfrak{m}_{|_{\Omega}}\) is the restriction of the measure \(\mathfrak{m}\) to \(\Omega\). For a function \(u\in L^{p}(\Omega)\) we define its essential support \(\mathrm{supp}(u)\) as the smallest closed set \(C\) such that \(u=0\)\(\mathfrak{m}\)-a.e. 
in \(\Omega\setminus C.\) The Cheeger energy \(\mathsf{Ch}\colon L^{2}(\mathfrak{m})\to[0,\infty]\) is defined as the convex and lower semicontinuous functional \[\mathsf{Ch}(f):=\inf\Big{\{}\varliminf_{n\to\infty}\int_{\mathrm{X}}\mathrm{lip }^{2}(f_{n})\,\mathrm{d}\mathfrak{m}\colon(f_{n})\subset L^{2}(\mathfrak{m}) \cap\mathsf{LIP}_{\mathsf{loc}}(\mathrm{X}),\varliminf_{n\to\infty}\|f-f_{n} \|_{L^{2}(\mathfrak{m})}=0\Big{\}}.\] The Sobolev space \(W^{1,2}(\mathrm{X},\mathsf{d},\mathfrak{m})\) (or \(W^{1,2}(\mathrm{X})\) for short) is then defined as \(W^{1,2}(\mathrm{X},\mathsf{d},\mathfrak{m}):=\{\mathsf{Ch}<\infty\}\) equipped with the norm \(\|f\|_{W^{1,2}(\mathrm{X})}^{2}:=\|f\|_{L^{2}(\mathfrak{m})}^{2}+\mathsf{Ch}(f),\) which makes it a Banach space. This approach to the definition of Sobolev space was introduced in [5], where it is also shown to be equivalent to the previous definitions given in [31, 85]. For every \(f\in W^{1,2}(\mathrm{X})\) there exists a notion of modulus of the gradient called _minimal weak upper gradient_, minimal w.u.g. for short, denoted by \(|Df|\in L^{2}(\mathfrak{m})\) and satisfying \[\mathsf{Ch}(f)=\int_{\mathrm{X}}|Df|^{2}\mathrm{d}\mathfrak{m}\,.\] For every \(f\in\mathsf{LIP}_{\mathsf{loc}}(\mathrm{X})\) we have \(|Df|\leq\mathrm{lip}(f)\)\(\mathfrak{m}\)-a.e.. Moreover, the following calculus rules are satisfied (see e.g. [51]): for every \(f,g\in\mathit{W}^{1,2}(\mathrm{X})\) it holds _locality:_\(|Df|=|Dg|\)\(\mathfrak{m}\)-a.e. in \(\{f=g\}\), _chain rule:_ for every \(\varphi\in\mathsf{LIP}(\mathbb{R})\) with \(\varphi(0)=0\), \(\varphi(f)\in\mathit{W}^{1,2}(\mathrm{X})\) and \(|D\varphi(f)|=|\varphi^{\prime}(f)||Df|\), _Leibniz rule:_ for every \(\eta\in\mathsf{LIP}\cap L^{\infty}(\mathrm{X})\), \(\eta f\in\mathit{W}^{1,2}(\mathrm{X})\) and \(|D(\eta f)|\leq|\eta||Df|+|D\eta||f|\). Given \(\Omega\subset\mathrm{X}\) open we also define the following local Sobolev spaces \[W^{1,2}_{0}(\Omega) \coloneqq\overline{\mathsf{LIP}_{bs}(\Omega)}^{W^{1,2}(\mathrm{ X})},\] \[W^{1,2}(\Omega) \coloneqq\{f\in L^{2}(\Omega)\ :\ f\eta\in\mathit{W}^{1,2}( \mathrm{X}),\,\forall\eta\in\mathsf{LIP}_{bs}(\Omega),\,|Df|\in L^{2}(\Omega)\},\] where in the definition of \(\mathit{W}^{1,2}(\Omega)\) the minimal w.u.g. \(|Df|\in L^{2}(\Omega)\) is defined by \[|Df|\coloneqq|D(f\eta_{n})|,\quad\mathfrak{m}\text{-a.e. in }\{\eta_{n}=1\}, \tag{2.2}\] with \(\eta_{n}\in\mathsf{LIP}_{bs}(\Omega)\) is any sequence satisfying \(\{\eta_{n}=1\}\uparrow\Omega\) (there is no dependence on the chosen sequence, by the locality property of the minimal weak upper gradient). We endow \(\mathit{W}^{1,2}(\Omega)\) with the norm given by \[\|f\|_{\mathit{W}^{1,2}(\Omega)}^{2}\coloneqq\|f\|_{L^{2}(\Omega)}^{2}+\||Df |\|_{L^{2}(\Omega)}^{2},\] which makes it a Banach space. Observe that by the Leibniz rule we have that for every \(f\in\mathit{W}^{1,2}(\mathrm{X})\) it holds that \(f_{\big{|}\Omega}\in\mathit{W}^{1,2}(\Omega)\) and also \(|Df|\big{|}_{\Omega}=|Df|_{\Omega}\)\(\mathfrak{m}\)-a.e. in \(\Omega\) (by the locality). Moreover, for every \(f\in\mathit{W}^{1,2}_{0}(\Omega)\) we have \(f=0\)\(\mathfrak{m}\)-a.e. in \(X\setminus\Omega\) and thus \(\|f_{\big{|}\Omega}\|_{\mathit{W}^{1,2}(\Omega)}=\|f\|_{\mathit{W}^{1,2}( \mathrm{X})}\), which shows that the map \[T:\mathit{W}^{1,2}_{0}(\Omega)\to\mathit{W}^{1,2}(\Omega),\qquad T(f):=f_{ \big{|}\Omega} \tag{2.3}\] is a linear isometry. 
For these reasons, with a little abuse of notation, sometimes we identify \(\mathit{W}^{1,2}_{0}(\Omega)\) with \(T(\mathit{W}^{1,2}_{0}(\Omega))\subset\mathit{W}^{1,2}(\Omega)\) and think to \(f\in\mathit{W}^{1,2}_{0}(\Omega)\) as an element of \(L^{2}(\Omega)\). If we choose \(\Omega=\mathrm{X}\), then \(\mathit{W}^{1,2}(\mathrm{X})=\mathit{W}^{1,2}(\mathrm{X},\mathsf{d},\mathfrak{ m})\) with the same norm and minimal w.u.g., so the notation is consistent with the one given above. **Remark 2.3**.: It can be shown (see e.g. [6, Remark 2.15]) that \(\mathit{W}^{1,2}(\Omega)\) coincides, up to \(\mathfrak{m}\)-a.e. equivalence of functions, with the Newtonian Sobolev space \(N^{1,2}(\Omega,\mathsf{d},\mathfrak{m}_{|\Omega})\) defined in [85, 31] (see also [19]). The norms of the two spaces coincide as well thanks to the equivalence proved in [5] between the various notions of minimal weak upper gradients. \(\blacksquare\) Following [48] we say that \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is _infinitesimally Hilbertian_ if \(\mathit{W}^{1,2}(\mathrm{X})\) is a Hilbert space or equivalently if the Cheeger energy satisfies the parallelogram identity: \[\mathrm{Ch}(f+g)+\mathrm{Ch}(f-g)=2\mathrm{Ch}(f)+2\mathrm{Ch}(g),\quad\forall f,g\in\mathit{W}^{1,2}(\mathrm{X}). \tag{2.4}\] If \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is infinitesimally Hilbertian, then \(\mathit{W}^{1,2}(\Omega)\) is a Hilbert space as well for every \(\Omega\subset\mathrm{X}\) open (see e.g. [27, Remark A.3]). Moreover, we can give a notion of _scalar product between gradients_ of functions \(f,g\in\mathit{W}^{1,2}(\Omega)\) by setting \[L^{1}(\Omega)\ni\nabla f\cdot\nabla g\coloneqq\frac{1}{2}\left(|D(f+g)|^{2}-| Df|^{2}-|Dg|^{2}\right), \tag{2.5}\] which is bilinear and satisfies \[|\nabla f\cdot\nabla g|\leq|Df||Dg|,\quad\mathfrak{m}\text{-a.e.,} \quad\forall\,f,g\in\mathit{W}^{1,2}(\Omega), \tag{2.6}\] \[|\nabla f\cdot\nabla f|=|Df|^{2},\quad\mathfrak{m}\text{-a.e.,} \quad\forall\,f\in\mathit{W}^{1,2}(\Omega).\] Under the infinitesimally Hilbertian assumption we can define a notion of Laplacian via integration by parts. **Definition 2.4** (Neumann Laplacian).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian metric measure space and \(\Omega\subset\mathrm{X}\) be open. We say that \(f\in\,W^{1,2}(\Omega)\) belongs to the domain of the Neumann Laplacian, and we write \(f\in\mathsf{D}(\Delta_{\mathcal{N}},\Omega)\), if there exists \(h\in L^{2}(\Omega)\) such that \[\int_{\Omega}hg\,\mathrm{d}\mathfrak{m}=-\int_{\Omega}\nabla f\cdot\nabla g\, \mathrm{d}\mathfrak{m}\,,\quad\forall\,g\in\,W^{1,2}(\Omega). \tag{2.7}\] If \(f\in\mathsf{D}(\Delta_{\mathcal{N}},\Omega)\) then the function \(h\) is unique and is denoted by \(\Delta_{\mathcal{N}}f\). **Definition 2.5** (Dirichlet Laplacian).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian metric measure space and \(\Omega\subset\mathrm{X}\) be open. Then \(f\in\,W^{1,2}_{0}(\Omega)\) belongs to the domain of the Dirichlet Laplacian, and we write \(f\in\mathsf{D}(\Delta_{\mathcal{D}},\Omega)\), if there exists \(h\in L^{2}(\Omega)\) such that \[\int_{\Omega}hg\,\mathrm{d}\mathfrak{m}=-\int_{\Omega}\nabla f\cdot\nabla g\, \mathrm{d}\mathfrak{m}\,,\quad\forall\,g\in\,W^{1,2}_{0}(\Omega). \tag{2.8}\] If \(f\in\mathsf{D}(\Delta_{\mathcal{D}},\Omega)\) then the function \(h\) is unique and is denoted by \(\Delta_{\mathcal{D}}f\). 
Since under the infinitesimally Hilbertian assumption Lipschitz and bounded functions are dense in \(\,W^{1,2}(\mathrm{X})\) (see [5]) we have \(\,W^{1,2}_{0}(\mathrm{X})=\,W^{1,2}(\mathrm{X})\) and so the Dirichlet and Neumann Laplacian coincide for \(\Omega=\mathrm{X}\). In this situation we simply write \(\Delta=\Delta_{\mathcal{N}}=\Delta_{\mathcal{D}}\) and call it simply Laplacian operator and write \(f\in\mathsf{D}(\Delta)\) in place of \(f\in\mathsf{D}(\Delta_{\mathcal{N}},\mathrm{X})\) or \(f\in\mathsf{D}(\Delta_{\mathcal{D}},\mathrm{X})\). **Definition 2.6** (Eigenfunctions).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian metric measure space and \(\Omega\subset\mathrm{X}\) be open. We say that a non-null \(f\in\mathsf{D}(\Delta_{\mathcal{D}},\Omega)\) (resp. \(\mathsf{D}(\Delta_{\mathcal{N}},\Omega)\)) is a Dirichlet (resp. Neumann) _eigenfunction of the Laplacian_ in \(\Omega\) of _eigenvalue_\(\lambda\in\mathbb{R}\) if \(\Delta_{\mathcal{D}}f=-\lambda f\) (resp. \(\Delta_{\mathcal{N}}f=-\lambda f\)). In the case \(\Omega=\mathrm{X}\) we simply write that \(f\) is an eigenfunction of the Laplacian of eigenvalue \(\lambda\). **Remark 2.7** (Compatibility with Euclidean Laplacian).: If \((\mathrm{X},\mathsf{d},\mathfrak{m})=(\mathbb{R}^{N},|\cdot|,\mathscr{L}^{N})\) and \(\Omega\subset\mathbb{R}^{N}\) is open, the spaces \(W^{1,2}(\Omega)\) and \(\,W^{1,2}_{0}(\Omega)\) coincide with the usual ones, also with the same norms, as shown in [85, Theorem 4.5] (see also [19, Theorem A.2 and Corollary A.4] or [51, Section 2.1.5]). In particular, by polarization, the right-hand side of both (2.7) and (2.8) coincides with the integral of the scalar product between weak gradients in the classical sense. This shows that the definition of eigenfunction (and eigenvalue) of the Neumann or Dirichlet Laplacian in \(\Omega\) given above coincides with the usual one in the Euclidean case. \(\blacksquare\) For later use we observe that, whenever Lipschitz functions are dense in \(\,W^{1,2}(\mathrm{X})\), for every bounded and open set \(\Omega\subset\mathrm{X}\) it holds that \[\{f\in\,W^{1,2}(\mathrm{X})\ :\ \mathsf{d}(\mathrm{supp}(f),\mathrm{X} \setminus\Omega)>0\}\subset\,W^{1,2}_{0}(\Omega). \tag{2.9}\] Indeed there exist \(\eta\in\mathsf{LIP}(\mathrm{X})\) such that \(\eta=1\) in \(\mathrm{supp}(f)\) and \(\mathrm{supp}(\eta)\subset\Omega\) and a sequence \(f_{n}\in\mathsf{LIP}(\mathrm{X})\) with \(f_{n}\to f\) in \(\,W^{1,2}(\mathrm{X})\), by density. Then \(\eta f_{n}\in\mathsf{LIP}_{bs}(\Omega)\) and \(\eta f_{n}\to f\) in \(\,W^{1,2}(\mathrm{X})\), which shows that \(f\in\,W^{1,2}_{0}(\Omega)\). We state in the next lemma an inequality between Neumann and Dirichlet eigenvalues that will play a key role in the sequel. Note that in the statement by \(\,W^{1,2}_{0}(\Omega)\hookrightarrow L^{2}(\Omega)\) we mean, more precisely, that \(T(\,W^{1,2}_{0}(\Omega))\hookrightarrow L^{2}(\Omega)\) where \(T\) is defined in (2.3). **Lemma 2.8**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian metric measure space and let \(\Omega\subset\mathrm{X}\) be open. Let us suppose that \(\,W^{1,2}_{0}(\Omega)\hookrightarrow L^{2}(\Omega)\) with compact inclusion. Then \(-\Delta_{\mathcal{D}}\) has discrete spectrum, i.e. 
the eigenvalues form a diverging sequence (counted with multiplicity) that we denote by_ \[0\leq\lambda^{\mathcal{D}}_{1}(\Omega)\leq\lambda^{\mathcal{D}}_{2}(\Omega)\leq \ldots\lambda^{\mathcal{D}}_{k}(\Omega)\leq\cdots\to+\infty\,. \tag{2.10}\] _If moreover \(W^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) with compact inclusion, then also \(-\Delta_{\mathcal{N}}\) has discrete spectrum denoted by_ \[0=\lambda_{1}^{\mathcal{N}}(\Omega)\leq\lambda_{2}^{\mathcal{N}}(\Omega)\leq \ldots\lambda_{k}^{\mathcal{N}}(\Omega)\leq\cdots\to+\infty\,, \tag{2.11}\] _and it holds_ \[\lambda_{k}^{\mathcal{N}}(\Omega)\leq\lambda_{k}^{\mathcal{D}}(\Omega)\,, \quad\forall k\in\mathbb{N}. \tag{2.12}\] Proof.: Let us introduce the local Cheeger energies \[\mathsf{Ch}_{\mathcal{D}}^{\Omega}:L^{2}(\Omega)\to[0,+\infty],\qquad\mathsf{ Ch}_{\mathcal{D}}^{\Omega}(f):=\begin{cases}\int_{\Omega}|Df|^{2}\mathrm{d}\mathsf{m}& \text{if }f=g\big{|}_{\Omega}\,\text{for some }g\in W^{1,2}_{0}(\Omega)\,,\\ +\infty&\text{otherwise},\end{cases}\] \[\mathsf{Ch}_{\mathcal{N}}^{\Omega}:L^{2}(\Omega)\to[0,+\infty],\qquad\mathsf{ Ch}_{\mathcal{N}}^{\Omega}(f):=\begin{cases}\int_{\Omega}|Df|^{2}\mathrm{d}\mathsf{m}& \text{if }f\in\,\mathit{W}^{1,2}(\Omega)\,,\\ +\infty&\text{otherwise},\end{cases}\] and notice that they define two Dirichlet forms, i.e. two densely defined, Markovian, closed, quadratic forms [24, 46]. To check this, it is sufficient to recall the calculus rules given in (2.1) and, for the \(L^{2}\)-lower semicontinuity, the equivalent definition through relaxation (see [27] for all the details). We denote by \(L_{\mathcal{D}}\) (resp. \(L_{\mathcal{N}}\)) the infinitesimal generator of \(\mathsf{Ch}_{\mathcal{D}}^{\Omega}\) (resp. \(\mathsf{Ch}_{\mathcal{N}}^{\Omega}\)) with its associated domain \(\mathsf{D}(L_{\mathcal{D}})\) (resp. \(\mathsf{D}(L_{\mathcal{N}})\)). Notice that, by the very definition, \(\mathsf{D}(\Delta_{\mathcal{N}},\Omega)=\mathsf{D}(L_{\mathcal{N}})\) with \(\Delta_{\mathcal{N}}=L_{\mathcal{N}}\). Regarding the Dirichlet Laplacian, we have \(f\in\mathsf{D}(\Delta_{\mathcal{D}},\Omega)\) if and only if \(f\big{|}_{\Omega}\in\mathsf{D}(L_{\mathcal{D}})\) with \(\Delta_{\mathcal{D}}f=L_{\mathcal{D}}(f\big{|}_{\Omega})\). In particular, \(\lambda\) is an eigenvalue of \(-\Delta_{\mathcal{D}}\) (resp. \(-\Delta_{\mathcal{N}}\)) if and only if it is and eigenvalue of \(-L_{\mathcal{D}}\) (resp. \(-L_{\mathcal{N}}\)). From the classical theory of Dirichlet forms [24, 46] we know that \(-L_{\mathcal{D}}\) and \(-L_{\mathcal{N}}\) are non-negative, densely defined, linear, self-adjoint operators on \(L^{2}(\Omega)\). Under these assumptions, it is well known (see e.g. [39]) that the compactness of the embedding of \(\mathit{W}^{1,2}_{0}(\Omega)\) (resp. \(W^{1,2}(\Omega)\)) in \(L^{2}(\Omega)\) implies the discreteness of the spectrum of \(-L_{\mathcal{D}}\) (resp. \(-L_{\mathcal{N}}\)) and thus of \(-\Delta_{\mathcal{D}}\) (resp. \(-\Delta_{\mathcal{N}}\)). Since \(T(\mathit{W}^{1,2}_{0}(\Omega))\subset\mathit{W}^{1,2}(\Omega)\) as Hilbert spaces, we also know that whenever \(\mathit{W}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) with compact inclusion both the spectra are discrete. We also have at disposal the variational characterization of the eigenvalues, see e.g. [39, Theorems 4.5.1, 4.5.3]. 
More precisely, defined \[\begin{split}\lambda^{\mathcal{N}}(\Omega)[M]&:= \sup\{\mathsf{Ch}_{\mathcal{N}}^{\Omega}(f)\ :\ f\in M,\|f\|_{L^{2}(\Omega)}=1\},\\ \lambda^{\mathcal{D}}(\Omega)[M]&:=\sup\{\mathsf{ Ch}_{\mathcal{D}}^{\Omega}(f)\ :\ f\in M,\|f\|_{L^{2}(\Omega)}=1\},\end{split} \tag{2.13}\] we know that for every \(k\in\mathbb{N}\) \[\begin{split}\lambda_{k}^{\mathcal{N}}(\Omega)=\inf\{\lambda^{ \mathcal{N}}(\Omega)[M]\ :\ M\subset\mathit{W}^{1,2}(\Omega),\ \dim(M)=k\},\\ \lambda_{k}^{\mathcal{D}}(\Omega)=\inf\{\lambda^{\mathcal{D}}( \Omega)[M]\ :\ M\subset T(\mathit{W}^{1,2}_{0}(\Omega)),\ \dim(M)=k\},\end{split} \tag{2.14}\] The inequality (2.12) thus follows immediately from (2.14) since the infimum is taken on a larger set and \(\mathsf{Ch}_{\mathcal{N}}^{\Omega}(f)=\mathsf{Ch}_{\mathcal{D}}^{\Omega}(f)\) for every \(f\in T(\mathit{W}^{1,2}_{0}(\Omega))\). We will use in the sequel the notation introduced in the previous lemma, i.e. whenever \(-\Delta_{\mathcal{D}}\) (resp. \(-\Delta_{\mathcal{N}}\)) has discrete spectrum in \(\Omega\) we will denote by \(\{\lambda_{k}^{\mathcal{D}}(\Omega)\}_{k\in\mathbb{N}}\) (resp. \(\{\lambda_{k}^{\mathcal{N}}(\Omega)\}_{k\in\mathbb{N}}\)) the sequence of its eigenvalues. In the case \(\Omega=\mathrm{X}\), assuming the discreteness of the spectrum of \(-\Delta\), we will simply write \(\lambda_{k}\) in place of \(\lambda_{k}^{\mathcal{N}}(\mathrm{X})\). For an arbitrary m.m.s. \((\mathrm{X},\mathsf{d},\mathsf{m})\) and any \(\Omega\subset\mathrm{X}\) open subset we also introduce \[\lambda_{1}(\Omega)\coloneqq\inf\left\{\frac{\int|Du|^{2}\mathrm{d}\mathsf{m}}{ \int u^{2}\mathrm{d}\mathsf{m}}:u\in\mathsf{LIP}_{bs}(\Omega),\ u\not\equiv 0\right\} \tag{2.15}\] and we call \(\lambda_{1}(\Omega)\) the _first eigenvalue of the Laplacian on \(\Omega\) with zero Dirichlet boundary conditions_. Recalling the definition definition of \(\mathit{W}^{1,2}_{0}(\Omega)\) we have the following characterization of \(\lambda_{1}(\Omega)\): \[\lambda_{1}(\Omega)=\inf\left\{\frac{\int|Du|^{2}\mathrm{d}\mathsf{m}}{\int u^{ 2}\mathrm{d}\mathsf{m}}:u\in\mathit{W}^{1,2}_{0}(\Omega),\ u\not\equiv 0\right\}. \tag{2.16}\] Note that differently from \(\lambda_{i}^{\mathcal{D}}(\Omega)\), which we defined only when the inclusion \(\,W_{0}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) is compact, \(\lambda_{1}(\Omega)\) is always defined. Nevertheless, even if not needed, we stress that whenever \(\lambda_{1}^{\mathcal{D}}(\Omega)\) exists we do have \[\lambda_{1}(\Omega)=\lambda_{1}^{\mathcal{D}}(\Omega),\] as follows by (2.16) and (2.14). ### Sets of finite perimeter Let \(f\in L^{1}_{\mathsf{loc}}(\mathrm{X},\mathfrak{m})\) and let \(U\subset\mathrm{X}\) be open. Following [73, 4] we define \[|\mathbf{D}f|(U):=\inf\left\{\varliminf_{n\to\infty}\int_{U}\mathrm{lip}(f_{n} )\,\mathrm{d}\mathfrak{m}\ :\ f_{n}\in\mathsf{LIP}_{\mathsf{loc}}(U),\ f_{n}\to f\ \mathrm{in}\ L^{1}_{\mathsf{loc}}(U,\mathfrak{m})\right\}\,, \tag{2.17}\] and we say that \(f\) is of locally bounded variation if \(|\mathbf{D}f|(U)<+\infty\) for every \(U\) open and bounded. We also set \[|\mathbf{D}f|(A)\coloneqq\inf\{|\mathbf{D}f|(U)\ :\ U\subset\mathrm{X}\ \mathrm{open},\,A\subset U\},\quad \forall\,A\subset\mathrm{X}\ \mathrm{Borel}\] (note that this coincides with (2.17) if \(A\) is open). 
For every Borel set \(E\subset\mathrm{X}\) and \(A\subset\mathrm{X}\) Borel we define \(\mathrm{Per}(E,A)\coloneqq|\mathbf{D}\chi_{E}|(A)<+\infty\), where \(\chi_{E}:X\to\{0,1\}\) denotes the characteristic function of \(E\). We say that \(E\) is of finite perimeter if \(\mathrm{Per}(E)\coloneqq\mathrm{Per}(E,\mathrm{X})<+\infty\). When \(f\) is of locally bounded variation (respectively, \(E\) is a set of finite perimeter), the map \(A\mapsto|\mathbf{D}f|(A)\) (respectively, \(A\mapsto\mathrm{Per}(E,A)\)) defines a Borel measure (see [73, 4]). Every \(f\in\mathsf{LIP}(\mathrm{X})\) is of locally bounded variation and \(|\mathbf{D}f|\leq\mathrm{lip}(f)\mathfrak{m}\) (see [4, Remark 5.1]). From the definitions it immediately follows that \(E\) is of finite perimeter if and only if \(E^{c}\) is of finite perimeter, in which case \(\mathrm{Per}(E,\cdot)=\mathrm{Per}(E^{c},\cdot)\) holds. In the sequel we will take advantage of the following coarea-type inequality. **Proposition 2.9**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space and fix \(x\in\mathrm{X}.\) Then for a.e. \(r>0\) the ball \(B_{r}(x)\) has finite perimeter and for every Borel set \(A\subset\mathrm{X}\) it holds_ \[\int_{0}^{R}\mathrm{Per}(B_{r}(x),A)\,\mathrm{d}r\leq\mathfrak{m}(B_{R}(x)\cap A ),\quad\forall\,R>0. \tag{2.18}\] Proof.: Since the function \(\mathsf{d}_{x}(\cdot)\coloneqq\mathsf{d}(x,\cdot)\) is \(1\)-Lipschitz, it is of locally bounded variation and \(|\mathbf{D}\mathsf{d}_{x}|\leq\mathrm{lip}(\mathsf{d}_{x})\mathfrak{m}\leq \mathfrak{m}.\) Then by the coarea formula (see Proposition 4.2 in [73]) we get directly that \(B_{r}(x)=\{\mathsf{d}_{x}(\cdot)<r\}\) has finite perimeter for a.e. \(r>0\) and that \[\int_{0}^{+\infty}\mathrm{Per}(B_{r}(x),E)\mathrm{d}r=|\mathbf{D}\mathsf{d}_{ x}|(E)\leq\mathfrak{m}(E),\quad\forall\ E\subset\mathrm{X}\ \mathrm{Borel}.\] Then (2.18) follows taking \(E\coloneqq B_{R}(x)\cap A\) and observing that by the very definition in (2.17) it holds \(\mathrm{Per}(B_{r}(x),B_{R}(x))=0\) for every \(r>R\). Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space. Given a Borel set \(E\subset\mathrm{X}\) we define the upper and lower densities at \(x\) as \[\overline{D}(E,x)\coloneqq\varlimsup_{r\to 0^{+}}\frac{\mathfrak{m}(B_{r}(x) \cap E)}{\mathfrak{m}(B_{r}(x))},\quad\underline{D}(E,x)\coloneqq\varliminf_{r \to 0^{+}}\frac{\mathfrak{m}(B_{r}(x)\cap E)}{\mathfrak{m}(B_{r}(x))}.\] Clearly, if \(x\in\mathrm{X}\) is such that \(\overline{D}(E,x)>0\), by definition of limit superior we have \(x\in\overline{E}\) (since every open ball with center \(x\) must intersect \(E\)). The _essential boundary_ and the _essential interior_ are given respectively by \[\partial^{e}E \coloneqq\{x\in\mathrm{X}\ :\ \overline{D}(E,x)>0,\,\overline{D}(E^{c},x)>0\},\] \[E^{(1)} \coloneqq\{x\in\mathrm{X}\ :\ \overline{D}(E,x)=\underline{D}(E,x)=1\},\] which are both Borel sets. As a direct consequence of the definition of these sets, notice that if \(E\subset F\), then \(E^{(1)}\subset F^{(1)}\). Moreover, \(\partial^{e}E=\partial^{e}(E^{c})\). We collect in the next lemma all the elementary facts that we will need about the essential boundary and the essential interior. **Lemma 2.10**.: _Let \((\mathrm{X},\mathsf{d})\) be a metric space and \(E,F\subset\mathrm{X}\) be Borel sets. We have the following:_ 1. _If_ \(E\) _is open,_ \(E\subset E^{(1)}\)_._ 2. \(\partial^{e}E\subset\partial E\)_._ 3. \((E\cap F)^{(1)}\subset E^{(1)}\cap F^{(1)}\)_._ 4. 
\((E^{c})^{(1)}\subset(E^{(1)})^{c}\)_. In particular, if_ \(E\) _is open then_ \((E^{c})^{(1)}\subset E^{c}\)_._ 5. _If_ \(E\) _and_ \(F\) _are disjoint, then also_ \(E^{(1)}\) _and_ \(F^{(1)}\) _are disjoint._ 6. \(\partial^{e}(E\cap F)\cup\partial^{e}(E\cup F)\subset\partial^{e}E\cup\partial ^{e}F\)_._ Proof.: 1. Let \(x\in E\). Since \(E\) is open, \(B_{r}(x)\subset E\) for sufficiently small \(r>0\), thus \(\underline{D}(E,x)=1\) and \(x\in E^{(1)}\). 2. As we have already observed, if \(x\in\partial^{e}E\) it holds \(x\in\overline{E}\) and \(x\in\overline{E^{c}}\), thus \(x\in\partial E\). 3. The result is a direct consequence of the fact that \(\underline{D}(E\cap F,x)\leq\underline{D}(E,x)\leq 1\) for every \(x\in\mathrm{X}\). 4. Let \(x\in(E^{c})^{(1)}\), i.e. \(\underline{D}(E^{c},x)=1\). In particular, \[\frac{2}{3}\mathfrak{m}(B_{r}(x))<\mathfrak{m}(B_{r}(x)\cap E^{c})=\mathfrak{ m}(B_{r}(x))-\mathfrak{m}(B_{r}(x)\cap E)\] for sufficiently small \(r>0\). Thus \(\mathfrak{m}(B_{r}(x)\cap E)<\frac{1}{3}\mathfrak{m}(B_{r}(x))\) for \(r>0\) small enough, which implies \(x\notin E^{(1)}\). The second conclusion follows from what we have just proven and point \((i)\). 5. The assumption \(E\cap F=\emptyset\) is equivalent to \(E\subset F^{c}\). Passing to the essential interior it holds \(E^{(1)}\subset(F^{c})^{(1)}\) and using \((iv)\) one deduces that \(E^{(1)}\subset(F^{(1)})^{c}\) which gives the desired conclusion. 6. This is proven e.g. in [23, Prop. 1.16] (note that the doubling assumption on \(\mathfrak{m}\) is not used in that statement). We conclude this part with the following elementary and well known result. Since we could not find it stated exactly in this form in the literature, we include a proof. **Lemma 2.11**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space and let \(C\subset\mathrm{X}\) be closed. Then for every \(E\subset C\) Borel satisfying \(\mathsf{d}(E,\mathrm{X}\setminus C)>0\) it holds_ \[\mathrm{Per}(E)=\mathrm{Per}_{C}(E),\] _where \(\mathrm{Per}_{C}(E)\) denotes the perimeter of \(E\) computed in the metric measure space \((C,\mathsf{d}_{\big{|}C},\mathfrak{m}_{\big{|}C})\)._ Proof.: First observe that for every \(f\in\mathsf{LIP}_{\mathsf{loc}}(\mathrm{X})\) it holds \(f_{\big{|}_{C}}\in\mathsf{LIP}_{\mathsf{loc}}(C)\) and \(\mathrm{lip}_{C}(f)\leq\mathrm{lip}(f)\), where \(\mathrm{lip}_{C}(f)\) denotes the slope of \(f\) computed in the metric space \((C,\mathsf{d}_{\big{|}_{C}})\). This and the definitions implies \(\mathrm{Per}_{C}(E)\leq\mathrm{Per}(E).\) For the other inequality it is sufficient to find a sequence \(f_{n}\in\mathsf{LIP}_{\mathsf{loc}}(C,\mathsf{d}_{\big{|}_{C}})\) such that \(\mathsf{d}(\mathrm{supp}(f_{n}),\mathrm{X}\setminus C)>0\), \(f_{n}\to\chi_{E}\) in \(L^{1}(\mathfrak{m})\) and \(\int_{C}\mathrm{lip}_{C}(f_{n})\mathrm{d}\mathfrak{m}\to\mathrm{Per}_{C}(E)\). 
Indeed extending \(f_{n}\) by zero to the whole \(\mathrm{X}\) we have \(\int_{\mathrm{X}}\mathrm{lip}(f_{n})\mathrm{d}\mathfrak{m}=\int_{C}\mathrm{lip }_{C}(f_{n})\mathrm{d}\mathfrak{m}\), so that \[\mathrm{Per}(E)\leq\lim_{n\to+\infty}\int_{X}\mathrm{lip}(f_{n})\mathrm{d} \mathfrak{m}=\lim_{n\to+\infty}\int_{C}\mathrm{lip}_{C}(f_{n})\mathrm{d} \mathfrak{m}=\mathrm{Per}_{C}(E).\] To produce such sequence we consider a sequence \(g_{n}\in\mathsf{LIP}_{\mathsf{loc}}(C,\mathsf{d}_{\big{|}_{C}})\) such that \(g_{n}\to\chi_{E}\) in \(L^{1}(C,\mathfrak{m}_{\big{|}_{C}})\) and \(\int_{C}\mathrm{lip}_{C}(g_{n})\mathrm{d}\mathfrak{m}\to\mathrm{Per}_{C}(E)\), which exists by definition. Then we take any \(\eta\in\mathsf{LIP}_{\mathsf{loc}}(C)\) satisfying \(\mathsf{d}(\mathrm{supp}(\eta),\mathrm{X}\setminus C)>0\), \(\eta=1\) in a neighbourhood of \(E\), \(0\leq\eta\leq 1\) and set \(f_{n}\coloneqq\eta g_{n}\in\mathsf{LIP}_{\mathsf{loc}}(C,\mathsf{d}|_{C})\). Clearly \(f_{n}\to\chi_{E}\) in \(L^{1}(C;\mathfrak{m}|_{C})\). Moreover by the Leibniz rule for the slope \(\operatorname{lip}_{C}(f_{n})\leq g_{n}\!\operatorname{lip}_{C}(\eta)+ \operatorname{lip}_{C}(g_{n})\). In addition \(\operatorname{lip}_{C}(\eta)=0\) in \(E\). Therefore \[\operatorname{Per}_{C}(E)\leq\varliminf_{n}\int_{C}\operatorname{lip}_{C}(f_ {n})\mathrm{d}\mathfrak{m}\leq\varlimsup_{n}\int_{C}\operatorname{lip}_{C}(f_ {n})\mathrm{d}\mathfrak{m}\] \[\leq\varlimsup_{n}\int_{C}\operatorname{lip}_{C}(g_{n})\mathrm{ d}\mathfrak{m}+\operatorname{lip}_{C}(\eta)\int_{\mathrm{X}\setminus E}g_{n} \mathrm{d}\mathfrak{m}=\operatorname{Per}_{C}(E),\] where the second term vanishes because \(g_{n}\to\chi_{E}\) in \(L^{1}(C;\mathfrak{m}|_{C})\). ### PI spaces Most of the arguments along the note will be carried out in the general setting of locally doubling m.m. spaces supporting a Poincare inequality, also called PI spaces. We refer to [19, 57] and references therein for a thorough introduction on this topic and recall here only the properties of these spaces that will be used in this note. **Definition 2.12** (PI space).: A metric measure space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is said to be a _PI space_ if: * it is _uniformly locally doubling_, i.e. if there exists a function \(C_{D}\,:(0,\infty)\to(0,\infty)\) such that \[\mathfrak{m}\big{(}B_{2r}(x)\big{)}\leq C_{D}(R)\,\mathfrak{m}\big{(}B_{r}(x) \big{)}\,,\quad\text{for every $0<r<R$ and $x\in\mathrm{X}$},\] * supports a _weak local_\((1,1)\)_-Poincare inequality_, i.e there exists a constant \(\lambda\geq 1\) and a function \(C_{P}\,:(0,\infty)\to(0,\infty)\) such that for any \(f\in\mathsf{LIP}_{\mathsf{loc}}(X)\) it holds \[\fint_{B_{r}(x)}\left|f-\fint_{B_{r}(x)}f\,\mathrm{d}\mathfrak{m}\right| \mathrm{d}\mathfrak{m}\leq C_{P}(R)\,r\fint_{B_{\lambda r}(x)}\operatorname{ lip}(f)\,\mathrm{d}\mathfrak{m}\,,\quad\text{for every $0<r<R$ and $x\in\mathrm{X}$}.\] Observe that the uniformly locally doubling assumption implies that PI spaces are proper. Additionally PI spaces are connected and locally connected (see e.g. [19, Theorem 4.32] or [20, Prop. 4.8]). We will need the following approximation result, which is a variation of [74, Lemma 3.6]. **Lemma 2.13** (Approximation with non-vanishing slope).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space. 
Then for every open subset \(\Omega\subset\mathrm{X}\) and any non-negative \(u\in\mathsf{LIP}_{c}(\Omega)\) there exists a sequence of non-negative \(u_{n}\in\mathsf{LIP}_{c}(\Omega)\) satisfying \(\operatorname{lip}(u_{n})\neq 0\)\(\mathfrak{m}\)-a.e. in \(\{u_{n}>0\}\) and such that \(u_{n}\to u\) in \(W^{1,2}(\mathrm{X})\)._ Proof.: Since \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is a bounded PI space it admits a geodesic distance \(\tilde{\mathsf{d}}\) that is bi-Lipschitz equivalent to \(\mathsf{d}\), i.e. \(L^{-1}\tilde{\mathsf{d}}\leq\mathsf{d}\leq L\tilde{\mathsf{d}}\) for some constant \(L\geq 1\) (see e.g. [57, Corollary 8.3.16]). Hence we can apply [74, Lemma 3.6] to deduce that the conclusion of the lemma holds in the m.m.s. \((\mathrm{X},\tilde{\mathsf{d}},\mathfrak{m})\). However \(u\in\mathsf{LIP}_{c}(\Omega;\mathsf{d})\) if and only if \(u\in\mathsf{LIP}_{c}(\Omega;\tilde{\mathsf{d}})\), with \(\operatorname{lip}(u)\geq L^{-1}\tilde{\operatorname{lip}}(u)\), where \(\tilde{\operatorname{lip}}(\cdot)\) denotes the slope computed in the metric space \((\mathrm{X},\tilde{\mathsf{d}})\). Hence if \(\tilde{\operatorname{lip}}(u)\neq 0\)\(\mathfrak{m}\)-a.e. in \(\{u>0\}\), then \(\operatorname{lip}(u)\neq 0\)\(\mathfrak{m}\)-a.e. in \(\{u>0\}\). Moreover, since \(\mathsf{LIP}_{\mathsf{loc}}(\mathrm{X},\mathsf{d})=\mathsf{LIP}_{\mathsf{loc}}(\mathrm{X},\tilde{\mathsf{d}})\) and \(\operatorname{lip}(\cdot)\leq L\tilde{\operatorname{lip}}(\cdot)\) we have that \(\|u\|_{W^{1,2}(\mathrm{X},\mathsf{d},\mathfrak{m})}\leq L\|u\|_{W^{1,2}(\mathrm{X},\tilde{\mathsf{d}},\mathfrak{m})}\) for all \(u\in\mathsf{LIP}_{bs}(\mathrm{X},\mathsf{d})\). Therefore the conclusion holds also for the m.m.s. \((\mathrm{X},\mathsf{d},\mathfrak{m})\). The above lemma allows us to give the following characterization of \(\lambda_{1}(\Omega)\). **Lemma 2.14** (Characterization of \(\lambda_{1}(\Omega)\) via functions with non-vanishing slope).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space and \(\Omega\subset\mathrm{X}\) be open. Then_ \[\lambda_{1}(\Omega)=\inf\left\{\frac{\int|Du|^{2}\mathrm{d}\mathfrak{m}}{\int u^{2}\mathrm{d}\mathfrak{m}}:u\in\mathsf{LIP}_{c}(\Omega),\ u\not\equiv 0,\ u\geq 0,\ \operatorname{lip}(u)\neq 0\ \mathfrak{m}\text{-a.e. in $\{u>0\}$}\right\}\,. \tag{2.19}\] Proof.: For every \(u\in\mathsf{LIP}_{bs}(\Omega)\), setting \(\tilde{u}\coloneqq|u|\) we have \(\tilde{u}\in\mathsf{LIP}_{bs}(\Omega)\) and by the chain rule (see (2.1)) also that \(|D\tilde{u}|=|Du|\)\(\mathfrak{m}\)-a.e.. This shows that (2.19) holds if we remove the requirement that \(\operatorname{lip}(u)\neq 0\)\(\mathfrak{m}\)-a.e. in \(\{u>0\}\). From this, to get the validity of the full (2.19) it is sufficient to apply Lemma 2.13. We recall the following deep result proved in [31], relating the notions of minimal weak upper gradient and slope, in the setting of PI spaces. **Theorem 2.15**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space. Then_ \[\operatorname{lip}(f)=|Df|,\quad\mathfrak{m}\text{-a.e., for every }f\in\mathsf{LIP}_{bs}(\mathrm{X}). \tag{2.20}\] The following is a consequence of the Rellich-Kondrachov compactness theorem in PI spaces. **Theorem 2.16** ([54, Theorem 8.3]).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space and \(\Omega\subset\mathrm{X}\) be open and bounded. 
Then the embedding \(W_{0}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) is compact._ From Theorem 2.16 and the discussion in Section 2.1 we deduce that The Dirichlet Laplacian has discrete spectrum on any bounded open subset of an infinitesimally Hilbertian PI space. The following result is well known. In particular the proof can be achieved by a standard Moser iteration scheme (see e.g. [52, Theorem 8.24]), which is available in a PI space. Indeed, as firstly observed in [84] (see also [54]), a Poincare inequality and a doubling condition together imply a Sobolev inequality, which is then sufficient to perform the Moser scheme (see, for example, [19, Chapter 8] or [21]). **Theorem 2.17** (Continuity of eigenfunctions).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian PI space, \(\Omega\subset\mathrm{X}\) be open and \(u\) be a Dirichlet or Neumann eigenfunction of the Laplacian in \(\Omega\). Then \(u\) is locally Holder continuous in \(\Omega\)._ We now pass to the properties of sets of finite perimeter in the setting of PI spaces. As the measure \(\mathfrak{m}\) is locally doubling, the Lebesgue's differentiation theorem holds (see e.g. [57, Section 3.4]) hence we have \(\mathfrak{m}(E\triangle E^{(1)})=0\) for every Borel set \(E\), where \(E\triangle E^{(1)}:=(E\setminus E^{(1)})\cup(E^{(1)}\setminus E)\) denotes the symmetric difference between \(E\) and \(E^{(1)}\). Moreover by [2, Theorem 5.3] we have that for every set of finite perimeter \(E\subset\mathrm{X}\) the measure \(P(E,\cdot)\) is concentrated on \(\partial^{c}E.\) In particular we get \[\operatorname{Per}(B^{c},.)=\operatorname{Per}(B,.)=\operatorname{Per}(B,.)_{ \big{|}B^{c}}, \tag{2.22}\] for every ball \(B=B_{r}(x)\subset\mathrm{X}\) having finite perimeter, having used \(\partial^{c}B_{r}(x)\subset\partial B_{r}(x)\subset(B_{r}(x))^{c}\) (see \((ii)\) in Lemma 2.10). It is well known that every PI space admits an isoperimetric inequality (see [2, 73, 54]). We report in the following statement a simplified version sufficient to our purposes. **Proposition 2.18** (Isoperimetric inequality for small volumes).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space satisfying for some constant \(s>1\) and \(c>0\)_ \[\frac{\mathfrak{m}(B_{r}(x))}{\mathfrak{m}(B_{R}(x))}\geq c\left(\frac{r}{R} \right)^{s},\quad\forall\,x\in\mathrm{X},\quad\forall\,0<r<R. \tag{2.23}\] _Then there exist constants \(w_{0}=w_{0}(\mathrm{X})>0\) and \(C_{I}=C_{I}(\mathrm{X},s)>0\) such that_ \[\operatorname{Per}(E)\geq C_{I}\mathfrak{m}(E)^{\frac{s-1}{s}},\quad\forall\, E\subset\mathrm{X}\text{ Borel such that }\mathfrak{m}(E)\leq w_{0}. \tag{2.24}\] Proof.: By [2, Theorem 4.3] there exist constants \(\sigma=\sigma(\mathrm{X})\geq 1\) and \(C=C(\mathrm{X},s)>0\) such that \[\operatorname{Per}(E,B_{\sigma r}(x))\geq C\frac{\mathfrak{m}(B_{r}(x))^{ \frac{1}{s}}}{r}\min\left(\mathfrak{m}(B_{r}(x)\cap E),\mathfrak{m}(B_{r}(x) \setminus E)\right)^{\frac{s-1}{s}},\quad\forall x\in\mathrm{X},\quad\forall r >0.\] Taking \(r\coloneqq\operatorname{diam}(\mathrm{X})\) and \(w_{0}\coloneqq\mathfrak{m}(\mathrm{X})/2\) the result follows (recall that \(\mathfrak{m}(\mathrm{X})<+\infty\) since \(\mathrm{X}\) is bounded). We conclude this part reporting the following technical result. **Proposition 2.19** ([12, Lemma 2.6]).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space. Let \(E,F\subseteq\mathrm{X}\) be sets of finite perimeter with \(P(E,\partial^{e}F)=0\). 
Then_ \[P(E\cap F,\cdot)\leq P(E,\cdot)\big{|}_{F^{(1)}}+P(F,\cdot)\big{|}_{E^{(1)}}.\] ### Polya-Szego inequality in metric measure spaces Here we report a version of the Polya-Szego rearrangement inequality for metric measure spaces, following [74, 76]. The main difference with the classical version in the Euclidean space [80] is that, even if the initial function lives in a metric space, the symmetrization will be defined in \(\mathbb{R}^{N}\). The result is a generalization of the Polya-Szego inequality introduced in [17] in the case of Riemannian manifolds. **Definition 2.20** (Distribution function).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space, \(\Omega\subseteq\mathrm{X}\) be an open set with \(\mathfrak{m}(\Omega)<+\infty\) and \(u:\Omega\to[0,+\infty)\) be a non-negative Borel function. We define \(\mu:[0,+\infty)\to[0,\mathfrak{m}(\Omega)]\), the distribution function of \(u\), as \[\mu(t):=\mathfrak{m}(\{u>t\}). \tag{2.25}\] For \(u\) and \(\mu\) as above, we let \(u^{\#}\) be the generalized inverse of \(\mu\), defined by \[u^{\#}(s):=\begin{cases}\operatorname{ess}\sup u&\text{if }s=0,\\ \inf\{t:\mu(t)<s\}&\text{if }s>0.\end{cases}\] It can be checked that \(u^{\#}\) is non-increasing and left-continuous. Next, we define the _Euclidean monotone rearrangement_ into the Euclidean space \((\mathbb{R}^{N},|\cdot|,\mathscr{L}^{N})\), where \(\mathscr{L}^{N}\) is the \(N\)-dimensional Lebesgue measure. From now on, we denote by \(\omega_{N}\coloneqq\mathscr{L}^{N}(B_{1}(0))\) the Lebesgue measure of the unit ball in the Euclidean space \(\mathbb{R}^{N}\). **Definition 2.21** (Euclidean monotone rearrangement).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space and \(\Omega\subset\mathrm{X}\) be open with \(\mathfrak{m}(\Omega)<+\infty\) and \(N\in\mathbb{N}.\) For any Borel function \(u\colon\Omega\to\mathbb{R}^{+}\), we define \(\Omega^{*}:=B_{r}(0)\subset\mathbb{R}^{N}\), choosing \(r>0\) so that \(\mathscr{L}^{N}(B_{r}(0))=\mathfrak{m}(\Omega)\) (i.e. \(r^{N}=\omega_{N}^{-1}\mathfrak{m}(\Omega)\)) and the monotone rearrangement \(u_{N}^{*}:\Omega^{*}\to\mathbb{R}^{+}\) by \[u_{N}^{*}(x):=u^{\#}(\mathscr{L}^{N}(B_{|x|}(0)))=u^{\#}(\omega_{N}|x|^{N}), \qquad\forall x\in\Omega^{*}.\] In particular \(u\) and \(u_{N}^{*}\) are equimeasurable, i.e. \(\mathfrak{m}(\{u>t\})=\mathscr{L}^{N}(\{u_{N}^{*}>t\})\) for all \(t>0.\) In the sequel, whenever we fix \(\Omega\) and \(u\colon\Omega\to[0,\infty)\), the set \(\Omega^{*}\) and the rearrangement \(u_{N}^{*}\) are automatically defined as above. Observe also that, given \(u\in L^{2}(\Omega)\), its monotone rearrangement must be defined by fixing a Borel representative of \(u\). However, this choice does not affect the outcome object \(u_{N}^{*}\), as clearly the distribution function \(\mu(t)\) of \(u\) is independent of the representative. The following result is essentially contained in [76], see in particular [76, Remark 3.7], (see also [74] for a similar result), since the only difference is that here the rearrangement is defined in \(\mathbb{R}^{N}\) instead that on an interval. Nevertheless we include a short argument outlining the main points of the proof. 
**Theorem 2.22** (Euclidean Polya-Szego inequality).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space, \(\Omega\subsetneq\mathrm{X}\) be open and fix \(N\in\mathbb{N}\setminus\{1\}.\) Suppose there exists a constant \(\tilde{C}>0\) such that_ \[\mathrm{Per}(E)\geq\tilde{C}\mathfrak{m}(E)^{\frac{N-1}{N}},\quad\forall\,E \subset\Omega\text{ Borel}. \tag{2.26}\] _Then:_ 1. _For every_ \(u\in\mathsf{LIP}_{c}(\Omega)\) _non-negative,_ \(u\not\equiv 0\)_, with_ \(\mathrm{lip}(u)\neq 0\)__\(\mathfrak{m}\)_-a.e. in_ \(\{u>0\}\)_, then_ \(u_{N}^{*}\in\mathsf{LIP}_{c}(\Omega^{*})\) _and it holds_ \[\int_{\{u\leq s\}}|Du|^{2}\,\mathrm{d}\mathfrak{m}\geq\int_{0}^{s}\left(\frac{ \mathrm{Per}(\{u>t\})}{N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}}\right) ^{2}\int_{\mathbb{R}^{N}}|Du_{N}^{*}|\mathrm{d}\mathrm{Per}(\{u_{N}^{*}>t\}) \,\mathrm{d}t,\quad\forall s\in(0,\max u].\] (2.27) 2. _The Euclidean-rearrangement maps_ \(W_{0}^{1,2}(\Omega)\) _to_ \(W_{0}^{1,2}(\Omega^{*})\) _and_ \[\int_{\Omega}|Du|^{2}\mathrm{d}\mathfrak{m}\geq\left(\frac{\tilde{C}}{N \omega_{N}^{1/N}}\right)^{2}\int_{\Omega^{*}}|Du_{N}^{*}|^{2}\mathrm{d} \mathscr{L}^{N},\quad\forall\,u\in\,W_{0}^{1,2}(\Omega).\] (2.28) Proof.: It is enough to prove \(i)\), since \(ii)\) then follows by approximation with Lipschitz functions using Lemma 2.13 as in [76, Theorem 3.6] (see also [74]). We fix \(u\in\mathsf{LIP}_{c}(\Omega)\), \(u\not\equiv 0\), with \(\mathrm{lip}(u)\neq 0\)\(\mathfrak{m}\)-a.e. \(\{u>0\}\). Set \(M\coloneqq\sup u.\) Under these assumptions \(\mu\) is strictly monotone, absolutely continuous (hence differentiable almost everywhere) and \[\int_{\{u\leq s\}}|Du|^{2}\,\mathrm{d}\mathfrak{m}\geq\int_{0}^{s}\frac{ \mathrm{Per}(\{u>t\})^{2}}{-\mu^{\prime}(t)}\,\mathrm{d}t,\quad\forall s\in(0,M]. \tag{2.29}\] This can be seen arguing exactly as in the proof of [74, Prop. 3.12 and (3.23)] (see also [76]), recalling also that \(|\mathbf{D}u|\leq\mathrm{lip}(u)\mathfrak{m}=|Du|\mathfrak{m}\) (see (2.20)). Next we claim that \(u_{N}^{*}\in\mathsf{LIP}_{c}(\Omega^{*})\). Recall that by definition \(\Omega^{*}=B_{r}(0)\subset\mathbb{R}^{N}\), where \(r>0\) satisfies \(\mathscr{L}^{N}(B_{r}(0))=\mathfrak{m}(\Omega).\) From the definitions \(u_{N}^{*}(x)=\tilde{u}_{N}^{*}(|x|)\), where \(\tilde{u}_{N}^{*}:[0,r]\to\mathbb{R}^{+}\) is the rearrangement into the space \(([0,\infty),|.|,N\omega_{N}t^{N-1}\mathrm{d}t)\) as defined in [76, Definition 3.1]. Then the fact that \(u_{N}^{*}\in\mathsf{LIP}(\Omega^{*})\) follows directly from \(\tilde{u}_{N}^{*}\in\mathsf{LIP}[0,r]\) which is proved in [76, Prop. 3.4] under the same assumptions on \(u\) and \(\Omega\). Finally \(\mathrm{supp}(u_{N}^{*})\subset\Omega^{*}\). Indeed \(\mathrm{supp}(u)\subsetneq\Omega\), otherwise \(\Omega\) would be closed and would coincide with X (as X is connected). This implies that \(\mathscr{L}^{N}(\{u_{N}^{*}>t\})\leq\mathfrak{m}(\mathrm{supp}(u))<\mathfrak{ m}(\Omega)=\mathscr{L}^{N}(\Omega^{*})\), for all \(t>0\), because \(\mathfrak{m}(\Omega\setminus\mathrm{supp}(u))>0\), as non-empty open sets in X have positive measure. Since \(u_{N}^{*}\) is a radial function centered at the origin this shows \(\mathrm{supp}(u_{N}^{*})\subset\Omega\). 
Next we observe that \(\tilde{u}_{N}^{*}\) is strictly decreasing in \((0,\mathfrak{m}(\mathrm{supp}(u)))\) (since \(\mu(t)\) is continuous) and in particular \(\{u_{N}^{*}>t\}=B_{r_{t}}(0)\) (and \(\{u_{N}^{*}=t\}=\partial B_{r_{t}}(0)\)) for some \(r_{t}\in[0,r]\), for every \(t\in(0,M)\). Note that \(r_{t}\) can be computed explicitly to be \(r_{t}=(\omega_{N}^{-1}\mu(t))^{1/N}\), which also shows that \((0,M)\ni t\mapsto r_{t}\) is a strictly monotone and locally absolutely continuous map. In particular \[\mathcal{H}^{N-1}(\partial B_{r_{t}}(0))=N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}. \tag{2.30}\] Combining these observations with the expression for the derivative of \(\mu\) given in [74, Lemma 3.10] (see also [76, Lemma 3.5]) we have \[-\mu^{\prime}(t)=\int_{\partial B_{r_{t}}(0)}(\mathrm{lip}(u_{N}^{*}))^{-1}\mathrm{d}\mathcal{H}^{N-1}=\frac{N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}}{\mathrm{lip}(\tilde{u}_{N}^{*})(r_{t})}\quad\text{for a.e.}\ t\in(0,M), \tag{2.31}\] where we have used (2.30) and that \(\mathrm{lip}(u_{N}^{*})(x)=\mathrm{lip}(\tilde{u}_{N}^{*})(|x|)\), which easily follows from the identity \(u_{N}^{*}(x)=\tilde{u}_{N}^{*}(|x|)\). Plugging the above in (2.29) we reach \[\int_{\{u\leq s\}}|Du|^{2}\,\mathrm{d}\mathfrak{m} \geq\int_{0}^{s}\frac{\mathrm{Per}(\{u>t\})^{2}\,\mathrm{lip}(\tilde{u}_{N}^{*})(r_{t})}{N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}}\,\mathrm{d}t\] \[=\int_{0}^{s}\left(\frac{\mathrm{Per}(\{u>t\})}{N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}}\right)^{2}\mathrm{lip}(\tilde{u}_{N}^{*})(r_{t})\,N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}\,\mathrm{d}t\] \[=\int_{0}^{s}\left(\frac{\mathrm{Per}(\{u>t\})}{N\omega_{N}^{\frac{1}{N}}\mu(t)^{\frac{N-1}{N}}}\right)^{2}\int_{\partial B_{r_{t}}(0)}\mathrm{lip}(u_{N}^{*})\mathrm{d}\mathcal{H}^{N-1}\,\mathrm{d}t\,,\] where for the last step we argue as in (2.31). This concludes the proof.

### RCD spaces

For brevity we do not recall the definition of \(\mathrm{RCD}(K,N)\) spaces (with \(N\in[1,\infty)\) and \(K\in\mathbb{R}\)) since it will not be directly used in this note, instead we recall here all the properties of these spaces that will be needed. For further details on the definition and on the theory of metric measure spaces with synthetic Ricci curvature lower bound we refer to the surveys [49, 3] and references therein. First recall that every RCD space is infinitesimally Hilbertian from the very definition. In every \(\mathrm{RCD}(K,N)\) space the _Bishop-Gromov inequality_ holds [87], that is \[\frac{\mathfrak{m}(B_{r}(x))}{v_{K,N}(r)}\geq\frac{\mathfrak{m}(B_{R}(x))}{v_{K,N}(R)},\quad\forall x\in\mathrm{X},\,\forall 0<r<R, \tag{2.32}\] where the quantities \(v_{K,N}(r)\) coincide, for \(N\in\mathbb{N}\), with the volume of the ball of radius \(r\) in the model space of curvature \(K\) and dimension \(N\). For the definition of \(v_{K,N}\) for non integer \(N\) see [87], however in the results of this note only the case \(N\in\mathbb{N}\) will be relevant. As a consequence of (2.32) we obtain that for every \(R_{0}>0\), \(N<+\infty\) and \(K\in\mathbb{R}\) there exists a constant \(C_{R_{0},K,N}\) such that for every \(\mathrm{RCD}(K,N)\) space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) it holds: \[\frac{\mathfrak{m}(B_{r}(x))}{\mathfrak{m}(B_{R}(x))}\geq C_{R_{0},K,N}\left(\frac{r}{R}\right)^{N},\quad\forall x\in\mathrm{X},\,\forall 0<r<R\leq R_{0}. 
\tag{2.33}\] Taking \(R=2r\) this shows also that every \(\mathrm{RCD}(K,N)\) space with \(N<+\infty\) is uniformly locally doubling (recall Definition 2.12). It is also proved in [81, 82] that every \(\mathrm{RCD}(K,N)\) space supports a weak local (1,1)-Poincare inequality. Combining the last two observations we conclude that every \(\mathrm{RCD}(K,N)\) space with \(N<+\infty\) is an infinitesimally Hilbertian PI space. We recall the following embedding result. **Proposition 2.23** ([50, Theorem 6.3, _ii_)]).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an \(\mathrm{RCD}(K,N)\) space and let \(\Omega\subset\mathrm{X}\) be bounded. Then the inclusion \(W^{1,2}_{0}(\Omega)\hookrightarrow L^{2}(\mathfrak{m})\) is compact._ Given an \(\mathrm{RCD}(K,N)\) space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) we define the _Bishop-Gromov_ density function \(\theta_{N}:\mathrm{X}\to(0,+\infty]\) by \[\theta_{N}(x)\coloneqq\lim_{r\to 0^{+}}\frac{\mathfrak{m}(B_{r}(x))}{\omega_{N}r^{N}}=\lim_{r\to 0^{+}}\frac{\mathfrak{m}(B_{r}(x))}{v_{K,N}(r)}\,, \tag{2.34}\] where the existence of the limits is ensured by (2.32) (see [41, Def. 1.9]). As shown in [41, Lemma 2.2] the function \(\theta_{N}\) is lower-semicontinuous in \(\mathrm{X}\). A key property that we will need is the validity of a local almost-Euclidean isoperimetric inequality. We will use the following version essentially proved in [76] (see also [11, 29] for similar results in the setting of RCD spaces and [17] for the Riemannian setting). **Theorem 2.24** (Local almost-Euclidean isoperimetric inequality).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an \(\mathrm{RCD}(K,N)\) space for some \(N\in(1,\infty),K\in\mathbb{R}\). Then for every \(x\in\mathrm{X}\) with \(\theta_{N}(x)<+\infty\) and every \(\varepsilon\in(0,\theta_{N}(x))\) there exists \(\rho=\rho(\varepsilon,x,N)\) such that_ \[\mathrm{Per}(E)\geq\mathfrak{m}(E)^{\frac{N-1}{N}}N\omega_{N}^{\frac{1}{N}}(\theta_{N}(x)-\varepsilon)^{\frac{1}{N}}(1-\varepsilon),\quad\forall\,E\subset B_{\rho}(x)\text{ Borel}. \tag{2.35}\] Proof.: It is sufficient to prove the statement with \(\varepsilon\in(0,\theta_{N}(x)/2\wedge 1/2)\). From [76, Theorem 3.9] there exists \(\bar{R}=\bar{R}(\varepsilon,K,N)\) such that for every \(x\in\mathrm{X}\), \(R\in(0,\bar{R}]\) it holds \[\mathrm{Per}(E)\geq\mathfrak{m}(E)^{\frac{N-1}{N}}N\omega_{N}^{\frac{1}{N}}\theta_{N,R}(x)^{\frac{1}{N}}(1-(2C_{\varepsilon,R}^{1/N}(x)+1)\varepsilon-\varepsilon),\quad\forall\,E\subset B_{\varepsilon R}(x),\] where \(\theta_{N,\rho}(x)\coloneqq\frac{\mathfrak{m}(B_{\rho}(x))}{\omega_{N}\rho^{N}}\) and \(C_{\varepsilon,R}(x)\coloneqq\frac{\theta_{N,\varepsilon R}(x)}{\theta_{N,R}(x)}.\) Since \(\theta_{N}(x)=\lim_{\rho\to 0}\theta_{N,\rho}(x)<+\infty\) there exists \(\bar{r}=\bar{r}(x,\varepsilon)\) so that \(\theta_{N,\rho}(x)\in(\theta_{N}(x)-\varepsilon,\theta_{N}(x)+\varepsilon)\) for all \(\rho\leq\bar{r}\). Moreover since \(\varepsilon<\theta_{N}(x)/2\) we have that for every \(R\leq\bar{r}\) it holds \(C_{\varepsilon,R}(x)\leq\frac{\frac{3}{2}\theta_{N}(x)}{\frac{1}{2}\theta_{N}(x)}=3.\) Hence choosing \(\rho=\rho(x,\varepsilon,K,N)\coloneqq\varepsilon(\bar{r}(\varepsilon,x)\wedge\bar{R}(\varepsilon,K,N))\), we have that \[\mathrm{Per}(E)\geq\mathfrak{m}(E)^{\frac{N-1}{N}}N\omega_{N}^{\frac{1}{N}}(\theta_{N}(x)-\varepsilon)^{\frac{1}{N}}(1-8\varepsilon),\quad\forall\,E\subset B_{\rho}(x),\] from which the conclusion follows. 
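For later comparison it may be useful to keep in mind the Euclidean model case \((\mathbb{R}^{N},|\cdot|,\mathscr{L}^{N})\), where \(\theta_{N}\equiv 1\): there the sharp isoperimetric inequality \[\mathrm{Per}(E)\geq N\omega_{N}^{\frac{1}{N}}\mathscr{L}^{N}(E)^{\frac{N-1}{N}},\quad\forall\,E\subset\mathbb{R}^{N}\text{ Borel with }\mathscr{L}^{N}(E)<+\infty,\] holds globally, with equality when \(E\) is a ball, since \(\mathrm{Per}(B_{r})=N\omega_{N}r^{N-1}\) and \(\mathscr{L}^{N}(B_{r})=\omega_{N}r^{N}\). Theorem 2.24 asserts that, at sufficiently small scales around a point \(x\) with \(\theta_{N}(x)<+\infty\), the same inequality holds up to an arbitrarily small error \(\varepsilon\), with the constant weighted by the density factor \(\theta_{N}(x)^{1/N}\).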
Next we introduce the subclass of non-collapsed \(\mathrm{RCD}(K,N)\) space. **Definition 2.25** ([41]).: An \(\mathrm{RCD}(K,N)\) space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is said to be _non-collapsed_ if \(\mathfrak{m}=\mathcal{H}^{N}\), where \(\mathcal{H}^{N}\) denotes the \(N\)-dimensional Hausdorff measure on \((\mathrm{X},\mathsf{d})\). After the works [60, 26] this definition is known to be equivalent to one given in [67]. As showed in [41, Theorem 1.12] if \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is a non-collapsed \(\mathrm{RCD}(K,N)\) space, then \(N\in\mathbb{N}.\) Moreover by [41, Corollary 1.7] (see also [8, Theorem 1.4]) it holds that \[\begin{split}\theta_{N}(x)&=1,\quad\mathfrak{m}\text {-a.e. }x\in\mathrm{X},\\ \theta_{N}(x)&\leq 1,\quad\forall x\in\mathrm{X}. \end{split} \tag{2.36}\] **Remark 2.26** (Consistency with the smooth setting).: It is worth to recall that RCD spaces are compatible with the smooth setting in the following sense. Any \(N\)-dimensional Riemannian manifold \((M,g)\) with Ricci curvature bounded below by a number \(K\in\mathbb{R}\), i.e. \(\mathrm{Ric}_{g}\geq Kg\), endowed with the Riemannian distance and volume measure is a non-collapsed \(\mathrm{RCD}(K,N)\) metric measure space [90, 34]. In particular the metric measure space \((\mathbb{R}^{N},|\cdot|,\mathcal{H}^{N})\), where \(\mathcal{H}^{N}\) is the \(N\)-dimensional Hausdorff measure is a non-collapsed \(\mathrm{RCD}(0,N)\) space. \(\blacksquare\) We conclude recalling the validity of the Weyl law in non-collapsed setting proved in [92, 8]. **Theorem 2.27** (Weyl law in RCD spaces).: _Let \((\mathrm{X},\mathsf{d},\mathcal{H}^{N})\) be an \(\mathrm{RCD}(K,N)\) space and \(\Omega\subset\mathrm{X}\) be open and bounded. Then_ \[\lim_{k\to+\infty}\frac{k}{\lambda_{k}^{\mathcal{D}}(\Omega)^{N/2}}=\frac{ \omega_{N}}{(2\pi)^{N}}\mathcal{H}^{N}(\Omega), \tag{2.37}\] _where \(\{\lambda_{k}^{\mathcal{D}}(\Omega)\}_{k\in\mathbb{N}}\) denotes the spectrum of the Dirichlet Laplacian in \(\Omega\) defined in (2.10)._ Proof.: By the results in [92, 8] it holds that \[\lim_{\lambda\to+\infty}\frac{N(\lambda)}{\lambda^{N/2}}=\frac{\omega_{N}}{(2 \pi)^{N}}\mathcal{H}^{N}(\Omega), \tag{2.38}\] where \(N(\lambda)\coloneqq\#\{k\in\mathbb{N}\ :\lambda_{k}^{\mathcal{D}}(\Omega) \leq\lambda\}\). This implies (2.37). To see this, set \(N^{-}(\lambda)\coloneqq\#\{k\in\mathbb{N}\ :\ \lambda_{k}^{\mathcal{D}}(\Omega)<\lambda\}\), and observe \[N(\lambda_{k}^{\mathcal{D}}(\Omega)-1)\leq N^{-}(\lambda_{k}^{\mathcal{D}}( \Omega))\leq k\leq N(\lambda_{k}^{\mathcal{D}}(\Omega)),\quad\forall k\in \mathbb{N}.\] Note that in [92] formula (2.38) is stated with the further assumption that \(\mathrm{diam}(\Omega)<\mathrm{diam}(\mathrm{X})\) (when \(\Omega\neq\mathrm{X}\)), however this assumption is needed in [92] only to ensure the discreteness of the spectrum, for which we know by Proposition 2.23 that the boundedness of \(\Omega\) is sufficient. ## 3 Sobolev spaces and Neumann eigenfunctions in uniform domains Our method to deal with Neumann eigenfunctions in domains with irregular boundary in \(\mathbb{R}^{N}\) (or in more abstract RCD spaces) will be to translate the problem to a global one, by viewing the domain as a metric measure space. The idea is that if the boundary satisfies an appropriate regularity condition, then the resulting m.m. space is a PI space and in particular has good analytic properties, like the isoperimetric inequality and embedding theorems (recall Section 2.3). 
We stress that the possibility of using abstract metric spaces to deal with Neumann and mixed boundary value problems in irregular domains of the Euclidean space was noted before (see e.g. [19, Page 33]). The key notion that we will use is the one of _uniform domain_ that we now introduce. **Definition 3.1** (Uniform domains).: A bounded open subset \(\Omega\) of a metric space \((\mathrm{X},\mathsf{d})\) is called a _uniform domain_ if there exists a constant \(C>1\) such that every pair of points \(x,y\in\Omega\) can be joined by a rectifiable curve \(\gamma:[0,1]\to\Omega\) such that \(l(\gamma)\leq C\mathsf{d}(x,y)\) and \[\mathsf{d}(\gamma(t),\partial\Omega)\geq C^{-1}\min\left(l(\gamma_{\left|[0,t \right]}),l(\gamma_{\left|[t,1\right]})\right),\quad\forall t\in[0,1].\] Uniform domains were introduced in [72] and [65] (see also [89]) and are central in the theory of BV and Sobolev extension domains (see [22, 47, 58, 59, 65, 68]). Uniform domains are also equivalent to one-sided non tangentially accessible (1-sided NTA) domains (see e.g. [13, Theorem 2.15] or the Appendix in [55]). They include Lipschitz domains, but also more irregular domains such as the quasi-disks, i.e. images of the unit ball under a global quasi-conformal maps (see [72, Theorem 2.15], [47], [63, Section 3] or also [88, Remarks 2.1]). In particular the interior of the Koch snowflake is an example of uniform domain. It has also been proved recently that in every doubling quasi-convex metric space (in particular any bounded \(\mathrm{RCD}(K,N)\) space) any bounded open set can be approximated from inside and outside by uniform domains, see [83] for the precise statement. It easily follows from the definition that every uniform domain is both connected and locally connected (see e.g. [71]). The main goal of this section is to prove the following theorem. It says that a uniform domain in a non-collapsed \(\mathrm{RCD}(K,N)\) space (and in particular in \(\mathbb{R}^{N}\)) when viewed as a m.m. space admits an almost Euclidean isoperimetric inequality near almost every point (point \((ii)\)) and the eigenvalues of the Laplacian satisfy a weak version of the Weyl law (point \((iii)\)). Item \((i)\) is a technical condition that we will need to apply the Faber-Krahn inequality in Proposition 5.1. We remark that, differently from the preceding sections, in the following statement we will denote by \(\mathrm{Y}\) the ambient space, while the notation \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is reserved to the metric measure space associated to the closure of the domain \(\Omega\subset\mathrm{Y}\). **Theorem 3.2**.: _Let \((\mathrm{Y},\tilde{\mathsf{d}},\mathcal{H}^{N})\) be an \(\mathrm{RCD}(K,N)\) space and let \(\Omega\subset\mathrm{Y}\) be a uniform domain. Then the metric measure space \((\mathrm{X},\mathsf{d},\mathfrak{m})\coloneqq(\overline{\Omega},\tilde{ \mathsf{d}}_{\left|\overline{\Omega}\right.}\,\mathcal{H}^{N}{}_{\left| \overline{\Omega}\right.})\) is an infinitesimally Hilbertian PI space and satisfies the following properties:_ 1. _there exists a constant_ \(c>0\) _such that_ \[\frac{\mathfrak{m}(B^{\mathrm{X}}_{r}(x))}{\mathfrak{m}(B^{\mathrm{X}}_{R}(x) )}\geq c\left(\frac{r}{R}\right)^{N},\quad\forall\,x\in\mathrm{X},\,\forall \,0<r<R.\] 2. 
_For every_ \(\varepsilon>0\) _there exists a closed set_ \(C_{\varepsilon}\subset\mathrm{X}\) _with_ \(\mathfrak{m}(C_{\varepsilon})=0\) _such that for every_ \(x\in\mathrm{X}\setminus C_{\varepsilon}\) _there exists a constant_ \(\rho=\rho(x,N,\varepsilon)>0\) _satisfying_ \[\mathrm{Per}_{\mathrm{X}}(E)\geq(1-\varepsilon)N\omega_{N}^{\frac{1}{N}} \mathfrak{m}(E)^{\frac{N-1}{N}},\quad\forall\,E\subset B^{\mathrm{X}}_{\rho}( x)\,\text{ Borel,}\] (3.1) _where_ \(\mathrm{Per}_{\mathrm{X}}\) _and_ \(B^{\mathrm{X}}_{\rho}(x)\) _are respectively the perimeter and the metric ball in the space_ \((\mathrm{X},\mathsf{d},\mathfrak{m})\)_._ 3. _denoted by_ \(\{\lambda_{k}\}_{k}\) _the spectrum of the Laplacian in_ \((\mathrm{X},\mathsf{d},\mathfrak{m})\) _(recall (_2.21_)) it holds_ \(\lambda_{k}=\lambda_{k}^{\mathcal{N}}(\Omega)\) _for all_ \(k\in\mathbb{N}\) _and_ \[\varlimsup_{k\to+\infty}\frac{\lambda_{k}^{N/2}}{k}\leq\frac{(2\pi)^{N}}{ \omega_{N}\mathfrak{m}(\mathrm{X})}.\] (3.2) **Remark 3.3** (The 'bad' set \(C_{\varepsilon}\)).: If the ambient space \((\mathrm{Y},\tilde{\mathsf{d}},\mathcal{H}^{N})\) is the Euclidean space then item \(ii)\) in Theorem 3.2 is immediate by taking \(C_{\varepsilon}=\partial\Omega\) and by the isoperimetric inequality (indeed \(\partial\Omega\) is negligible as we will show in Lemma 3.6). More generally if \((\mathrm{Y},\tilde{\mathsf{d}},\mathcal{H}^{N})\) is a Riemannian manifold with \(\tilde{\mathsf{d}}\) the geodesic distance then item \(ii)\) follows taking again \(C_{\varepsilon}=\partial\Omega\) and applying the local almost-Euclidean isoperimetric inequality in [17, Appendice C]. In the general case, as will be shown in the proof, the set \(C_{\varepsilon}\) in Theorem 3.2 can be taken to be \[C_{\varepsilon}\coloneqq\{x\in\overline{\Omega}\ :\ \theta_{N}(x)\leq 1- \varepsilon\}\cup\partial\Omega,\] where \(\theta_{N}:\mathrm{X}\to(0,+\infty]\) is the Bishop-Gromov density function defined in (2.34). In other words the set \(C_{\varepsilon}\) contains \(\partial\Omega\) plus a subset of the singular points, the latter being the points where \(\theta_{N}<1\). Note that \(\theta_{N}\equiv 1\) if \((\mathrm{Y},\tilde{\mathsf{d}},\mathcal{H}^{N})\) is a Riemannian manifold, so that \(C_{\varepsilon}\) reduces to \(\partial\Omega\), in accordance to what we said above. \(\blacksquare\) We start with some basic properties of uniform domains. **Definition 3.4** (Corkscrew-condition).: A bounded open subset \(\Omega\) of a metric space \((\mathrm{X},\mathsf{d})\) satisfies the corkscrew-condition if there exists a constant \(\varepsilon>0\) such that for every point \(x\in\overline{\Omega}\) and all \(0<r\leq\mathrm{diam}(\Omega),\) the set \(\Omega\cap B_{r}(x)\) contains a ball of radius \(\varepsilon r.\) In Definition 3.4 it is equivalent to require only \(x\in\partial\Omega.\) For the proof of the following well known fact see e.g. [22, Lemma 4.2]. **Lemma 3.5**.: _Every uniform domain satisfies the corkscrew-condition._ **Lemma 3.6**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a uniformly locally doubling metric measure space and let \(\Omega\subset\mathrm{X}\) satisfy the corkscrew-condition. Then \(\mathfrak{m}(\partial\Omega)=0.\) In particular this holds if \(\Omega\subset\mathrm{X}\) is a uniform domain._ Proof.: Thanks to the corkscrew-condition there exists a constant \(\varepsilon>0\) such that for every \(x\in\partial\Omega\) and \(r>0\) there exists \(B_{\varepsilon r}(y)\subset B_{r}(x)\cap\Omega\). 
Then by the uniformly locally doubling assumption we have \[\mathfrak{m}(B_{r}(x)\cap\Omega)\geq\mathfrak{m}(B_{\varepsilon r}(y))\geq C_{\varepsilon}\mathfrak{m}(B_{2r}(y))\geq C_{\varepsilon}\mathfrak{m}(B_{r}(x)),\quad\forall r\in(0,1), \tag{3.3}\] where \(C_{\varepsilon}>0\) is a constant depending only on \(\varepsilon\). Therefore no point \(x\in\partial\Omega\) can be a one-density point for \(\partial\Omega\). The conclusion follows by the Lebesgue differentiation theorem for locally doubling metric measure spaces (see e.g. [57, Section 3.4]). The result applies to uniform domains, since by Lemma 3.5 they satisfy the corkscrew-condition. The following lemma gives a lower bound on the measure of balls in domains satisfying the corkscrew-condition. **Lemma 3.7**.: _Fix \(s>0\). Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a metric measure space such that for every \(R_{0}>0\) there exists a constant \(c_{0}>0\) such that_ \[\frac{\mathfrak{m}(B_{r}(x))}{\mathfrak{m}(B_{R}(x))}\geq c_{0}\left(\frac{r}{R}\right)^{s},\quad\forall x\in\mathrm{X},\ \forall 0<r<R\leq R_{0}. \tag{3.4}\] _Then for every bounded domain \(\Omega\subset\mathrm{X}\) satisfying the corkscrew-condition there exists a constant \(C>0\) such that_ \[\frac{\mathfrak{m}(B_{r}(x)\cap\Omega)}{\mathfrak{m}(B_{R}(x)\cap\Omega)}\geq C\left(\frac{r}{R}\right)^{s},\quad\forall x\in\overline{\Omega},\ \forall 0<r<R. \tag{3.5}\] _In particular this holds if \(\Omega\subset\mathrm{X}\) is a uniform domain._ Proof.: Taking \(R=2r\) in (3.4) shows that \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is uniformly locally doubling. Then as in (3.3) we have the existence of a constant \(\tilde{C}\) such that for every \(x\in\overline{\Omega}\) \[\mathfrak{m}(B_{r}(x)\cap\Omega)\geq\tilde{C}\mathfrak{m}(B_{r}(x))\stackrel{(3.4)}{\geq}c_{0}\tilde{C}\mathfrak{m}(B_{R}(x))\left(\frac{r}{R}\right)^{s}\geq c_{0}\tilde{C}\mathfrak{m}(B_{R}(x)\cap\Omega)\left(\frac{r}{R}\right)^{s},\quad\forall 0<r<R\leq\mathrm{diam}(\Omega).\] This proves (3.5) for \(0<r<R\leq\mathrm{diam}(\Omega)\). If instead \(r>\mathrm{diam}(\Omega)\) we have \[\frac{\mathfrak{m}(B_{r}(x)\cap\Omega)}{\mathfrak{m}(B_{R}(x)\cap\Omega)}=\frac{\mathfrak{m}(\Omega)}{\mathfrak{m}(\Omega)}=1\geq\frac{r^{s}}{R^{s}},\quad\forall x\in\overline{\Omega},\quad\forall R\geq r,\] which shows (3.5) also for \(\mathrm{diam}(\Omega)<r<R\). Finally, if \(r\leq\mathrm{diam}(\Omega)<R\) we argue as in the first case with \(R\) replaced by \(2\,\mathrm{diam}(\Omega)\): since \(\Omega\subset B_{2\mathrm{diam}(\Omega)}(x)\) and \(\Omega\subset B_{R}(x)\) for every \(x\in\overline{\Omega}\), we get \[\mathfrak{m}(B_{r}(x)\cap\Omega)\geq c_{0}\tilde{C}\,\mathfrak{m}(B_{2\mathrm{diam}(\Omega)}(x)\cap\Omega)\left(\frac{r}{2\,\mathrm{diam}(\Omega)}\right)^{s}=c_{0}\tilde{C}\,\mathfrak{m}(B_{R}(x)\cap\Omega)\left(\frac{r}{2\,\mathrm{diam}(\Omega)}\right)^{s}\geq 2^{-s}c_{0}\tilde{C}\,\mathfrak{m}(B_{R}(x)\cap\Omega)\left(\frac{r}{R}\right)^{s},\] where now \(c_{0}\) is the constant in (3.4) corresponding to \(R_{0}=2\,\mathrm{diam}(\Omega)\) and we used \(2\,\mathrm{diam}(\Omega)\leq 2R\). This concludes the proof. The next step is to show that the Sobolev space on a uniform domain coincides with the Sobolev space on its closure. In what follows, given a m.m. space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) and an open subset \(\Omega\subset\mathrm{X},\) we denote by \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) the m.m. 
space obtained by endowing \(\overline{\Omega}\) with the restriction distance \(\mathsf{d}_{\left|\overline{\Omega}\right.}\coloneqq\mathsf{d}_{\left|\overline{\Omega}\times\overline{\Omega}\right.}\) and the measure \(\mathfrak{m}_{\left|\overline{\Omega}\right.}\) obtained by restricting \(\mathfrak{m}\) to the induced Borel \(\sigma\)-algebra on \(\overline{\Omega}.\) Note that by definition \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) is a complete and separable metric measure space with \(\mathrm{supp}(\mathfrak{m}_{\left|\overline{\Omega}\right.})=\overline{\Omega}.\) Moreover for a function \(u\in L^{2}(\overline{\Omega},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) we will denote by \(u_{\left|\Omega\right.}\in L^{2}(\Omega,\mathfrak{m})\) the function which agrees \(\mathfrak{m}\)-a.e. with \(u\) in \(\Omega\).

**Theorem 3.8** (Equivalence between \(W^{1,2}(\Omega)\) and \(W^{1,2}(\overline{\Omega})\)).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space and let \(\Omega\subset\mathrm{X}\) be a uniform domain. Then a function \(u\in L^{2}(\overline{\Omega},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) belongs to \(W^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) if and only if \(u_{\left|\Omega\right.}\in W^{1,2}(\Omega)\), and the restriction map_ \[\Phi:W^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\to W^{1,2}(\Omega),\qquad\Phi(u)\coloneqq u_{\left|\Omega\right.},\] _is a bijective isometry. Moreover, if \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is infinitesimally Hilbertian, then for every \(u,v\in W^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) it holds_ \[\int_{\overline{\Omega}}\nabla u\cdot\nabla v\,\mathrm{d}\mathfrak{m}=\int_{\Omega}\nabla(u_{\left|\Omega\right.})\cdot\nabla(v_{\left|\Omega\right.})\,\mathrm{d}\mathfrak{m}. \tag{3.6}\]

**Theorem 3.9**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space and let \(\Omega\subset\mathrm{X}\) be a uniform domain. Then \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) is a PI space. If moreover \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is infinitesimally Hilbertian, then so is \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\)._

**Corollary 3.10**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian PI space and let \(\Omega\subset\mathrm{X}\) be a uniform domain. Then the following hold:_
* \(i)\) _a function \(u\in W^{1,2}(\Omega)\) is an eigenfunction of the Neumann Laplacian in \(\Omega\) of eigenvalue \(\lambda\) if and only if \(u=\tilde{u}_{\left|\Omega\right.}\) for some eigenfunction \(\tilde{u}\) of the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) with the same eigenvalue \(\lambda\); in particular every Neumann eigenfunction in \(\Omega\) admits a continuous representative;_
* \(ii)\) _the embedding \(\mathit{W}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)\) is compact and in particular the Neumann Laplacian in \(\Omega\) has a discrete spectrum \(\{\lambda_{k}^{\mathcal{N}}(\Omega)\}_{k\in\mathbb{N}}\) (counted with multiplicity) satisfying_ \[0=\lambda_{1}^{\mathcal{N}}(\Omega)\leq\lambda_{2}^{\mathcal{N}}(\Omega)\leq\ldots\leq\lambda_{k}^{\mathcal{N}}(\Omega)\leq\cdots\to+\infty;\]
* \(iii)\) _denoted by \(\{\lambda_{k}\}_{k}\) the spectrum of the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\), it holds that_ \[\lambda_{k}^{\mathcal{N}}(\Omega)=\lambda_{k},\quad\forall k\in\mathbb{N}.\]

Proof.: First observe that the statement makes sense since by Theorem 3.9 we have that \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) is a bounded infinitesimally Hilbertian PI space and so by (2.21) we have that the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) has a discrete spectrum \(\{\lambda_{k}\}_{k}.\) It is sufficient to show \(i)\) and \(ii),\) because then \(iii)\) would follow from the definitions. 
Suppose that \(u\in\left.\mathit{W}^{1,2}(\Omega)\right.\) is an eigenfunction for the Neumann Laplacian in \(\Omega\) of eigenvalue \(\lambda\). Then by Theorem 3.8 there exists \(\tilde{u}\in\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{\left| \overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) such that \(\tilde{u}\right|_{\Omega}=u.\) Moreover for every \(v\in\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{ \Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\), again by Theorem 3.8, we have that \(\left.v\right|_{\Omega}\in\left.\mathit{W}^{1,2}(\Omega)\right.\). Then applying (3.6) we obtain \[\int_{\overline{\Omega}}\nabla\tilde{u}\cdot\nabla v\,\mathrm{d}\mathfrak{m}= \int_{\Omega}\nabla u\cdot\nabla v|_{\Omega}\,\mathrm{d}\mathfrak{m}=-\lambda \int_{\Omega}uv\rfloor_{\Omega}\mathrm{d}\mathfrak{m}=-\lambda\int_{\overline {\Omega}}\tilde{u}v\mathrm{d}\mathfrak{m},\] where in the second identity we used the definition of eigenfunction and in the last one that \(\mathfrak{m}(\partial\Omega)=0,\) because \(\Omega\) is a uniform domain. This shows that \(\tilde{u}\) is an eigenfunction of eigenvalue \(\lambda\) for the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.},\mathfrak{m}_{ \left|\overline{\Omega}\right.})\). Then by Theorem 2.17\(\tilde{u}\) has a Holder continuous representative in \(\overline{\Omega}\), which implies that \(u\) has also a continuous representative in \(\Omega\). Conversely suppose that \(\tilde{u}\) is an eigenfunction of eigenvalue \(\lambda\) for the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right.}\mathfrak{m}_{ \left|\overline{\Omega}\right.})\). Then by Theorem 3.8 we have \(\tilde{u}\right|_{\Omega}\in\left.\mathit{W}^{1,2}(\Omega)\). Moreover, again by Theorem 3.8, for every \(v\in\left.\mathit{W}^{1,2}(\Omega)\right.\) there exists \(\tilde{v}\in\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{\left| \overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) such that \(\tilde{v}\right|_{\Omega}=v.\) Therefore as above using (3.6) \[-\lambda\int_{\Omega}\tilde{u}\rvert_{\Omega}v\mathrm{d}\mathfrak{m}=-\lambda \int_{\overline{\Omega}}\tilde{u}\tilde{v}\mathrm{d}\mathfrak{m}=\int_{ \overline{\Omega}}\nabla\tilde{u}\cdot\nabla\tilde{v}\,\mathrm{d}\mathfrak{m} =\int_{\Omega}\nabla\tilde{u}\rvert_{\Omega}\cdot\nabla v\,\mathrm{d} \mathfrak{m}.\] This shows that \(\tilde{u}\rvert_{\Omega}\) is an eigenfunction of eigenvalue \(\lambda\) for the Neumann Laplacian in \(\Omega\) and completes the proof of \(i).\) For \(ii)\) recall that by Theorem 3.8 the map \(\Phi:\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{ \Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\right.\)\(\mathit{W}^{1,2}(\Omega)\), given by \(\Phi(u)=\left.\mathit{u}\right|_{\Omega}\) is an isometry and that by Theorem 2.16 the inclusion \(\iota:\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{ \Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\right.\)\(\left.L^{2}(\overline{\Omega},\mathfrak{m}_{\left|\overline{\Omega}\right.})\right.\) is compact. 
Let now \(u_{n}\in\left.\mathit{W}^{1,2}(\Omega)\right.\) be a sequence bounded in \(\left.\mathit{W}^{1,2}(\Omega)\right.\)Then the sequence \(\Phi^{-1}(u_{n})\in\left.\mathit{W}^{1,2}(\overline{\Omega},\mathsf{d}_{ \left|\overline{\Omega}\right.},\mathfrak{m}_{\left|\overline{\Omega}\right.})\) is also bounded, hence it has a converging subsequence in \(L^{2}(\overline{\Omega},\mathfrak{m}_{\left|\overline{\Omega}\right.})\). However by definition \(\Phi^{-1}(u_{n})\right|_{\Omega}=\Phi(\Phi^{-1}(u_{n})=u_{n}\) for every \(n\). Hence \(u_{n}\) has also a converging subsequence in \(L^{2}(\Omega,\mathfrak{m}_{\left|\Omega\right.})\), which shows that the embedding \(\left.\mathit{W}^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega,\mathfrak{m}_{ \left|\Omega\right.})\right.\) is compact. This completes the proof of \(ii)\). **Remark 3.11**.: If the ambient space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is an \(\mathrm{RCD}(K,N)\) space, with \(N<+\infty\), then the Neumann and Dirichlet eigenfunctions are actually locally Lipschitz in the interior of the domain, as follows directly from [64, Theoem 1.1] (see also [7, Prop. 7.1]). Recall that the continuity of eigenfunctions is crucial to define their nodal domains. Next we show that for a uniform domain \(\Omega\) there is a one to one correspondence between the nodal domains in \(\Omega\) and the nodal domains in its closure. **Proposition 3.12** (Compatibility of nodal domains).: _Let \((\mathrm{X},\mathsf{d})\) be a metric space, \(\Omega\subset\mathrm{X}\) be a uniform domain and \(f:\overline{\Omega}\to\mathbb{R}\) be a continuous function. Denote by \(\mathcal{C}\) (resp. \(\overline{\mathcal{C}}\)) the set all the connected components of \(\Omega\setminus\{f=0\}\) (resp. \(\overline{\Omega}\setminus\{f=0\}\)). Then_ \[\overline{\mathcal{C}}=\{U\cup(\partial U\cap(\partial\Omega\setminus\{f=0\})) \ :\ U\in\mathcal{C}\}. \tag{3.7}\] _In particular the sets \(\mathcal{C}\) and \(\overline{\mathcal{C}}\) have the same cardinality._ Proof.: For every \(U\in\mathcal{C}\) we put \(\varphi(U)\coloneqq U\cup(\partial U\cap(\partial\Omega\setminus\{f=0\}))\) and we want to show that \(\varphi\) defines a bijective map \(\varphi:\mathcal{C}\to\overline{\mathcal{C}}\). We note immediately that \(\varphi\) is injective, because if \(\varphi(U)=\varphi(V)\) then \(U=\varphi(U)\cap\Omega=\varphi(V)\cap\Omega=V\). To conclude it is sufficient to prove that the sets \(\{\varphi(U)\}_{U\in\mathcal{C}}\) are open, closed and connected in the topology of \(\overline{\Omega}\setminus\{f=0\}\) and that their union is \(\overline{\Omega}\setminus\{f=0\}\). Indeed this would show that \(\{\varphi(U)\}_{U\in\mathcal{C}}\) are exactly the connected components of \(\overline{\Omega}\setminus\{f=0\}\), which would imply (3.7) and in particular that \(\varphi\) is surjective. Note first that the elements of \(\mathcal{C}\) are open in the topology of \(\Omega\setminus\{f=0\}\) (and thus also in the one of X) because \(\Omega\setminus\{f=0\}\) is locally connected (recall Lemma 2.2) since it is an open subset of \(\Omega\) which is a uniform domain and thus locally connected. The key observation is that for every \(x\in\partial\Omega\setminus\{f=0\}\) there exists \(r>0\) and \(U\in\mathcal{C}\) such that \[B_{r}(x)\cap\overline{\Omega}\subset\varphi(U)=U\cup(\partial U\cap(\partial \Omega\setminus\{f=0\})). \tag{3.8}\] To prove this note that \(f(x)\neq 0\). 
Then by continuity there exists \(r_{0}>0\) so that \(B_{r}(x)\cap\overline{\Omega}\subset\overline{\Omega}\setminus\{f=0\}\) for every \(r\in(0,r_{0}]\). In particular for every \(r\in(0,r_{0}]\) the set \(B_{r}(x)\cap\Omega\) is contained in \(\bigcup_{U\in\mathcal{C}}U\). Suppose that for some \(r>0\) the set \(B_{r}(x)\cap\Omega\) intersects at at least two distinct sets \(U_{r},V_{r}\in\mathcal{C}\) and take two points \(u_{r}\in U_{r}\cap B_{r}(x),v_{r}\in V_{r}\cap B_{r}(x)\). Since \(\Omega\) is a uniform domain there exists a rectifiable curve \(\gamma:[0,1]\to\mathcal{X}\) contained in \(\Omega\), connecting \(u_{r}\) and \(v_{r}\) and of length \(l(\gamma)\leq\mathcal{C}\text{d}(u_{r},v_{r})\leq 2Cr\), where \(C>0\) is some constant independent of \(r.\) Then \(\gamma([0,1])\) must intersect \(\{f=0\}\) otherwise \(\gamma([0,1])\subset\Omega\setminus\{f=0\}\) and \(U_{r}\cup V_{r}\cup\gamma([0,1])\) would be a connected subset of \(\Omega\setminus\{f=0\}\), which contradicts the fact that \(U_{r},V_{r}\) are distinct connected components of \(\Omega\setminus\{f=0\}.\) Therefore, since \(\gamma([0,1])\subset B_{(2C+1)r}(x)\), it holds \(B_{(2C+1)r}(x)\cap\Omega\cap\{f=0\}\neq\emptyset\) and so by the choice of \(r_{0}\) we must have \((2C+1)r>r_{0}\). This proves that for \(r\) small enough \(B_{r}(x)\cap\Omega\) intersects at most one set in \(\mathcal{C}\). However, as observed above, \(B_{r}(x)\cap\Omega\subset\bigcup_{U\in\mathcal{C}}U\) for every \(r\in(0,r_{0}]\). Therefore for \(r>0\) small enough we must have that \(B_{r}(x)\cap\Omega\subset U\) for some \(U\in\mathcal{C}.\) Fix one such \(r>0\) and fix \(y\in B_{r}(x)\cap\partial\Omega\). Then for every \(s>0\) small enough \(B_{s}(y)\cap\Omega\neq\emptyset\) and \(B_{s}(y)\cap\Omega\subset B_{r}(x)\cap\Omega\subset U\). This shows that \(y\in\partial U\). Therefore for every \(r>0\) small enough \(B_{r}(x)\cap\Omega\subset U\), \(B_{r}(x)\cap\partial\Omega\subset\partial U\) for some \(U\in\mathcal{C}\) and as observed above \(B_{r}(x)\cap\overline{\Omega}\subset\overline{\Omega}\setminus\{f=0\}\). Combining these three facts proves (3.8). Consider now any \(U\in\mathcal{C}\) and note that \(\varphi(U)\) is precisely the closure of \(U\) in the topology of \(\overline{\Omega}\setminus\{f=0\}\). Indeed \[\overline{U}\cap(\overline{\Omega}\setminus\{f=0\}) =\left(U\cap(\overline{\Omega}\setminus\{f=0\})\right)\cup\left( \partial U\cap(\overline{\Omega}\setminus\{f=0\})\right)\] \[=U\cup(\partial U\cap(\partial\Omega\setminus\{f=0\}))\cup( \partial U\cap(\Omega\setminus\{f=0\}))\] \[=U\cup(\partial U\cap(\partial\Omega\setminus\{f=0\}))=\varphi(U),\] where we used that \(U\subset\Omega\setminus\{f=0\}\) and that \(\partial U\cap(\Omega\setminus\{f=0\})=\emptyset\) because \(U\) is closed in the topology of \(\Omega\setminus\{f=0\}\), being a connected component, but also open as observed above. Moreover \(U\) is connected in \(\Omega\setminus\{f=0\}\) and thus also in \(\overline{\Omega}\setminus\{f=0\}\), hence \(\varphi(U)\) is also connected in the same topology, being the closure of a connected set. Additionally (3.8) implies that \(\varphi(U)\) is open in the topology of \(\overline{\Omega}\setminus\{f=0\}\). 
It remains to prove that the union of the sets \(\{\varphi(U)\}_{U\in\mathcal{C}}\) is \(\overline{\Omega}\setminus\{f=0\}\), which can be seen as follows \[\overline{\Omega}\setminus\{f=0\} =\left(\Omega\setminus\{f=0\}\right)\cup(\partial\Omega\setminus \{f=0\})\stackrel{{\eqref{eq:f=0}}}{{\subset}}(\Omega\setminus\{f= 0\})\cup\left(\bigcup_{U\in\mathcal{C}}\varphi(U)\right)\] \[\subset\left(\bigcup_{U\in\mathcal{C},}U\right)\cup\left(\bigcup_{U \in\mathcal{C}}\varphi(U)\right)=\bigcup_{U\in\mathcal{C}}\varphi(U).\] We pass now to prove the main result of this section. Proof of Theorem 3.2.: The fact that \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is an infinitesimally Hilbertian PI space is contained in Theorem 3.9. Recall also that by Lemma 3.6 it holds \(\mathcal{H}^{N}(\partial\Omega)=0.\) Item \((i)\) follows immediately combining (2.33) and Lemma 3.7 and recalling that \(\Omega\) is bounded. We pass to the proof of \((ii).\) Fix \(\varepsilon\in(0,1)\) arbitrary. We choose \[C_{\varepsilon}\coloneqq\{x\in\overline{\Omega}\;:\;\theta_{N}(x)\leq 1- \varepsilon\}\cup\partial\Omega\] (see (2.34) for the definition of \(\theta_{N}\)). From the lower semicontinuity of the function \(\theta_{N}(\cdot)\), it follows that \(C_{\varepsilon}\) is a closed subset of \(\overline{\Omega}\). Moreover, since \(\theta_{N}(x)=1\) for \(\mathcal{H}^{N}\)-a.e. \(x\) (recall (2.36)) and \(\mathcal{H}^{N}(\partial\Omega)=0,\) it follows \(\mathcal{H}^{N}(C_{\varepsilon})=0\). Note that by construction \(1-\varepsilon<\theta_{N}(x)\leq 1\) for all \(x\in\overline{\Omega}\setminus C_{\varepsilon}\subset\Omega\). Therefore we can apply the local almost-Euclidean isoperimetric inequality given by Theorem 2.24 (recall also Lemma 2.11) and obtain that for every \(x\in\overline{\Omega}\setminus C_{\varepsilon}\) and every \(\varepsilon\in(0,1/4)\) there exists \(\rho=\rho(x,N,\varepsilon)<\tilde{\mathsf{d}}(x,\Omega^{c})\) such that \[\mathrm{Per}_{\mathrm{X}}(E)\geq\mathfrak{m}(E)^{\frac{N-1}{N}}N\omega_{N}^{ \frac{1}{N}}(1-2\varepsilon)^{\frac{1}{N}}(1-\varepsilon),\quad\forall\,E \subset B_{\rho}(x)=B_{\rho}^{\mathrm{X}}(x)\text{ Borel},\] which shows (3.1). It remains to show \((iii)\). Denote by \(\lambda_{k}\), \(k\in\mathbb{N}\) the spectrum of the Laplacian in \((\mathrm{X},\mathsf{d},\mathfrak{m})\) (counted with multiplicity and in non-decreasing order). Recall that by Corollary 3.10\(\lambda_{k}=\lambda_{k}^{\mathcal{N}}(\Omega)\) for every \(k\in\mathbb{N}\), where \(\lambda_{k}^{\mathcal{N}}(\Omega)\) is the \(k\)-th Neumann eigenvalue of \(\Omega\) (in non-decreasing order). Recalling Lemma 2.8 we know that \(\lambda_{k}^{\mathcal{N}}(\Omega)\leq\lambda_{k}^{\mathcal{D}}(\Omega)\) for every \(k\in\mathbb{N}\), where \(\lambda_{k}^{\mathcal{D}}(\Omega)\) is the \(k\)-th Dirichlet Laplacian eigenvalue of \(\Omega.\) Then (3.2) follows from the Weyl law for the Dirichlet Laplacian (see Theorem 2.27): \[\varlimsup_{k\to+\infty}\frac{\lambda_{k}^{N/2}}{k}=\varlimsup_{k\to+\infty} \frac{(\lambda_{k}^{\mathcal{N}}(\Omega))^{N/2}}{k}\leq\lim_{k\to+\infty}\frac {(\lambda_{k}^{\mathcal{D}}(\Omega))^{N/2}}{k}=\frac{(2\pi)^{N}}{\omega_{n} \mathcal{H}^{N}(\Omega)}=\frac{(2\pi)^{N}}{\omega_{n}\mathfrak{m}(\mathrm{X})},\] having used again \(\mathfrak{m}(\partial\Omega)=0.\) ## 4 From local to global isoperimetric inequality In this section we prove the following crucial result. 
Informally speaking, it says that in a PI space satisfying an almost-Euclidean isoperimetric inequality around almost-every point, the same isoperimetric inequality extends to all sets having sufficiently small volume and avoiding a 'bad' but small region of the space. **Theorem 4.1**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space and fix \(N>1\). Suppose that for every \(\varepsilon>0\) there exists a closed set \(C_{\varepsilon}\subset\mathrm{X}\) with \(\mathfrak{m}(C_{\varepsilon})=0\) such that for every \(x\in\mathrm{X}\setminus C_{\varepsilon}\) there exists a constant \(\rho=\rho(x,N,\varepsilon)>0\) satisfying_ \[\mathrm{Per}(E)\geq(1-\varepsilon)N\omega_{N}^{\frac{1}{N}}\mathfrak{m}(E)^{ \frac{N-1}{N}},\quad\forall\,E\subset B_{\rho}(x)\text{ Borel}. \tag{4.1}\] _Then for every \(\varepsilon\in(0,1)\) and \(\eta>0\) there exists an open set \(U_{\varepsilon,\eta}\subset\mathrm{X}\) with \(\mathfrak{m}(U_{\varepsilon,\eta})<\eta\) and constants \(\beta=\beta(\mathrm{X},\varepsilon,N,\eta)>0\), \(\beta^{\prime}=\beta^{\prime}(\varepsilon)>0\) such that_ \[\mathrm{Per}(E)\geq(1-\varepsilon)N\omega_{N}^{\frac{1}{N}}\mathfrak{m}(E)^{ \frac{N-1}{N}}, \tag{4.2}\] _for every \(E\subset\mathrm{X}\) Borel satisfying_ \[0<\mathfrak{m}(E)\leq\beta,\quad\frac{\mathfrak{m}(E\cap U_{\varepsilon,\eta}) }{\mathfrak{m}(E)}\leq\beta^{\prime}.\] Observe that assumption (4.1) is the same as item \(ii)\) in Theorem 3.2. The proof of Theorem 4.1 takes inspiration from the arguments in Appendix \(C\) of [17] in the smooth setting, but requires also to deal with the technical issues arising from working in a non-smooth metric space. We start with an estimate for the perimeter of the complement of the union of a finite number of balls. **Lemma 4.2**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space. Suppose that \(B_{i}\coloneqq B_{r_{i}}(x_{i})\subset\mathrm{X}\), \(i=1,...,k\), \(k\in\mathbb{N},\) have all finite perimeter and satisfy \(\mathrm{Per}(B_{i},\partial^{e}B_{j})=0\) for all \(i\neq j\). Then_ \[\mathrm{Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c},.)\leq\sum_{i=1}^{k }\mathrm{Per}(B_{i},.)_{|B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c}}. \tag{4.3}\] Proof.: We argue by induction on \(k\). By (2.22) we have that for every ball \(B\subset\mathrm{X}\) of finite perimeter \[\mathrm{Per}(B^{c},.)=\mathrm{Per}(B,.)=\mathrm{Per}(B,.)_{|B^{c}},\] which shows that the statement holds for \(k=1\). Suppose that the statement is true for some \(k\in\mathbb{N}\) and let \(B_{i}\coloneqq B_{r_{i}}(x_{i})\subset\mathrm{X}\), \(i=1,...,k+1\) be as in the statement. By a repeated application of \((vi)\) in Lemma 2.10 we get \[\partial^{e}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c})\subset\partial^{e }B_{i}\cup...\cup\partial^{e}B_{k}. \tag{4.4}\] Using (4.4) and the assumption \(\mathrm{Per}(B_{k+1},\partial^{e}B_{j})=0\) for all \(j\neq k+1\) gives \[\mathrm{Per}(B_{k+1},\partial^{e}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c }))=0.\] Hence we can apply Proposition 2.19 and obtain \[\mathrm{Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k+1}^{c},.)\leq\mathrm{Per }(B_{k+1},.)_{|(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c})^{(1)}}+\mathrm{ Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c},.)_{|(B_{k+1}^{c})^{(1)}}. 
\tag{4.5}\] Since balls are open sets, by using \((iii)\) and \((iv)\) in Lemma 2.10 it holds \[(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c})^{(1)}\subset B_{1}^{c}\cap B_{ 2}^{c}\cap...\cap B_{k}^{c},\] that combined with (4.5) and the induction hypothesis gives \[\mathrm{Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k+1}^{c},.) \leq\mathrm{Per}(B_{k+1},.)_{|B_{1}^{c}\cap B_{2}^{c}\cap...\cap B _{k}^{c}}+\mathrm{Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c},.)_{|(B_{ k+1}^{c})^{(1)}}\] \[\leq\mathrm{Per}(B_{k+1},.)_{|B_{1}^{c}\cap B_{2}^{c}\cap...\cap B _{k}^{c}}+\mathrm{Per}(B_{1}^{c}\cap B_{2}^{c}\cap...\cap B_{k}^{c},.)|_{B_{ k+1}^{c}}\] \[\leq\mathrm{Per}(B_{k+1},.)_{|B_{1}^{c}\cap B_{2}^{c}\cap...\cap B _{k}^{c}\cap B_{k+1}^{c}}+\sum_{i=1}^{k}\mathrm{Per}(B_{i},.)_{|B_{1}^{c}\cap B _{2}^{c}\cap...\cap B_{k}^{c}\cap B_{k+1}^{c}},\] where in the second line we used that \((B_{k+1}^{c})^{(1)}\subset B_{k+1}^{c}\) (recall \((iv)\) in Lemma 2.10) and in the last line that \(\mathrm{Per}(B_{k+1},.)=\mathrm{Per}(B_{k+1},.)_{|B_{k+1}^{c}}\) (recall (2.22)) for the first term and the induction hypothesis for the second term. This concludes the proof. Combining the above estimate with a covering argument we can prove the following proposition, from which Theorem 4.1 will easily follow. **Proposition 4.3** (From local-to-global isoperimetric inequality).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a PI space. Suppose there exist constants \(\lambda>0\), \(\alpha\in(0,1]\) and a compact set \(\mathsf{K}\subset\mathrm{X}\) such that for all \(x\in\mathsf{K}\) there exists \(\rho(x)>0\) so that_ \[\mathrm{Per}(E)\geq\lambda\mathfrak{m}(E)^{\alpha},\quad\forall\,E\subset B_ {\rho(x)}(x)\,\text{ Borel}. \tag{4.6}\] _Then there exists a constant \(C=C(\mathsf{K},\alpha,\lambda)\) such that_ \[\mathrm{Per}(V)\geq\lambda\mathfrak{m}(V\cap\mathsf{K})^{\alpha}-C\mathfrak{m} (V),\quad\forall\,V\subset\mathrm{X}\text{ Borel}. \tag{4.7}\] Proof.: We start by extracting once and for all a finite covering \(\mathsf{K}\subset\cup_{i=1}^{M}B_{\frac{\rho(x_{i})}{2}}(x_{i})\), with \(x_{i}\in\mathsf{K}\), and we set \(\overline{\rho}\coloneqq\min_{i}\rho(x_{i})>0\). It is enough to prove (4.7) for sets \(V\) of finite perimeter. Fix one such set \(V.\) We claim that there exist \(r_{1},\ldots,r_{M}\), with \(r_{i}\in\left(\frac{\rho(x_{i})}{2},\rho(x_{i})\right)\) such that the following hold: * \(B_{r_{i}}(x_{i})\) has finite perimeter, for every \(i=1,...,M,\) * \(\operatorname{Per}(V,\partial B_{r_{i}}(x_{i}))=0,\) for every \(i=1,...,M,\) * \(\operatorname{Per}(B_{r_{i}}(x_{i}),V^{(1)})\leq\frac{3\mathfrak{m}(V)}{\rho(x_ {i})},\) for every \(i=1,...,M,\) * \(\operatorname{Per}(B_{r_{i}}(x_{i}),\partial B_{r_{j}}(x_{j}))=0,\) for every \(i,j=1,...,M\) with \(i\neq j.\) It is sufficient to prove that: * for any \(i\in\{1,...,M\},\) there exists \(A_{i}\subset\left(\frac{\rho(x_{i})}{2},\rho(x_{i})\right)\) with \(\mathcal{H}^{1}(A_{i})>0\) such that a),b) and c) holds for every \(r_{i}\in A_{i},\) * for every \(r_{i}>0\) such that \(B_{r_{i}}(x_{i})\) has finite perimeter, d) holds for every \(j\neq i\) and for a.e. \(r_{j}>0.\) Indeed if these were true, up to removing from each \(A_{i}\) a set of measure zero, we would have that every choice \((r_{1},\dots,r_{M})\in A_{i}\times\dots,\times A_{M}\) satisfies all a),b), c) and d). We start proving (i). Fix \(i\in\{1,...,M\}\). From Proposition 2.9 we have that \(B_{r}(x_{i})\) has finite perimeter for a.e. \(r>0\) (i.e. a) holds for a.e. \(r_{i}>0\)). 
Moreover since \(\operatorname{Per}(V,.)\) is a finite measure and \(\{\partial B_{r}(x_{i})\}_{r>0}\) are pairwise disjoint sets, we must have that \(\operatorname{Per}(V,\partial B_{r_{i}}(x_{i}))=0\) for a.e. \(r_{i}>0\) (i.e. b) holds for a.e. \(r_{i}>0\)). By applying Proposition 2.9 with \(R\coloneqq\rho(x_{i})\) we get \[\int_{\frac{\rho(x_{i})}{2}}^{\rho(x_{i})}\operatorname{Per}(B_{r}(x_{i}),V^{ (1)})\,\mathrm{d}r\leq\int_{0}^{\rho(x_{i})}\operatorname{Per}(B_{r}(x_{i}),V ^{(1)})\,\mathrm{d}r\leq\mathfrak{m}(B_{\rho(x_{i})}(x_{i})\cap V^{(1)})= \mathfrak{m}(B_{\rho(x_{i})}(x_{i})\cap V),\] and by the Markov inequality \[\mathcal{H}^{1}\left(\left\{r\in\left(\frac{\rho(x_{i})}{2},\rho(x_{i})\right) :\ \operatorname{Per}(B_{r}(x_{i}),V^{(1)})>3\mathfrak{m}(V\cap B_{\rho(x_{i})}(x_ {i}))/\rho(x_{i})\right\}\right)\leq\frac{\rho(x_{i})}{3},\] which shows that the set \[\left\{r\in\left(\frac{\rho(x_{i})}{2},\rho(x_{i})\right):\ \operatorname{Per}(B_{r}(x_{ i}),V^{(1)})\leq 3\mathfrak{m}(V\cap B_{\rho(x_{i})}(x_{i}))/\rho(x_{i})\right\}\] has positive \(\mathcal{H}^{1}\)-measure, i.e. c) holds for every \(r_{i}\) in a subset of \(\left(\frac{\rho(x_{i})}{2},\rho(x_{i})\right)\) of positive \(\mathcal{H}^{1}\)-measure. Combining all the above observations gives (i). To show (ii) fix \(i,j\in\{1,...,M\},\)\(i\neq j\) and \(r_{i}>0\) such that \(B_{r_{i}}(x_{i})\) has finite perimeter. As above \(\{\partial B_{r}(x_{j})\}_{r>0}\) are pairwise disjoint sets, therefore we must have \(\operatorname{Per}(B_{r_{i}}(x_{i}),\partial B_{r}(x_{j}))=0\) for a.e. \(r>0.\) This proves (ii) and completes the proof of the claim. From now on we assume to have fixed \(r_{1},\dots r_{M}\) such that a), b), c) and d) above hold (note that this choice might depend on the set \(V\)) and we set \(B_{i}\coloneqq B_{r_{i}}(x_{i}).\) By construction \(K\subset\cup_{i=1}^{M}B_{i}.\) Consider the pairwise disjoints sets \(\{U_{i}\}_{i=1}^{M}\) defined inductively as follows: \[U_{1}\coloneqq B_{1},\quad U_{i}\coloneqq B_{i}\cap(B_{i-1}^{c}\cap...\cap B_ {1}^{c}),\quad\forall\,i=2,...,M.\] Clearly \(\{U_{i}\}_{i=1}^{M}\) is a family of disjoint Borel sets which is a covering of \(K\). We claim that \[\operatorname{Per}(V,\partial^{e}U_{i})=0,\quad\forall\,i=1,...,M. \tag{4.8}\] Indeed, from \((vi)\) and \((ii)\) of Lemma 2.10 one infers that \[\partial^{e}U_{i}\subset\partial^{e}B_{1}\cup\partial^{e}(B_{2}^{c})\cup... \cup\partial^{e}(B_{i-1}^{c})\subset\partial B_{1}\cup...\cup\partial B_{i-1}.\] From this (4.8) follows recalling b). Thanks to (4.8) we are in position to apply Proposition 2.19 to deduce that \[\operatorname{Per}(V\cap U_{i})\leq\operatorname{Per}(V,U_{i}^{(1)})+ \operatorname{Per}(U_{i},V^{(1)}),\quad\forall\,i=1,...,M. \tag{4.9}\] The goal is now to give an upper bound on each term on the right-hand side of (4.9). Since \(U_{i}\) are pairwise disjoint by construction, by point \((v)\) of Lemma 2.10 it follows that also the sets \(U_{i}^{(1)}\) are pairwise disjoint, hence \[\sum_{i=1}^{M}\operatorname{Per}(V,U_{i}^{(1)})\leq\operatorname{Per}(V). \tag{4.10}\] To estimate \(\operatorname{Per}(U_{i},V^{(1)})\) we note that from d) it holds \[\operatorname{Per}(B_{i},\partial^{e}(B_{1}^{c}\cap...\cap B_{i-1}^{c}))=0,\] indeed \(\partial^{e}(B_{1}^{c}\cap...\cap B_{i-1}^{c})\subset\partial B_{1}\cup... \cup\partial B_{i-1}\) (recall \((ii)\) in Lemma 2.10). 
Hence, recalling that by construction \(U_{i}=B_{i}\cap(B_{i-1}^{c}\cap...\cap B_{1}^{c})\), we can apply again Proposition 2.19 to get \[\operatorname{Per}(U_{i},V^{(1)})\leq\operatorname{Per}(B_{i},V^{(1)})+ \operatorname{Per}(B_{1}^{c}\cap...\cap B_{i-1}^{c},V^{(1)})|_{B_{i}^{(1)}}.\] From this and Lemma 4.2 \[\operatorname{Per}(U_{i},V^{(1)}) \stackrel{{\eqref{eq:V_i}}}{{\leq}}\operatorname{Per }(B_{i},V^{(1)})+\sum_{j=1}^{i-1}\operatorname{Per}(B_{j},V^{(1)})|_{B_{i}^{(1) }\cap B_{1}^{c}\cap...\cap B_{i-1}^{c}}\] \[\leq\operatorname{Per}(B_{i},V^{(1)})+\sum_{j=1}^{i-1} \operatorname{Per}(B_{j},V^{(1)})|_{B_{i}\cap B_{1}^{c}\cap...\cap B_{i-1}^{c}}\] \[=\operatorname{Per}(B_{i},V^{(1)})+\sum_{j=1}^{i-1} \operatorname{Per}(B_{j},V^{(1)})|_{U_{i}},\] where in the second line we used that \(B_{i}^{(1)}\subset B_{i}\cup\partial B_{i}\) and that \(\operatorname{Per}(B_{j},\partial B_{i})=0\) for all \(j\neq i\), thanks to d). Summing in \(i\) and recalling that \(U_{i}\) are disjoint \[\sum_{i=1}^{M}\operatorname{Per}(U_{i},V^{(1)}) \leq\sum_{i=1}^{M}\operatorname{Per}(B_{i},V^{(1)})+\sum_{i=1}^{ M}\sum_{j=1}^{i-1}\operatorname{Per}(B_{j},V^{(1)})|_{U_{i}}\] \[\leq\sum_{i=1}^{M}\operatorname{Per}(B_{i},V^{(1)})+\sum_{j=1}^{ M}\sum_{i=1}^{M}\operatorname{Per}(B_{j},V^{(1)})|_{U_{i}} \tag{4.11}\] \[\leq 2\sum_{i=1}^{M}\operatorname{Per}(B_{i},V^{(1)})\stackrel{{ c}}{{\leq}}6\sum_{i=1}^{M}\frac{\mathfrak{m}(V)}{\rho(x_{i})}\leq 6M \frac{\mathfrak{m}(V)}{\bar{\rho}}.\] Combining (4.11), (4.10) and (4.9) we get \[\sum_{i=1}^{M}\operatorname{Per}(V\cap U_{i})\leq\operatorname{Per}(V)+6M \frac{\mathfrak{m}(V)}{\bar{\rho}}. \tag{4.12}\] On the other hand \(V\cap U_{i}\subset B_{i}\subset B_{\rho(x_{i})}(x_{i})\), hence by assumption (4.6) \[\sum_{i=1}^{M}\operatorname{Per}(V\cap U_{i})\geq\lambda\sum_{i=1}^{M} \mathfrak{m}(V\cap U_{i})^{\alpha}\geq\lambda\left(\sum_{i=1}^{M}\mathfrak{m} (V\cap U_{i})\right)^{\alpha}\geq\lambda\mathfrak{m}(V\cap\mathsf{K})^{ \alpha},\] since the function \(x\mapsto x^{\alpha}\) is subadditive and the sets \(\{U_{i}\}_{i=1}^{M}\) cover \(\mathsf{K}\). This combined with (4.12) yields \[\operatorname{Per}(V)\geq\lambda\mathfrak{m}(V\cap\mathsf{K})^{\alpha}-6M\frac {\mathfrak{m}(V)}{\bar{\rho}}.\] The constants \(M\) and \(\bar{\rho}\) depend only on the initial choice of the covering \(B_{\frac{\rho(x_{i})}{2}}(x_{i})\) and thus depend only on \(K\), \(\alpha\) and \(\lambda\), (and not on \(V\)). This concludes the proof. An application of Proposition 4.3 yields immediately the main result of this section. Proof of Theorem 4.1.: Fix \(\varepsilon\in(0,1)\) and \(\eta>0\). Let \(C_{\varepsilon/2}\subset\mathrm{X}\) be as in the statement. Since \(C_{\varepsilon/2}\) is closed, by upper regularity there exists an open set \(U_{\varepsilon,\eta}\) containing \(C_{\varepsilon/2}\) and such that \(\mathfrak{m}(U_{\varepsilon,\eta})<\eta\). Set \(\mathsf{K}_{\varepsilon,\eta}\coloneqq\mathrm{X}\setminus U_{\varepsilon,\eta}\), which is compact (because it is closed and bounded and \(\mathrm{X}\) is a proper, being a PI space). In particular (4.1) holds also for every \(x\in K_{\varepsilon,\eta}\). 
Therefore the hypotheses of Proposition 4.3 are satisfied with \(K=K_{\varepsilon,\eta}\), \(\alpha=\frac{N-1}{N}\) and \(\lambda=(1-\varepsilon)N\omega_{N}^{\frac{1}{N}}\) and we deduce that there exists a constant \(C\) depending only on \(K_{\varepsilon,\eta},N\) and \(\varepsilon\) (and thus only on \(\mathrm{X}\), \(\varepsilon\), \(N\) and \(\eta\)) such that \[\mathrm{Per}(E) \geq(1-\varepsilon/2)N\omega_{N}^{\frac{1}{N}}\mathfrak{m}(E \cap\mathsf{K}_{\varepsilon,\eta})^{\frac{N-1}{N}}-C\mathfrak{m}(E)\] \[=(1-\varepsilon/2)N\omega_{N}^{\frac{1}{N}}\mathfrak{m}(E)^{ \frac{N-1}{N}}\left[\left(1-\frac{\mathfrak{m}(E\cap U_{\varepsilon,\eta})}{ \mathfrak{m}(E)}\right)^{\frac{N-1}{N}}-\hat{C}\mathfrak{m}(E)^{\frac{1}{N}}\right]\] \[\geq(1-\varepsilon/2)N\omega_{N}^{\frac{1}{N}}\mathfrak{m}(E)^{ \frac{N-1}{N}}\left[1-\frac{\mathfrak{m}(E\cap U_{\varepsilon,\eta})}{ \mathfrak{m}(E)}-\hat{C}\mathfrak{m}(E)^{\frac{1}{N}}\right],\quad\forall E \subset\mathrm{X}\text{ Borel},\] where \(\hat{C}=C((1-\varepsilon/2)N\omega_{N}^{\frac{1}{N}})^{-1}\). From this we obtain that the conclusion of theorem holds taking \(\beta\coloneqq\hat{C}^{-N}\delta_{\varepsilon}^{N}\) and \(\beta^{\prime}\coloneqq\delta_{\varepsilon}\), where \(\delta_{\varepsilon}\coloneqq\frac{1}{2}\frac{\varepsilon}{2-\varepsilon}\). ## 5 Almost Euclidean Faber-Krahn inequality for small volumes Similarly to the classical Faber-Krahn inequality in \(\mathbb{R}^{N}\), combining the almost Euclidean isoperimetric inequality for small volumes in Section 4 and the Polya-Szego inequality of Section 2.4, we deduce here an almost-Euclidean Faber-Krahn inequality for small volumes similar to [17, Lemme 16] in the case of Riemannian manifolds. However it is not possible to apply directly the Polya-Szego inequality as in [17], because our isoperimetric inequality applies only to sets that have small volumes _and avoid_ a bad set with small measure. This technical difficulty will require a more careful argument, which will eventually lead to a Faber-Krahn inequality that applies only to sets that again avoid a portion of the space with small measure (see Theorem 5.3). We start by recalling the well known expression of the first Dirichlet eigenvalue of a ball in the \(N\)-dimensional Euclidean space (see e.g. [17]): \[\lambda_{1}(B_{r}^{\mathbb{R}^{N}}(0))=\left(\frac{\omega_{N}}{\mathscr{L}^{N }(B_{r}^{\mathbb{R}^{N}}(0))}\right)^{2/N}j_{\frac{(N-2)}{N}}^{2},\qquad\forall \,r>0, \tag{5.1}\] where \(j_{\frac{(N-2)}{N}}\) denotes the first positive zero of the Bessel function (of the first kind) of index \(\frac{(N-2)}{N}\). Next we obtain a weaker Faber-Krahn inequality, i.e. with a rough constant, but that applies to sets also with large volume. **Proposition 5.1** (Faber-Krahn inequality in PI spaces).: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded PI space satisfying for some \(N\in\mathbb{N}\) and some constant \(c>0\)_ \[\frac{\mathfrak{m}(B_{r}(x))}{\mathfrak{m}(B_{R}(x))}\geq c\left(\frac{r}{R} \right)^{N},\quad\forall\,x\in\mathrm{X},\,\forall\,0<r<R.\] _Then there exist constants \(v_{0}=v_{0}(\mathrm{X})>0\) and \(C=C(\mathrm{X},N)>0\) such that_ \[\lambda_{1}(\Omega)\geq\frac{C}{\mathfrak{m}(\Omega)^{\frac{2}{N}}},\quad \forall\,\Omega\subset\mathrm{X}\text{ open, }0<\mathfrak{m}(\Omega)\leq v_{0}. 
\tag{5.2}\] Proof.: By Proposition 2.18 there exist constants \(w_{0}=w_{0}(\mathrm{X})>0\) and \(C_{I}=C_{I}(\mathrm{X},N)>0\) such that \[\mathrm{Per}(E)\geq C_{I}\mathfrak{m}(E)^{\frac{N-1}{N}},\qquad\forall\,E \subset\mathrm{X}\text{ Borel such that }\mathfrak{m}(E)\leq w_{0} \tag{5.3}\] In particular if \(\mathsf{m}(\Omega)\leq v_{0}\coloneqq w_{0}\), then (5.3) holds for every \(E\subset\Omega\) Borel. Moreover taking \(v_{0}<\mathsf{m}(\mathrm{X})\) we have \(\Omega\neq\mathrm{X}.\) Therefore we can apply the Polya-Szego inequality (2.28) and deduce \[\frac{\int_{\Omega}|Du|^{2}\mathrm{d}\mathsf{m}}{\int_{\Omega}|u|^{2}\mathrm{d} \mathsf{m}}\geq\Big{(}\frac{C_{I}}{N\omega_{N}^{1/N}}\Big{)}^{2}\frac{\int_{ \Omega^{*}}|Du_{N}^{*}|^{2}\mathrm{d}\mathscr{L}^{N}}{\int_{\Omega^{*}}|u_{N}^ {*}|^{2}\mathrm{d}\mathscr{L}^{N}}\geq\frac{C}{\mathsf{m}(\Omega)^{\frac{2}{ N}}},\quad\forall u\in\,W_{0}^{1,2}(\Omega),\,u\geq 0,\,u\not\equiv 0, \tag{5.4}\] where we have also used that \(\|u\|_{L^{2}(\Omega^{*},\mathscr{L}^{N})}=\|u_{N}^{*}\|_{L^{2}(\Omega^{*}, \mathscr{L}^{N})},\,\mathsf{m}(\Omega)=\mathscr{L}^{N}(\Omega^{*}),\) and in the last passage the identity (5.1) and the fact that \(u_{N}^{*}\in\,W_{0}^{1,2}(\Omega^{*})\). By taking the infimum with respect to all the possible \(u\) in the characterization (2.16), we get the result. From the previous proposition we can deduce the following version of the Faber-Krahn in the Euclidean setting. Even if it will be not used in this note, we think it is worth to be isolated in a separate statement. Indeed it has been pointed out repeatedly in the previous literature that one of the major difficulties in counting nodal domains for non-Dirichlet boundary condition in subset of \(\mathbb{R}^{N}\) with irregular boundary is the absence of a suitable Faber-Krahn inequality for subdomains close to the boundary (see e.g. discussions in [56, Section 2], [69, Section 1.2], [53, Section 1.2], [15, Section 1.1]). This was one of the main issues faced [69] which also forced the assumption of a \(C^{1,1}\) boundary (see also [56]). Here we show precisely that a Faber-Krahn-type inequality does hold in any uniform domain, no matter how close is the support of the function to the boundary. **Corollary 5.2** (Faber-Krahn inequality for uniform domains).: _Let \(\Omega\subset\mathbb{R}^{N}\) be a uniform domain. Then there exist constants \(v_{0}\in(0,\mathscr{L}^{N}(\Omega))\) and \(C>0\), depending only on \(\Omega,\) such that_ \[\frac{\int_{\Omega}|\nabla u|^{2}\mathrm{d}\mathscr{L}^{N}}{\int_{\Omega}u^{ 2}\mathrm{d}\mathscr{L}^{N}}\geq\frac{C}{\big{(}\mathscr{L}^{N}(\mathrm{supp }(u))\big{)}^{\frac{2}{N}}},\quad\forall u\in\,W^{1,2}(\Omega),u\not\equiv 0, \text{ such that }\mathscr{L}^{N}(\mathrm{supp}(u))\leq v_{0}.\] Proof.: Let \(u\in\,W^{1,2}(\Omega)\). Then by Theorem 3.8 there exists \(\tilde{u}\in L^{2}(\overline{\Omega},\mathscr{L}^{N})\) such that \(\tilde{u}=u\,\,\mathscr{L}^{N}\)-a.e. in \(\Omega\) and such that \(\tilde{u}\in\,W^{1,2}(\overline{\Omega},\mathsf{d}_{\lfloor\overline{\Omega} \rfloor},\mathscr{L}^{N}\lfloor_{\overline{\Omega}}\)) where \(\mathsf{d}\) denotes the Euclidean distance (recall also Remark 2.7). Moreover by Theorem 3.2 the m.m.s. \((\mathrm{X},\mathsf{d},\mathsf{m})\coloneqq(\overline{\Omega},\mathsf{d}_{ \lfloor\overline{\Omega}\rfloor},\mathscr{L}^{N}\lfloor_{\overline{\Omega}}\)) satisfies the hypotheses of Proposition 5.1. 
Finally by Lemma 3.6 we have \(\mathscr{L}^{N}(\partial\Omega)=0.\) Let \(v_{0}<\mathsf{m}(\mathrm{X})=\mathscr{L}^{N}(\Omega)\) and \(C>0\) be the constants given by Proposition 5.1, which depend only on \(\Omega.\) Consider the open set \(U_{\varepsilon}\coloneqq(\mathrm{supp}(\tilde{u}))^{\varepsilon},\,\varepsilon>0\). Then \(\mathscr{L}^{N}(U_{\varepsilon})\to\mathscr{L}^{N}(\mathrm{supp}(\tilde{u}))= \mathscr{L}^{N}(\mathrm{supp}(u))\) as \(\varepsilon\to 0,\) having also used that \(\mathscr{L}^{N}(\partial\Omega)=0.\) Hence assuming that \(\mathscr{L}^{N}(\mathrm{supp}(u))<v_{0}(\mathrm{X})\) we have that \(\mathsf{m}(U_{\varepsilon})\leq v_{0}\) for \(\varepsilon\) small enough. Clearly \(\tilde{u}\in\,W_{0}^{1,2}(U_{\varepsilon})\) (recall (2.9)), hence we can apply (5.2) and obtain \[\frac{\int_{\overline{\Omega}}|\nabla\tilde{u}|^{2}\mathrm{d}\mathscr{L}^{N}} {\int_{\overline{\Omega}}\tilde{u}^{2}\mathrm{d}\mathscr{L}^{N}}\geq\frac{C}{ \mathsf{m}(U_{\varepsilon})^{\frac{2}{n}}}.\] Letting \(\varepsilon\to 0\) and recalling that \(\int_{\overline{\Omega}}|\nabla\tilde{u}|^{2}\mathrm{d}\mathscr{L}^{N}=\int_ {\Omega}|\nabla u|^{2}\mathrm{d}\mathscr{L}^{N}\) concludes the proof. We pass to the statement of our main Faber-Krahn inequality for small volumes. **Theorem 5.3** (Almost Euclidean Faber-Krahn inequality for small volumes).: _Let \((\mathrm{X},\mathsf{d},\mathsf{m})\) be a bounded PI space satisfying for some \(N\in\mathbb{N}\) and some constant \(c>0\)_ \[\frac{\mathsf{m}(B_{r}(x))}{\mathsf{m}(B_{R}(x))}\geq c\left(\frac{r}{R} \right)^{N},\quad\forall\,x\in\mathrm{X},\,\forall 0<r<R.\] _Suppose that for every \(\varepsilon>0\) there exists a closed set \(C_{\varepsilon}\subset\mathrm{X}\) with \(\mathsf{m}(C_{\varepsilon})=0\) such that for every \(x\in\mathrm{X}\setminus C_{\varepsilon}\) there exists a constant \(\rho=\rho(x,N,\varepsilon)>0\) satisfying_ \[\mathrm{Per}(E)\geq(1-\varepsilon)N\omega_{N}^{\frac{1}{N}}\mathsf{m}(E)^{\frac {N-1}{N}},\quad\forall\,E\subset B_{\rho}(x)\,\,\text{ Borel,} \tag{5.5}\] _Then for every \(\varepsilon\in(0,1)\) and \(\eta>0\) there exists an open set \(U_{\varepsilon,\eta}\) with \(\mathsf{m}(U_{\varepsilon,\eta})<\eta\) and constants \(\delta=\delta(\mathrm{X},\varepsilon,N,\eta)>0\), \(\delta^{\prime}=\delta^{\prime}(\mathrm{X},\varepsilon,N)>0\) such that for every \(\Omega\subset\mathrm{X}\) open satisfying_ \[\mathsf{m}(\Omega)\leq\delta,\quad\frac{\mathsf{m}(\Omega\cap U_{\varepsilon, \eta})}{\mathsf{m}(\Omega)}\leq\delta^{\prime},\] _and, denoted by \(\Omega^{*}\coloneqq B_{r}(0)\subset\mathbb{R}^{N}\) the ball satisfying \(\mathfrak{m}(\Omega)=\mathscr{L}^{N}(\Omega^{*})\), it holds_ \[\lambda_{1}(\Omega)\geq(1-\varepsilon)\lambda_{1}(\Omega^{*}). \tag{5.6}\] The key point of Theorem 5.3 is that the constant \(\delta^{\prime}\)_does not depend_ on \(\eta\). This will be crucial in the proof of the main result, which will be done in Section 6. Indeed we will eventually need to get rid of \(\delta^{\prime}\) by sending \(\eta\to 0\) (see in particular (6.8)). **Remark 5.4**.: Thanks to Theorem 3.2 we know that, given \((\mathrm{Y},\tilde{\mathsf{d}},\mathcal{H}^{N})\) an \(\mathrm{RCD}(K,N)\) space and \(\Omega\subset\mathrm{Y}\) a uniform domain, the metric measure space \((\mathrm{X},\mathsf{d},\mathfrak{m})\coloneqq(\overline{\Omega},\tilde{ \mathsf{d}}_{\left|\overline{\Omega}\right.},\mathcal{H}^{N}{}_{\left| \overline{\Omega}\right.})\) satisfies the hypotheses of the above Theorem 5.3. 
This will actually be the way in which we will apply this result in the sequel. \(\blacksquare\) In the proof of Theorem 5.3 we will make use of the following elementary observation. **Lemma 5.5**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a m.m.s. and \(f\in\mathsf{LIP}_{c}(X)\), \(f\not\equiv 0\), \(f\geq 0\). It holds_ \[\frac{\int_{X}|Df|^{2}\,\mathrm{d}\,\mathfrak{m}}{\int_{X}|f|^{2}\,\mathrm{d} \mathfrak{m}}\geq\lambda_{1}(\{f>0\}),\] _where \(\{f>0\}\coloneqq\{x\in\mathrm{X}\colon f(x)>0\}\)._ Proof.: Let \(\psi_{n}\coloneqq(f-\frac{1}{n})^{+}\). Then \(\psi_{n}\in\mathsf{LIP}_{c}(\{f>0\})\) and for \(n\in\mathbb{N}\) sufficiently large, \(\psi_{n}\not\equiv 0\). Then for \(n\) large \(\psi_{n}\) is a competitor in (2.15), so \[\lambda_{1}(\{f>0\})\leq\lim_{n\to+\infty}\frac{\int_{X}|D\psi_{n}|^{2}\, \mathrm{d}\mathfrak{m}}{\int_{X}|\psi_{n}|^{2}\,\mathrm{d}\mathfrak{m}}=\lim_{ n\to+\infty}\frac{\int_{\{f\geq 1/n\}}|Df|^{2}\,\mathrm{d}\mathfrak{m}}{\int_{\{f \geq 1/n\}}|f-1/n|^{2}\,\mathrm{d}\mathfrak{m}}=\frac{\int_{X}|Df|^{2}\, \mathrm{d}\mathfrak{m}}{\int_{X}|f|^{2}\,\mathrm{d}\mathfrak{m}},\] where the first equality follows from the locality of the weak upper gradient. We are now ready to prove the Faber-Krahn inequality for small volumes. Proof of Theorem 5.3.: Fix \(\varepsilon\in(0,1)\) and \(\eta>0\). Let \(\beta=\beta(\mathrm{X},\frac{\varepsilon}{2},N,\eta)>0,\beta^{\prime}=\beta^{ \prime}(\frac{\varepsilon}{2})>0,U_{\frac{\varepsilon}{2},\eta}\subset\mathrm{X}\) be respectively the constants and the set as given in Theorem 4.1 and recall that \(\mathfrak{m}(U_{\frac{\varepsilon}{2},\eta})<\eta.\) In the following we will simply write \(\beta\) and \(\beta^{\prime}\) to denote these constants and write \(U\) to denote the set \(U_{\frac{\varepsilon}{2},\eta}\). Let \(\delta,\delta^{\prime}\in(0,1)\) be constants small enough to be chosen later and in such a way that \(\delta\) will depend in the end only on \(\mathrm{X},\varepsilon,N,\eta\), while \(\delta^{\prime}\) only on \(\mathrm{X},N,\varepsilon\). Fix \(\Omega\subset\mathrm{X}\) open such that \[\mathfrak{m}(\Omega)\leq\delta,\quad\frac{\mathfrak{m}(\Omega\cap U)}{ \mathfrak{m}(\Omega)}\leq\delta^{\prime},\] and define \(\Omega^{*}\coloneqq B_{r}(0)\subset\mathbb{R}^{N}\) where \(r>0\) is so that \(\mathscr{L}^{N}(\Omega^{*})=\mathfrak{m}(\Omega).\) Up to choosing \(\delta<\mathfrak{m}(\mathrm{X})\) we can also assume that \(\Omega\neq\mathrm{X}\). Let \(u\in\mathsf{LIP}_{c}(\Omega)\) be a competitor in the infimum of (2.19). In particular \(u\not\equiv 0\), \(u\geq 0\) and \(\mathrm{lip}(u)\neq 0\)\(\mathfrak{m}\)-a.e. in \(\{u>0\}\). We divide two cases: Case 1: \(\mathfrak{m}(\{u>0\})\leq 3\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega)\). 
From Lemma 5.5 and Proposition 5.1 we have \[\frac{\int_{\mathrm{X}}|Du|^{2}\mathrm{d}\mathfrak{m}}{\int_{\mathrm{X}}u^{2} \mathrm{d}\mathfrak{m}}\geq\lambda_{1}(\{u>0\})\geq\frac{C(\mathrm{X},N)}{ \mathfrak{m}(u>0)^{\frac{2}{N}}}\geq\frac{C(\mathrm{X},N)(3\sqrt{\delta^{\prime }})^{-\frac{2}{N}}}{\mathfrak{m}(\Omega)^{\frac{2}{N}}}\geq\frac{2(j_{N-2})^{2} \omega_{N}^{\frac{2}{N}}}{\mathscr{L}^{N}(\Omega^{*})^{\frac{2}{N}}}\stackrel{{ \eqref{eq:2.1}}}{{=}}2\lambda_{1}(\Omega^{*}), \tag{5.7}\] with \(C(\mathrm{X},N)>0\) is the constant given by Proposition 5.1, which can be applied if \(3\sqrt{\delta^{\prime}}\delta\leq v_{0}(\mathrm{X})\) (where \(v_{0}(\mathrm{X})\) is given by Proposition 5.1) and the last inequality in (5.7) holds provided \(3\sqrt{\delta^{\prime}}<\frac{C(\mathrm{X},N)^{N/2}}{2^{N/2}(j_{N-2})^{N}\omega_ {N}}\). Case 2:\(\mathfrak{m}(\{u>0\})>3\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega)\). Set \[s\coloneqq\sup\{t:\,\mathfrak{m}(\{u>t\})\geq 2\sqrt{\delta^{\prime}}\mathfrak{m}( \Omega)\}\] and observe that \(s>0\) and that \[\mathfrak{m}(\{u>s\}) \leq 2\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega),\] \[\mathfrak{m}(\{u>t\}) \geq 2\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega),\quad\forall\,t<s.\] The first one follows because \(\mathfrak{m}(\{u>s\})=\lim_{t\to s+}\mathfrak{m}(\{u>t\})\leq 2\sqrt{\delta^{ \prime}}\mathfrak{m}(\Omega)\), while for the second note that \(t\mapsto\mathfrak{m}(\{u>t\})\) is monotone non-increasing. Set \(\hat{u}\coloneqq u\wedge s\) and \(\tilde{u}\coloneqq(u-s)^{+}\), so that \(u=\hat{u}+\tilde{u}\) and \(\hat{u}\), \(\tilde{u}\) are in \(\mathsf{LIP}_{c}(\Omega)\) and \(\hat{u}\geq 0\), \(\tilde{u}\geq 0\), \(\hat{u}\not\equiv 0\), \(\tilde{u}\not\equiv 0\). Observe that if multiply \(u\) by a constant \(c>0\), \(u\) still satisfies the hypotheses of Case 2 and the number \(s\) defined above gets also multiplied by \(c\). Hence also \(\hat{u}\) gets multiplied by \(c\). Therefore, since the value \(\frac{\int|Du|^{2}\mathrm{d}\mathfrak{m}}{\int u^{2}\mathrm{d}\mathfrak{m}}\) is scaling invariant, up to multiplying \(u\) by a constant we can assume that \(\int_{\mathrm{X}}\hat{u}^{2}\mathrm{d}\mathfrak{m}=1\). Then \[\frac{\int_{\mathrm{X}}|Du|^{2}\mathrm{d}\mathfrak{m}}{\int_{\mathrm{X}}u^{2} \mathrm{d}\mathfrak{m}}=\frac{\int_{\mathrm{X}}|D\hat{u}|^{2}+|D\tilde{u}|^{2 }\mathrm{d}\mathfrak{m}}{\int_{\mathrm{X}}\hat{u}^{2}\mathrm{d}\mathfrak{m}+ \int_{\mathrm{X}}\tilde{u}^{2}+2\int_{\mathrm{X}}\tilde{u}\tilde{u}\mathrm{d} \mathfrak{m}}\geq\frac{\int_{\mathrm{X}}|D\tilde{u}|^{2}+|D\tilde{u}|^{2} \mathrm{d}\mathfrak{m}}{1+\int_{\mathrm{X}}\tilde{u}^{2}\mathrm{d} \mathfrak{m}+2\sqrt{\int_{\mathrm{X}}\tilde{u}^{2}\mathrm{d}\mathfrak{m}}}. \tag{5.8}\] Using again Lemma 5.5 applied with \(f=\tilde{u}\) and noting that \(\{\tilde{u}>0\}=\{u>s\}\) we have \[\frac{\int_{\mathrm{X}}|D\tilde{u}|^{2}\mathrm{d}\mathfrak{m}}{\int_{\mathrm{ X}}|\tilde{u}|^{2}\mathrm{d}\mathfrak{m}}\geq\lambda_{1}(\{u>s\})\geq\frac{C( \mathrm{X},N)}{\mathfrak{m}(\{u>s\})^{\frac{\rho}{N}}}\geq\frac{C(\mathrm{X},N) }{(2\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega))^{\frac{\rho}{N}}}, \tag{5.9}\] where the second inequality follows from Proposition 5.1 (which as above can be applied provided \(2\sqrt{\delta^{\prime}}\delta\leq v_{0}(\mathrm{X})\)). 
Moreover, since \(\mathfrak{m}(\{u>t\})\geq 2\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega)\), for every \(t<s\), we have \[\frac{\mathfrak{m}(\{u>t\}\cap U)}{\mathfrak{m}(\{u>t\})}\leq\frac{\mathfrak{ m}(\Omega\cap U)}{\mathfrak{m}(\{u>t\})}\leq\frac{\delta^{\prime}\mathfrak{m}( \Omega)}{2\sqrt{\delta^{\prime}}\mathfrak{m}(\Omega)}=\frac{\sqrt{\delta^{ \prime}}}{2}<\beta^{\prime},\quad\forall\,t<s,\] provided \(\delta^{\prime}\leq(\beta^{\prime})^{2}\) (recall that \(\beta^{\prime}\) depends only on \(\varepsilon\)). We also have \(\mathfrak{m}(\{u>t\})\leq\mathfrak{m}(\Omega)\leq\delta\leq\beta\), provided \(\delta\leq\beta\). Therefore we can apply Theorem 4.1 to the set \(E=\{u>t\}\) and obtain \[\mathrm{Per}(\{u>t\})\geq(1-\frac{\varepsilon}{2})N\omega_{N}^{\frac{1}{N}} \mathfrak{m}(\{u>t\})^{\frac{N-1}{N}},\quad\forall\,t<s.\] We can then apply the Polya-Szego inequality in point \(i)\) of Theorem 2.22 to the function \(\hat{u}\) (note that assumption (2.26) is satisfied by provided \(\delta\leq w_{0}(X)\), where \(w_{0}(\mathrm{X})\) is the constant given by Proposition 2.18), to get \[\int|D\hat{u}|^{2}\mathrm{d}\mathfrak{m} =\int_{\{u\leq s\}}|Du|^{2}\mathrm{d}\mathfrak{m}\geq(1-\frac{ \varepsilon}{2})^{2}\int_{0}^{s}\int_{\mathbb{R}^{N}}|Du_{N}^{*}|\mathrm{d} \mathrm{Per}(\{u_{N}^{*}>t\})\mathrm{d}t\] \[=(1-\frac{\varepsilon}{2})^{2}\int_{\{u_{N}^{*}\leq s\}}|Du_{N}^{ *}|^{2}\mathrm{d}\mathscr{L}^{N}=(1-\frac{\varepsilon}{2})^{2}\int_{\mathbb{R} ^{N}}|D(u_{N}^{*}\wedge s)|^{2}\mathrm{d}\mathscr{L}^{N},\] where \(u_{N}^{*}\in\mathsf{LIP}_{c}(\Omega^{*})\) is the Euclidean monotone rearrangement of \(u\) (see Definition 2.21) and where in the second to last equality we used the coarea formula in the Euclidean space (see e.g. [70, Theorem 18.1]). Moreover since \(u_{N}^{*}\wedge s\in\mathsf{LIP}_{c}(\Omega^{*})\) we have \[\int_{\mathbb{R}^{N}}|D(u_{N}^{*}\wedge s)|^{2}\mathrm{d}\mathscr{L}^{N}\geq \lambda_{1}(\Omega^{*})\int_{\mathbb{R}^{N}}|u_{N}^{*}\wedge s|^{2}\mathrm{d} \mathscr{L}^{N}=\lambda_{1}(\Omega^{*})\int_{\mathrm{X}}|\tilde{u}|^{2} \mathrm{d}\mathfrak{m}=\lambda_{1}(\Omega^{*}),\] where in the first identity we used the equimeasurability of \(u\) and \(u_{N}\). Hence \[\frac{\int|D\hat{u}|^{2}\mathrm{d}\mathfrak{m}}{\int_{\mathrm{X}}\hat{u}^{2} \mathrm{d}\mathfrak{m}}\geq(1-\frac{\varepsilon}{2})^{2}\lambda_{1}(\Omega^{*}). \tag{5.10}\] Therefore we can plug (5.9) and (5.10) into (5.8) to get \[\frac{\int_{\mathrm{X}}|Du|^{2}\mathrm{d}\mathfrak{m}}{\int_{\mathrm{X}}u^{2} \mathrm{d}\mathfrak{m}}\geq\frac{(1-\frac{\varepsilon}{2})^{2}\lambda_{1}( \Omega^{*})+\frac{C(\mathrm{X},N)}{(2\sqrt{\delta^{\prime}}\mathfrak{m}( \Omega))^{\frac{2}{N}}}A^{2}}{1+A^{2}+2A},\] where \(A\coloneqq\left(\int_{\mathrm{X}}\tilde{u}^{2}\mathrm{d}\mathfrak{m}\right)^{ \frac{1}{2}}.\) We now minimize in \(A\in[0,\infty).\) To do so we observe that the function \(f(t)\coloneqq\frac{a+bt^{2}}{1+t^{2}+2t}\), \(a,b>0,\) has derivative \(f^{\prime}(t)=\frac{2(bt-a)}{(1+t)^{3}}\). 
Hence \(f\) has a global minimum in \([0,\infty)\) at \(t=\frac{a}{b}\) of value \(f(\frac{a}{b})=\frac{a}{1+\frac{a}{b}}.\) Therefore we obtain \[\frac{\int_{\mathrm{X}}|Du|^{2}\mathrm{d}\mathfrak{m}}{\int_{ \mathrm{X}}u^{2}\mathrm{d}\mathfrak{m}} \geq\frac{(1-\frac{\varepsilon}{2})^{2}\lambda_{1}(\Omega^{*})}{1+(1- \frac{\varepsilon}{2})^{2}\lambda_{1}(\Omega^{*})(2\sqrt{\delta^{\prime}} \mathfrak{m}(\Omega))^{\frac{2}{N}}C(\mathrm{X},N)^{-1}}\] \[\stackrel{{\eqref{eq:C(X,N)}}}{{=}}\frac{(1-\frac{ \varepsilon}{2})^{2}\lambda_{1}(\Omega^{*})}{1+(1-\frac{\varepsilon}{2})^{2}( j_{\frac{N-2}{N^{2}}})^{2}(\omega_{N}2\sqrt{\delta^{\prime}})^{\frac{2}{N}}C( \mathrm{X},N)^{-1}}.\] Choosing \(\delta^{\prime}\) small enough, depending only on \(\varepsilon,N\) and \(\mathrm{X}\), such that \[\frac{(1-\frac{\varepsilon}{2})^{2}}{1+(1-\frac{\varepsilon}{2})^{2}(j_{ \frac{N-2}{N^{2}}})^{2}(\omega_{N}2\sqrt{\delta^{\prime}})^{\frac{2}{N}}C( \mathrm{X},N)^{-1}}>1-\varepsilon,\] we get the conclusion. ## 6 Proof of main theorem In this part we prove Theorem 1.1 and Corollaries 1.2, 1.3 combining the results of all the previous sections. One last ingredient, contained in the next statement, is a crucial inequality relating the eigenvalue of an eigenfunction with the first Dirichlet eigenvalue of one of its nodal domains. This can be seen as a generalization of Lemme 2 in Appendix D of [17] proved there in the setting of Riemannian manifolds. Recall also that eigenfunctions of the Laplacian in PI spaces are continuous (see Theorem 2.17). **Proposition 6.1**.: _Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be a bounded infinitesimally Hilbertian PI space, \(U\subset\mathrm{X}\) be open and \(f\) be a Dirichlet or Neumann eigenfunction of the Laplacian in \(U\) of eigenvalue \(\lambda\). If \(\Omega\subset U\) is a nodal domain of (the continuous representative of) \(f\), then \(\Omega\) is open in \(\mathrm{X}\) and it holds_ \[\lambda_{1}(\Omega)\leq\lambda=\frac{\int_{\Omega}|Df|^{2}\,\mathrm{d} \mathfrak{m}}{\int_{\Omega}f^{2}\,\mathrm{d}\mathfrak{m}}. \tag{6.1}\] In [17] (in Riemannian setting) it is shown that the first in (6.1) is actually an equality in the Dirichlet case, however we do not know whether the same is true also in this more general setting. Nevertheless (6.1) will be sufficient for our purposes. Specializing Proposition 6.1 to the Euclidean setting we also obtain the following result which, even if not needed in the sequel, we believe it is interesting on its own. In particular it extends previous results in [69, Proposition 1.7] and [15, Lemma 3.3], where the same was proved respectively for \(C^{1,1}\) domains and for planar piecewise smooth domains. **Corollary 6.2** (Green's formula for eigenfunctions).: _Let \(\Omega\subset\mathbb{R}^{N}\) be a uniform domain and let \(f\) be a Neumann eigenfunction in \(\Omega\) of eigenvalue \(\lambda\). Then for every \(U\) nodal domain of \(f\) it holds_ \[\int_{U}|\nabla f|^{2}\,\mathrm{d}\mathscr{L}^{N}=\lambda\int_{U}f^{2}\mathrm{ d}\mathscr{L}^{N}.\] Proof.: Let \(f\) be as in the statement and \(U\) be a nodal domain of \(f.\) From Corollary 3.10 there exists \(\tilde{f}\in L^{2}(\overline{\Omega},\mathscr{L}^{N})\) such that \(\tilde{f}=f\)\(\mathscr{L}^{N}\)-a.e. in \(\Omega\) and such that \(\tilde{f}\) is an eigenfunction for the Laplacian of eigenvalue \(\lambda\) in the m.m.s. 
\((\overline{\Omega},\mathsf{d}_{|\overline{\Omega}^{\ast}}\,\mathscr{L}^{N}_{| \overline{\Omega}^{\ast}})\) where \(\mathsf{d}\) denotes the Euclidean distance (recall also Remark 2.7). Moreover \((\overline{\Omega},\mathsf{d}_{|\overline{\Omega}^{\ast}}\,\mathscr{L}^{N}_{| \overline{\Omega}^{\ast}})\) is an infinitesimally Hilbertian PI space thanks to Theorem 3.9. Hence \(\tilde{f}\) is continuous in \(\overline{\Omega}\) (recall Theorem 2.17). Proposition 3.12 then says that the set \(\varphi(U)\coloneqq U\cup(\partial U\cap\partial\Omega\setminus\{f=0\})\) is a nodal domain of \(\tilde{f}\). Finally by Lemma 3.6 we have \(\mathscr{L}^{N}(\partial\Omega)=0.\) Therefore applying Proposition 6.1 to \((\mathrm{X},\mathsf{d},\mathfrak{m})=(\overline{\Omega},\mathsf{d}_{| \overline{\Omega}^{\ast}}\,\mathscr{L}^{N}_{|\overline{\Omega}^{\ast}})\) and \(\tilde{f}\) we get \[\lambda\int_{U}f^{2}\,\mathrm{d}\mathscr{L}^{N}=\lambda\int_{\varphi(U)}( \tilde{f})^{2}\,\mathrm{d}\mathscr{L}^{N}\stackrel{{(\ref{eq: 1})}}{{=}}\int_{\varphi(U)}|D\tilde{f}|^{2}\,\mathrm{d}\mathscr{L}^{N}=\int_{U} |D\tilde{f}|^{2}\,\mathrm{d}\mathscr{L}^{N}=\int_{U}|\nabla f|^{2}\,\mathrm{d} \mathscr{L}^{N},\] where the last equality follows by Theorem 3.8. This concludes the proof. Proof of Proposition 6.1.: Let \(f\) be a Dirichlet or Neumann eigenfunction of the Laplacian in \(U\) of eigenvalue \(\lambda\) and let \(\Omega\) be one of its nodal domains. Since \((\mathrm{X},\mathsf{d},\mathfrak{m})\) is a PI space, the metric space \((\mathrm{X},\mathsf{d})\) is locally connected, hence by Lemma 2.2 we have that \(\Omega\) is open and that either \(f>0\) or \(f<0\) in \(\Omega.\) Assume without loss of generality that \(f\) is positive in \(\Omega\). Define \(\psi_{n}\coloneqq\left(f-\frac{1}{n}\right)^{+}\chi_{\Omega}\) and note that \(\mathsf{d}(\mathrm{supp}(\psi_{n}),\mathrm{X}\setminus\Omega)>0.\) We claim that \(\psi_{n}\in W^{1,2}_{0}(\Omega)\). To see this let \(\varphi\in\mathsf{LIP}(\mathbb{R})\) satisfy \(|\varphi|\leq 1\), \(\varphi(t)=0\) for \(t\leq 0\) and \(\varphi(t)=1\) for \(t\geq\mathsf{d}(\mathrm{supp}(\psi_{n}),\Omega^{c})\). Then \(\psi_{n}=\left(f-\frac{1}{n}\right)^{+}\varphi(\mathsf{d}(\cdot,\mathrm{X} \setminus\Omega))\) with \(\left(f-\frac{1}{n}\right)^{+}\) in \(W^{1,2}(X)\) from the chain rule (see (2.1)) and \(\varphi(\mathsf{d}(\cdot,\mathrm{X}\setminus\Omega))\in\mathsf{LIP}\cap L^{ \infty}(\mathrm{X})\). Therefore the Leibniz rule for the minimal weak upper gradient (see (2.1)) implies that \(\psi_{n}\in W^{1,2}(X)\). Since \(\mathsf{d}(\mathrm{supp}(\psi_{n}),\mathrm{X}\setminus\Omega)>0\), the claim follows (recall (2.9)). From (2.16) we have \[\lambda_{1}(\Omega)\leq\varliminf_{n\to+\infty}\frac{\int_{\Omega}|\nabla\psi _{n}|^{2}\,\mathrm{d}\mathfrak{m}}{\int_{\Omega}\psi_{n}^{2}\,\mathrm{d} \mathfrak{m}}. \tag{6.2}\] Now we observe that \((\psi_{n})_{n\in\mathbb{N}}\) converges to \(f\chi_{\Omega}\) in \(L^{2}(\mathfrak{m})\). Indeed \[\lim_{n\to+\infty}\int_{\Omega}|\psi_{n}-f|^{2}\,\mathrm{d}\mathfrak{m}\leq \lim_{n\to+\infty}\frac{1}{n^{2}}\mathfrak{m}\left(\Omega\right)=0.\] Moreover \[\lim_{n\to+\infty}\||D\psi_{n}|\|_{L^{2}(X)}^{2}=\lim_{n\to+\infty}\int_{ \Omega\cap\left\{f>\frac{1}{n}\right\}}|Df|^{2}\,\mathrm{d}\mathfrak{m}=\int_{ \Omega}|Df|^{2}\,\mathrm{d}\mathfrak{m},\] where the first equality follows from the locality of the weak upper gradient and the second by monotone convergence theorem. This implies that \(f\chi_{\Omega}\in W^{1,2}(\mathrm{X})\) (see e.g. 
Proposition 2.1.19 in [51]). Again by the locality of the weak upper gradient we have \[\||D(f\chi_{\Omega})\|_{L^{2}(X)}=\||Df|\|_{L^{2}(\Omega)},\] and so \(\{\psi_{n}\}_{n\in\mathbb{N}}\) converges to \(f\chi_{\Omega}\) in \(W^{1,2}(X)\). In particular \[\varliminf_{n\to+\infty}\frac{\int_{\Omega}|\nabla\psi_{n}|^{2}\,\mathrm{d} \mathfrak{m}}{\int_{\Omega}\psi_{n}^{2}\,\mathrm{d}\mathfrak{m}}=\frac{\int_{ \Omega}|\nabla f|^{2}\,\mathrm{d}\mathfrak{m}}{\int_{\Omega}f^{2}\,\mathrm{d} \mathfrak{m}}.\] The result follows once we observe that \[\frac{\int_{\Omega}|\nabla f|^{2}\,\mathrm{d}\mathfrak{m}}{\int_{\Omega}f^{2}\, \mathrm{d}\mathfrak{m}}=\lambda. \tag{6.3}\] To see this by definition of eigenfunction (both in the Dirichlet case and the Neumann case) and since \(\mathrm{supp}(\psi_{n})\subset\Omega\) one has \[-\int_{\Omega}\nabla f\cdot\nabla\psi_{n}\,\mathrm{d}\mathfrak{m}=\lambda\int_ {\Omega}f\psi_{n}\,\mathrm{d}\mathfrak{m}\,,\quad\forall n\in\mathbb{N},\] from which (6.3) follows passing to the limit and noting \[\varlimsup_{n\to+\infty}\left|\int_{\Omega}\nabla f\cdot\nabla\psi_{n}\,\mathrm{d }\mathfrak{m}-\int_{\Omega}\nabla f\cdot\nabla f\,\mathrm{d}\mathfrak{m}\right| \leq\varlimsup_{n\to+\infty}\||Df|\|_{L^{2}(\Omega)}\||D(\psi_{n}-f\chi_{ \Omega})\||_{L^{2}(\Omega)}=0,\] where we used the bilinearity of the scalar product, (2.6) and that \(|D(\psi_{n}-f)|=|D(\psi_{n}-\chi_{\Omega}f)|\)\(\mathfrak{m}\)-a.e. in \(\Omega\), by the locality. Before proving Theorem 1.1 we give a precise definition of the nodal domain counting function which appears in its statement. **Definition 6.3** (Nodal domains counting functions).: Let \((\mathrm{X},\mathsf{d},\mathfrak{m})\) be an infinitesimally Hilbertian PI space and \(\Omega\subset\mathrm{X}\) be a uniform domain (resp. bounded open set). The _nodal domain counting function_\(M_{\Omega}^{\mathcal{N}}:\mathbb{N}\to\mathbb{N}\cup\{+\infty\}\) (resp. \(M_{\Omega}^{\mathcal{D}}:\mathbb{N}\to\mathbb{N}\cup\{+\infty\}\)) is given by \[M_{\Omega}^{\mathcal{N}}(k):=\sup\{M(u):u\text{ Neumann eigenfunction in }\Omega\text{ of eigenvalue }\lambda_{k}^{\mathcal{N}}(\Omega)\},\] \[\big{(}\text{resp. }M_{\Omega}^{\mathcal{D}}(k):=\sup\{M(u):u \text{ Dirichlet eigenfunction in }\Omega\text{ of eigenvalue }\lambda_{k}^{\mathcal{D}}(\Omega)\}\big{)},\quad k\in \mathbb{N},\] where \(\{\lambda_{k}^{\mathcal{N}}(\Omega)\}_{k}\) (resp. \(\{\lambda_{k}^{\mathcal{D}}(\Omega)\}_{k}\)) are the eigenvalues of the Neumann (resp. Dirichlet) Laplacian in \(\Omega\), which has discrete spectrum by Corollary 3.10 (resp. by observation (2.21)), and where \(M(u)\) is the number of nodal domains of the continuous representative of \(u\) in \(\Omega\), which exists by Theorem 2.17. We are finally ready to prove the main result of the note. We will first prove the Neumann case and then the Dirichlet case. The proofs are essentially identical with our approach, but to avoid confusion we decided to keep them separated. Proof of Theorem 1.1 in the Neumann case.: Consider the m.m.s. \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right|},\mathfrak{m}_ {\left|\overline{\Omega}\right|})\) and observe that by Theorem 3.2 it satisfies the hypotheses of Theorem 5.3 (see also Remark 5.4). 
Fix \(\varepsilon\in(0,1)\), \(\eta>0\) and let \(\delta=\delta(\Omega,\varepsilon,N,\eta)>0\), \(\delta^{\prime}=\delta^{\prime}(\Omega,\varepsilon,N)>0\) and \(U_{\varepsilon,\eta}\subset\overline{\Omega}\) be the constants and the set given by Theorem 5.3 applied to \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right|},\mathfrak{m}_ {\left|\overline{\Omega}\right|})\). Recall that \(\mathfrak{m}(U_{\varepsilon,\eta})\leq\eta\). Let \(u\in W^{1,2}(\Omega)\) be a Neumann eigenfunction in \(\Omega\) of eigenvalue \(\lambda_{k}^{\mathcal{N}}(\Omega)>0\). Let \(\{\Omega_{i}\}_{i=1}^{M(u)}\) an enumeration of the nodal domains of its continuous representative. Note that the nodal domains are countable because they are open, since \((\mathrm{X},\mathsf{d})\) is locally connected (recall Lemma 2.2). However a priori it could be that \(M(u)=+\infty.\) Thanks to Corollary 3.10, we know that there exist \(\tilde{u}\in W^{1,2}(\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega} \right|},\mathfrak{m}_{\left|\overline{\Omega}\right|})\) eigenfunction of the Laplacian in \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right|},\mathfrak{m}_ {\left|\overline{\Omega}\right|})\) of the same eigenvalue \(\lambda_{k}^{\mathcal{N}}(\Omega)\) and such that \(\tilde{u}_{\left|\Omega\right|}=u.\) By Theorem 2.17 it holds that \(\tilde{u}\) has a continuous representative in \(\overline{\Omega}.\) Hence thanks to Proposition 3.12 we deduce that \(\tilde{u}\) has the same number of nodal domains \(\{\tilde{\Omega}_{i}\}_{i=1,\ldots,M(u)}\), where \(\tilde{\Omega}_{i}\subset\overline{\Omega}\). Thanks to Theorem 3.2 we can now apply Proposition 5.1 and Proposition 6.1 to the m.m.s. \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right|},\mathfrak{m}_ {\left|\overline{\Omega}\right|})\) and deduce that for any \(i\in 1,\ldots,M(u)\) it holds \[\lambda_{k}^{\mathcal{N}}(\Omega)\geq\lambda_{1}(\tilde{\Omega}_{i}),\quad \lambda_{1}(\tilde{\Omega}_{i})\geq\frac{C(\Omega)}{\mathfrak{m}(\tilde{\Omega }_{i})^{\frac{2}{N}}}, \tag{6.4}\] where \(\lambda_{1}(\tilde{\Omega}_{i})\) is the first eigenvalue of the Dirichlet Laplacian computed in the m.m.s. \((\overline{\Omega},\mathsf{d}_{\left|\overline{\Omega}\right|},\mathfrak{m}_ {\left|\overline{\Omega}\right|})\) (see Definition 2.15) and where \(C(\Omega)>0\) is a constant depending only on \(\Omega\) and \(N.\) Combining the two inequalities above we deduce that \(M(u)<+\infty\) and that \[M(u)\left(\frac{C(\Omega)}{\lambda_{k}^{\mathcal{N}}(\Omega)}\right)^{\frac{N}{ 2}}\leq\sum_{i=1}^{M(u)}\left(\frac{C(\Omega)}{\lambda_{1}(\tilde{\Omega}_{i} )}\right)^{\frac{N}{2}}\leq\sum_{i=1}^{M(u)}\mathfrak{m}(\tilde{\Omega}_{i}) \leq\mathfrak{m}(\overline{\Omega}),\] because the sets \(\tilde{\Omega}_{i}\) are pairwise disjoint. We define three sets: \[\mathcal{S}_{1}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\tilde{\Omega}_{i})\geq \delta\}\,,\] \[\mathcal{S}_{2}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\tilde{\Omega}_{i}\cap U _{\varepsilon,\eta})\geq\delta^{\prime}\mathfrak{m}(\tilde{\Omega}_{i})\}\,,\] \[\mathcal{S}_{3}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\tilde{\Omega}_{i}) \leq\delta,\mathfrak{m}(\tilde{\Omega}_{i}\cap U_{\varepsilon,\eta})\leq \delta^{\prime}\mathfrak{m}(\tilde{\Omega}_{i})\}\,.\] Notice that \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\mathcal{S}_{3}=\{1,...,M(u)\}\). Clearly \[\#\mathcal{S}_{1}\leq\frac{\mathfrak{m}(\overline{\Omega})}{\delta}. 
\tag{6.5}\] On the other hand, similarly as above, using (6.4) \[\#\mathcal{S}_{2}\cdot\delta^{\prime}\left(\frac{C(\Omega)}{\lambda_{k}^{\mathcal{N}}(\Omega)}\right)^{N/2}\leq\sum_{i\in\mathcal{S}_{2}}\delta^{\prime}\mathfrak{m}(\tilde{\Omega}_{i})\leq\sum_{i\in\mathcal{S}_{2}}\mathfrak{m}(\tilde{\Omega}_{i}\cap U_{\varepsilon,\eta})\leq\mathfrak{m}(U_{\varepsilon,\eta})\leq\eta\,,\] and thus \[\#\mathcal{S}_{2}\leq\left(\frac{\lambda_{k}^{\mathcal{N}}(\Omega)}{C(\Omega)}\right)^{\frac{N}{2}}\frac{\eta}{\delta^{\prime}}. \tag{6.6}\] Finally by Theorem 5.3 it holds that \(\lambda_{1}(\tilde{\Omega}_{i})\geq\lambda_{1}(\Omega_{i}^{*})(1-\varepsilon)\) for every \(i\in\mathcal{S}_{3}\), where \(\Omega_{i}^{*}\coloneqq B_{r}(0)\subset\mathbb{R}^{N}\) is the ball satisfying \(\mathfrak{m}(\tilde{\Omega}_{i})=\mathscr{L}^{N}(\Omega_{i}^{*})\). Therefore, combining this with (6.4) and recalling (5.1), for every \(i\in\mathcal{S}_{3}\) it holds \[\lambda_{k}^{\mathcal{N}}(\Omega)\geq\lambda_{1}(\tilde{\Omega}_{i})\geq(1-\varepsilon)\lambda_{1}(\Omega_{i}^{*})=(1-\varepsilon)\left(\frac{\omega_{N}}{\mathfrak{m}(\tilde{\Omega}_{i})}\right)^{\frac{2}{N}}j_{\frac{(N-2)}{N}}^{2},\] that is \(\mathfrak{m}(\tilde{\Omega}_{i})\geq(1-\varepsilon)^{\frac{N}{2}}\,\omega_{N}\,j_{\frac{(N-2)}{N}}^{N}\,\big(\lambda_{k}^{\mathcal{N}}(\Omega)\big)^{-\frac{N}{2}}\). Since the sets \(\tilde{\Omega}_{i}\) are pairwise disjoint, summing over \(i\in\mathcal{S}_{3}\) gives \[\#\mathcal{S}_{3}\leq\frac{\mathfrak{m}(\overline{\Omega})}{(1-\varepsilon)^{\frac{N}{2}}\,\omega_{N}\,j_{\frac{(N-2)}{N}}^{N}}\,\big(\lambda_{k}^{\mathcal{N}}(\Omega)\big)^{\frac{N}{2}}. \tag{6.7}\] Combining (6.5), (6.6) and (6.7) and recalling that \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\mathcal{S}_{3}=\{1,...,M(u)\}\), we obtain \[\frac{M(u)}{k}\leq\frac{\mathfrak{m}(\overline{\Omega})}{\delta\,k}+\left[\frac{\eta}{\delta^{\prime}\,C(\Omega)^{\frac{N}{2}}}+\frac{\mathfrak{m}(\overline{\Omega})}{(1-\varepsilon)^{\frac{N}{2}}\,\omega_{N}\,j_{\frac{(N-2)}{N}}^{N}}\right]\frac{\big(\lambda_{k}^{\mathcal{N}}(\Omega)\big)^{\frac{N}{2}}}{k}. \tag{6.8}\] Since this bound holds for every Neumann eigenfunction \(u\) of eigenvalue \(\lambda_{k}^{\mathcal{N}}(\Omega)\), passing to the limit superior as \(k\to+\infty\) and using the Weyl-type bound (3.2) of Theorem 3.2 (the first term on the right-hand side vanishes in the limit), we deduce \[\varlimsup_{k\to+\infty}\frac{M_{\Omega}^{\mathcal{N}}(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}\,\mathfrak{m}(\overline{\Omega})}\left[\frac{\eta}{\delta^{\prime}\,C(\Omega)^{\frac{N}{2}}}+\frac{\mathfrak{m}(\overline{\Omega})}{(1-\varepsilon)^{\frac{N}{2}}\,\omega_{N}\,j_{\frac{(N-2)}{N}}^{N}}\right].\] Since \(\delta^{\prime}\) does not depend on \(\eta\), letting first \(\eta\to 0\) and then \(\varepsilon\to 0\) gives \[\varlimsup_{k\to+\infty}\frac{M_{\Omega}^{\mathcal{N}}(k)}{k}\leq\frac{(2\pi)^{N}}{\omega_{N}^{2}\,j_{\frac{(N-2)}{N}}^{N}},\] which is the desired bound and concludes the proof in the Neumann case.
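For orientation (this numerical remark is an added illustration and not part of the original argument), in the planar case \(N=2\) the right-hand side of the last bound can be evaluated explicitly: \(\omega_{2}=\pi\) and the relevant Bessel zero is the first positive zero of \(J_{0}\), namely \(j_{0}\approx 2.4048\), so that \[\frac{(2\pi)^{2}}{\omega_{2}^{2}\,j_{0}^{2}}=\frac{4}{j_{0}^{2}}\approx 0.692,\] the classical Pleijel constant, which is in particular strictly smaller than \(1\).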
Proof of Theorem 1.1 in the Dirichlet case.: Let \(u\) be a Dirichlet eigenfunction in \(\Omega\) of eigenvalue \(\lambda_{k}^{\mathcal{D}}(\Omega)>0\) and let \(U\) be a uniform domain containing \(\Omega\), so that by Theorem 3.2 the m.m.s. \((\overline{U},\mathsf{d}_{|\overline{U}},\mathfrak{m}_{|\overline{U}})\) satisfies the hypotheses of Theorem 5.3. Moreover it is a direct verification that the function \(u\in L^{2}(\Omega)\) remains a Dirichlet eigenfunction in \(\Omega\) of eigenvalue \(\lambda_{k}^{\mathcal{D}}(\Omega)\) also in the new space \((\overline{U},\mathsf{d}_{|\overline{U}},\mathfrak{m}_{|\overline{U}})\). Indeed \(W_{0}^{1,2}(\Omega)\) viewed as a subset of \(L^{2}(\Omega)\) in the space \((\mathrm{X},\mathsf{d},\mathfrak{m})\) coincides with \(W_{0}^{1,2}(\Omega)\) viewed as a subset of \(L^{2}(\Omega)\) in the space \((\overline{U},\mathsf{d}_{|\overline{U}},\mathfrak{m}_{|\overline{U}})\), and the corresponding minimal w.u.g. also coincide (see [9, Prop. 6.4]). Fix now \(\varepsilon\in(0,1)\), \(\eta>0\) and let \(\delta=\delta(\mathrm{X},\varepsilon,N,\eta)>0\), \(\delta^{\prime}=\delta^{\prime}(\mathrm{X},\varepsilon,N)>0\) and \(U_{\varepsilon,\eta}\subset\mathrm{X}\) be the constants and the set given by Theorem 5.3 applied to \((\mathrm{X},\mathsf{d},\mathfrak{m})\). Recall that \(\mathfrak{m}(U_{\varepsilon,\eta})\leq\eta.\) From here the proof proceeds almost verbatim as in the Neumann case, by considering the sets \[\mathcal{S}_{1}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\Omega_{i})\geq\delta\}\,,\] \[\mathcal{S}_{2}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\Omega_{i}\cap U_{\varepsilon,\eta})\geq\delta^{\prime}\mathfrak{m}(\Omega_{i})\}\,,\] \[\mathcal{S}_{3}\coloneqq\{i\in 1,...,M(u)\ :\ \mathfrak{m}(\Omega_{i})\leq\delta,\,\mathfrak{m}(\Omega_{i}\cap U_{\varepsilon,\eta})\leq\delta^{\prime}\mathfrak{m}(\Omega_{i})\},\] where \(\{\Omega_{i}\}_{i=1}^{M(u)}\) is an enumeration of the nodal domains of the continuous representative of \(u\), and then exploiting the inequalities \[\lambda_{k}^{\mathcal{D}}(\Omega)\geq\lambda_{1}(\Omega_{i}),\quad\lambda_{1}(\Omega_{i})\geq\frac{C_{\mathrm{X}}}{\mathfrak{m}(\Omega_{i})^{\frac{2}{N}}},\quad\text{ for all }i\in 1,\dots,M(u),\] which hold by Proposition 5.1 and Proposition 6.1 (that we can apply again by Theorem 3.2), together with \[\lambda_{1}(\Omega_{i})\geq\lambda_{1}(\Omega_{i}^{*})(1-\varepsilon),\quad\text{for all }i\in\mathcal{S}_{3},\] (\(\Omega_{i}^{*}\coloneqq B_{r}(0)\subset\mathbb{R}^{N}\) being the ball satisfying \(\mathfrak{m}(\Omega_{i})=\mathscr{L}^{N}(\Omega_{i}^{*})\)) that holds by Theorem 5.3.

For completeness, we conclude with the proofs of the Corollaries 1.2 and 1.3, even if they are essentially already included in Theorem 1.1.

Proof of Corollary 1.2.: We apply Theorem 1.1 with \(\Omega=\mathrm{X}\). Notice that the choice is admissible since \(\mathrm{X}\) is assumed to be compact, and thus \(\Omega\) is trivially a uniform domain inside \(\mathrm{X}\). From the discussion in Section 2.1 we know that \(\Delta=\Delta_{\mathcal{N}}\) in this situation, and thus the result follows.

Proof of Corollary 1.3.: The metric measure space \((\mathbb{R}^{N},|\cdot|,\mathcal{H}^{N})\), where \(\mathcal{H}^{N}\) is the \(N\)-dimensional Hausdorff measure, is a non-collapsed \(\mathrm{RCD}(0,N)\) space (recall Remark 2.26).
Hence the result follows by applying Theorem 1.1, recalling also the compatibility between Neumann eigenfunctions in the metric setting and the usual ones (see Remark 2.7).
2307.02656
Revealing spoofing of classical radar using quantum noise
Electromagnetic remote sensing technologies such as radar can be misled by targets that generate spoof pulses. Typically, a would-be spoofer must make measurements to characterize a received pulse in order to design a convincing spoof pulse. The precision of such measurements is ultimately limited by quantum noise. Here we introduce a model of electromagnetic spoofing that includes effects of practical importance that were neglected in prior theoretical studies. In particular, the model includes thermal background noise and digital quantization noise, as well as loss in transmission, propagation, and reception. We derive the optimal probability of detecting a spoofer allowed by quantum physics. We show that heterodyne reception and thresholding closely approaches this optimal performance. Finally, we show that a high degree of certainty in spoof detection can be reached by Bayesian inference from a sequence of received pulses. Together these results suggest that a practically realizable receiver could plausibly detect a radar spoofer by observing errors in the spoof pulses due to quantum noise.
Jonathan N. Blakely, Shawn D. Pethel, Kurt Jacobs
2023-07-05T21:11:36Z
http://arxiv.org/abs/2307.02656v1
# Revealing spoofing of classical radar using quantum noise ###### Abstract Electromagnetic remote sensing technologies such as radar can be misled by targets that generate spoof pulses. Typically, a would-be spoofer must make measurements to characterize a received pulse in order to design a convincing spoof pulse. The precision of such measurements is ultimately limited by quantum noise. Here we introduce a model of electromagnetic spoofing that includes effects of practical importance that were neglected in prior theoretical studies. In particular, the model includes thermal background noise and digital quantization noise, as well as loss in transmission, propagation, and reception. We derive the optimal probability of detecting a spoofer allowed by quantum physics. We show that heterodyne reception and thresholding closely approaches this optimal performance. Finally, we show that a high degree of certainty in spoof detection can be reached by Bayesian inference from a sequence of received pulses. Together these results suggest that a practically realizable receiver could plausibly detect a radar spoofer by observing errors in the spoof pulses due to quantum noise. **Keywords:** quantum sensing, radar spoofing, quantum hypothesis testing, heterodyne receiver ## 1 Introduction It was recently shown that quantum mechanics fundamentally limits the ability to spoof electromagnetic pulses to fool a sensor [1]. Specifically, the measurement made by an adversary to characterize a pulse is generally insufficient to fully determine its quantum state. Thus, in principle, a friendly receiver can use knowledge of the transmitted quantum state to detect spoofs. A classic application of spoofing is where an airborne target emits spoof pulses to avoid being tracked by a ground-based radar [2, 3]. Spoofing also has non-adversarial applications in hardware-in-the-loop testing [4, 5, 6]. A limitation of the work in ref. [1] was the neglect of important practical considerations such as noise and loss. Clearly, a full understanding of the importance of quantum physics to real world spoofing requires a model that includes these effects. Here we introduce such a model including both thermal background noise and digital quantization noise, as well as loss in transmission, propagation, and reception. The model provides insight into the relative importance of these effects in comparison to the purely quantum limits on spoofing previously identified. We analyze the performance of a quantum optimal receiver in discriminating spoofs. We find that, on one hand, loss and thermal noise degrade the ability to detect spoofing, while on the other hand, quantization noise in the spoof pulses acts similarly to quantum noise thus increasing the ability to discriminate. Finally, we examine a realizable receiver architecture, heterodyne reception combined with a thresholding procedure, which is shown to closely approach quantum optimal performance. Altogether, these results suggest that even under realistic conditions of large loss and background noise a realizable receiver can detect spoofing errors due to quantum noise. To be clear, quantum noise-based spoof detection is not a practical approach to current spoofing technologies. These devices introduce a variety of errors and a quantity of classical noise that provide the basis for existing spoof detection methods [2; 3]. 
Rather, this work is forward looking to a future spoofing technology that can mimic a transmitted pulse with an accuracy approaching the quantum limit [1]. We introduce our model of spoof detection in Sec. 2.1. The model takes the form of a quantum hypothesis test deciding between the presence or absence of a spoofer. In Sec. 2.2, we determine the quantum optimal probability of discriminating between the hypotheses and present a specific architecture for realizing optimal detection. In Sec. 2.3, we analyze a more practically realizable detection scheme based on heterodyne reception and thresholding. In Sec. 2.4, we examine a specific radar application where detection using heterodyne detection closely approaches optimal performance. Figure 1: Schematic depictions of the hypotheses to be discriminated where (a) and (c) illustrate a true echo from a target and a target-generated spoof, respectively, and (b) and (d) show the quantum channel models of each hypothesis. See text for more explanation In Sec. 2.5, we show how Bayesian inference can be used to aggregate information from multiple received pulses to detect spoofing with near certainty. Lastly, in Sec. 3, we give concluding remarks. ## 2 Results ### A quantum model of radar spoofing We model spoofing as a hypothesis test undertaken by the operator of a friendly receiver who must decide if a received pulse in a specific range-Doppler bin is a true reflection from a target of interest (hypothesis \(H_{0}\)), or a spoof pulse generated by an adversary (hypothesis \(H_{1}\)). We assume the target is probed by a narrowband, transform-limited pulse represented by a coherent state of a single, generalized, temporal mode (ignoring consideration of the spatial field pattern, for simplicity). By design, the amplitude \(\alpha\) of the transmitted coherent state is a complex random variable with the zero-mean Gaussian probability density given by \[P(\alpha)=\frac{\lambda}{\pi}e^{-\lambda|\alpha|^{2}}, \tag{1}\] where \(\lambda\) is a positive constant. The value of \(\alpha\) is assumed to be known by the operator, but not by the adversary. Under hypothesis \(H_{0}\), the received pulse is a true reflection off a target of interest, as depicted in Fig. 1(a). In this case, the pulse suffers loss as it is radiated from a transmitting source (e.g. an antenna or a laser) with some degree of impedance mismatch, propagated out to the target and back, and is received by a detector (e.g. an antenna or photodetector). Thermal noise is added to the signal at transmission, reflection, and reception. We model these processes by a single-mode, lossy, Gaussian bosonic channel \(\mathcal{L}_{\tau,N_{T}}\) with total transmissivity \(\tau\) and mean noise photon number \(N_{T}\)[7]. The action of \(\mathcal{L}_{\tau,N_{T}}\) on an input Gaussian state with displacement vector \(\mathbf{x}\) and covariance matrix \(\mathbf{V}\) is the transformation \[\mathbf{x} \rightarrow\sqrt{\tau}\mathbf{x}, \tag{2}\] \[\mathbf{V} \rightarrow\tau\mathbf{V}+\left(1-\tau\right)\left(2N_{T}+1\right) \mathbf{I}. \tag{3}\] In the transformation of the covariance matrix, the first term represents the reduction of the size of fluctuations due to loss processes, while the second term represents fluctuations added by thermal noise. In what follows, it will be useful to let \(N_{T}=N_{T}^{\prime}/(1-\tau)\) where \(N_{T}^{\prime}\) is a fixed mean noise photon number independent of \(\tau\). Quantization noise is added upon digitization of the received signal. 
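To make the channel action of Eqs. (2)-(3) concrete, the following minimal numerical sketch (not from this paper; the parameter values and helper function name are illustrative placeholders) applies \(\mathcal{L}_{\tau,N_{T}}\) to a coherent-state input whose amplitude is drawn from the density of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters (not values from this paper)
lam = 1e-3                      # lambda in Eq. (1); E[|alpha|^2] = 1/lam
tau = 1e-6                      # total round-trip transmissivity
NT_prime = 50.0                 # fixed mean thermal photon number N_T'
NT = NT_prime / (1.0 - tau)     # N_T = N_T'/(1 - tau), as in the text

# Draw a transmitted amplitude alpha from the zero-mean Gaussian density of Eq. (1)
alpha = (rng.normal() + 1j * rng.normal()) * np.sqrt(1.0 / (2.0 * lam))

# Coherent-state input: displacement (alpha + alpha*, i(alpha* - alpha)) and covariance V = I
x_in = np.array([2.0 * alpha.real, 2.0 * alpha.imag])
V_in = np.eye(2)

def lossy_channel(x, V, t, NT):
    """Action of the lossy Gaussian channel L_{t, N_T}, Eqs. (2)-(3)."""
    return np.sqrt(t) * x, t * V + (1.0 - t) * (2.0 * NT + 1.0) * np.eye(2)

x_out, V_out = lossy_channel(x_in, V_in, tau, NT)
print(x_out)            # sqrt(tau) * x_in: the mean amplitude is strongly attenuated
print(np.diag(V_out))   # equals 2*N_T' + 1 on the diagonal: thermal noise dominates the fluctuations
```

In this convention the vacuum has \(\mathbf{V}=\mathbf{I}\), so the diagonal entries of the output covariance directly display the excess thermal fluctuations \(2N_{T}^{\prime}\); the quantization channel \(\mathcal{C}_{\xi}\) discussed next simply adds \(\xi\mathbf{I}\) on top of this covariance.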
Typically, quantization noise in high resolution digitization is modeled as uniformly distributed over the range \(E\) corresponding to the least significant bit, with zero mean and variance \(E/12\)[8]. For analytical convenience, it is here assumed that the quantization process is a classical Gaussian noise channel \(\mathcal{C}_{\xi}\) that adds Gaussian noise with variance \(\xi=E/12\) to the input signal. The action of \(\mathcal{C}_{\xi}\) on an input Gaussian state with displacement vector \(\mathbf{x}\) and covariance matrix \(\mathbf{V}\) is the transformation \(\mathbf{x}\rightarrow\mathbf{x}\), \(\mathbf{V}\rightarrow\mathbf{V}+\xi\mathbf{I}\)[7, 9]. The complete model under hypothesis \(H_{0}\), including the final measurement made by the receiver, is depicted in Fig. 1(b). Assuming the transmitted state is \(\hat{\rho}=\left|\alpha\right\rangle\left\langle\alpha\right|\), for which \[\mathbf{x}=\left[\begin{array}{c}\alpha+\alpha^{*}\\ i\left(\alpha^{*}-\alpha\right)\end{array}\right], \tag{4}\] and \(\mathbf{V}=\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix, the state measured by the receiver under hypothesis \(H_{0}\), i.e. \(\mathcal{C}_{\xi}\left(\mathcal{L}_{\tau,N_{T}}\left(\hat{\rho}\right)\right)\), has displacement vector \[\mathbf{x}_{0}=\sqrt{\tau}\mathbf{x} \tag{5}\] and covariance matrix \[\mathbf{V}_{0}=\left\{2N_{0}+1\right\}\mathbf{I}, \tag{6}\] where \[N_{0}=N_{T}^{\prime}+\xi/2 \tag{7}\] and \(\mathbf{I}\) is the identity matrix. The '0' subscripts in Eqs.(5) and (6) indicate that these quantities describe the quantum state at the receiver under hypothesis \(H_{0}\). Equivalently, this state can be represented by the density operator \[\hat{\rho}_{0}=\frac{1}{\pi N_{0}}\int d^{2}\alpha^{\prime}e^{-\frac{|\alpha^ {\prime}-\sqrt{\tau}\alpha|^{2}}{N_{0}}}\left|\alpha^{\prime}\right\rangle \left\langle\alpha^{\prime}\right|. \tag{8}\] Under hypothesis \(H_{1}\), the received pulse is a spoof, as represented in Fig. 1(c). We assume the spoof is generated by an adversary who has performed a single measurement on the transmitted state and aims to reproduce this state as closely as possible. We refer to this measure-and-prepare approach as _classical_ spoofing [1]. We model propagation from the transmitter to the spoofer, and from the spoofer to the receiver as two separate passes through the lossy channel \(\mathcal{L}_{\sqrt{\tau},N_{T}}\), which effects the transformation \(\mathbf{x}\rightarrow\tau^{1/4}\mathbf{x}\), \(\mathbf{V}\rightarrow\sqrt{\tau}\mathbf{V}+\left(1-\sqrt{\tau}\right)\left(2 N_{T}+1\right)\mathbf{I}\). When the output of the first channel is fed directly to the second channel, the result is equivalent to the single channel under hypothesis \(H_{0}\), i.e. \(\mathcal{L}_{\sqrt{\tau},N_{T}}\left(\mathcal{L}_{\sqrt{\tau},N_{T}}\left( \hat{\rho}\right)\right)=\mathcal{L}_{\tau,N_{T}}\left(\hat{\rho}\right)\). Thus, if the adversary were able to exactly copy the transmitted quantum state, the receiver would have no basis for discriminating a spoof from a real return. However, quantum physics does not allow the adversary to fully characterize the transmitted state with a single measurement. The optimal single measurement for estimation of the Gaussian-distributed mean amplitude of a noisy coherent state such as is received by the adversary, is heterodyne detection [10]. Thus, we assume the adversary makes a heterodyne measurement of the complex amplitude. 
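As a numerical sanity check of the channel algebra above, the following short Python sketch (our own illustration, with arbitrarily chosen parameter values rather than the paper's example) applies the transformations of Eqs. (2)-(3) and the classical quantization-noise channel to a coherent-state input and confirms that the result reproduces Eqs. (5)-(7).

```python
import numpy as np

def lossy_channel(x, V, tau, N_T):
    """Apply the lossy Gaussian channel L_{tau, N_T} of Eqs. (2)-(3)."""
    x_out = np.sqrt(tau) * x
    V_out = tau * V + (1 - tau) * (2 * N_T + 1) * np.eye(2)
    return x_out, V_out

def quantization_channel(x, V, xi):
    """Apply the classical Gaussian noise channel C_xi: adds xi*I to the covariance."""
    return x, V + xi * np.eye(2)

# Coherent-state input |alpha>, Eq. (4): x = (2 Re(alpha), 2 Im(alpha)), V = I.
alpha = 1.5 + 0.5j
x = np.array([2 * alpha.real, 2 * alpha.imag])
V = np.eye(2)

tau, N_T_prime, xi = 1e-6, 32.0, 0.1       # illustrative values only
N_T = N_T_prime / (1 - tau)                # N_T = N_T' / (1 - tau)

x0, V0 = quantization_channel(*lossy_channel(x, V, tau, N_T), xi)
N0 = N_T_prime + xi / 2                    # Eq. (7)
assert np.allclose(x0, np.sqrt(tau) * x)                  # matches Eq. (5)
assert np.allclose(V0, (2 * N0 + 1) * np.eye(2))          # matches Eq. (6)
```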
Heterodyne detection has a long history in quantum optics, but is also essentially the operation performed by a coherent radar receiver insofar as the received signal is mixed down to an intermediate frequency and then input to a quadrature detector and matched filters that output the real and imaginary parts of the complex amplitude. We further allow for the introduction of quantization noise by the adversary as the quadrature signals are typically digitized. Ideal heterodyne detection realizes the positive operator-valued measure with measurement operators \(\left|\beta\right\rangle\left\langle\beta\right|/\sqrt{\pi}\)[11]. The statistics for heterodyne measurement on the output of the lossy channel representing propagation from the transmitter to the spoofer with added quantization noise, i.e. \(\mathcal{C}_{\xi^{\prime}}\left(\mathcal{L}_{\sqrt{\tau},N_{T}}\left(\hat{ \rho}\right)\right)\), are described by the probability density \[P(\beta) =\mathrm{tr}\left[\frac{\left|\beta\right\rangle\left\langle \beta\right|}{\pi}\mathcal{C}_{\xi^{\prime}}\left(\mathcal{L}_{\sqrt{\tau},N_ {T}}\left(\hat{\rho}\right)\right)\right] \tag{9}\] \[=\frac{\exp\left(-\frac{\left|\tau^{1/4}\alpha-\beta\right|^{2}} {\left(1+\sqrt{\tau}\right)^{-1}N_{T}^{\prime}+\xi^{\prime}/2+1}\right)}{\pi \left[\left(1+\sqrt{\tau}\right)^{-1}N_{T}^{\prime}+\xi^{\prime}/2+1\right]} \tag{10}\] where \(\beta\) is the complex measurement outcome. The variance of the additive quantization noise is \(\xi^{\prime}\), which is generally not equal to that of the friendly receiver, \(\xi\). The quantization noise levels are different for these two receivers because they are typically receiving signals of very different amplitudes. The adversary generates a spoof pulse in the same generalized temporal mode with complex amplitude \(\beta\) and it passes through the lossy channel \(\mathcal{L}_{\sqrt{\tau},N_{T}}\) representing the path from the adversary to the friendly receiver. The receiver is assumed to introduce quantization noise upon reception, resulting in the state \(\mathcal{C}_{\xi}\left(\mathcal{L}_{\sqrt{\tau},N_{T}}\left(\left|\beta \right\rangle\left\langle\beta\right|\right)\right)\). It is assumed that the receiver knows the adversary's measurement statistics, but not the measurement outcome \(\beta\). Thus, the state of the pulse at the receiver is a mixture of coherent states weighted by the density Eq.(10) as expressed by the displacement vector \[\mathbf{x}_{1}=\mathbf{x}_{0}, \tag{11}\] and the covariance matrix \[\mathbf{V}_{1}=\mathbf{V}_{0}+2\sqrt{\tau}\left(1+\xi^{\prime}/2\right) \mathbf{I}. \tag{12}\] The subscripts in Eqs.(11) and (12) indicate that these quantities describe the quantum state under hypothesis \(H_{1}\). Equivalently, this state can be represented by the density operator \[\hat{\rho}_{1}=\frac{1}{\pi N_{1}}\int d^{2}\alpha^{\prime}e^{-\frac{|\alpha^ {\prime}-\sqrt{\tau}\alpha|^{2}}{N_{1}}}\left|\alpha^{\prime}\right\rangle \left\langle\alpha^{\prime}\right| \tag{13}\] where \[N_{1}=N_{T}^{\prime}+\xi/2+\sqrt{\tau}\left(1+\xi^{\prime}/2\right). \tag{14}\] Upon reception, a decision must be made as to whether a received pulse is most consistent with the state specified by Eqs.(5) and (6) under hypothesis \(H_{0}\) or by Eqs.(11) and (12) under hypothesis \(H_{1}\). Comparing Eqs.(5) and (11), it can be concluded that the displacement vector provides no basis for a decision because it is the same under both hypotheses. 
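The mixture described by Eqs. (11)-(14) can likewise be checked by simulation. The sketch below is again our own code with illustrative parameter values: it samples the adversary's heterodyne outcome from Eq. (10), treats each resulting spoof coherent state as it arrives at the receiver after the channels \(\mathcal{L}_{\sqrt{\tau},N_{T}}\) and \(\mathcal{C}_{\xi}\), and compares the empirical quadrature covariance of the mixture with \((2N_{1}+1)\mathbf{I}\).

```python
import numpy as np

rng = np.random.default_rng(0)
tau, N_Tp, xi, xi_p = 1e-2, 32.0, 0.1, 0.2      # illustrative values, not the paper's example
alpha = 1.5 + 0.5j

# Sample the adversary's heterodyne outcomes beta according to Eq. (10).
s_beta = N_Tp / (1 + np.sqrt(tau)) + xi_p / 2 + 1          # complex-Gaussian variance of beta
beta = (tau ** 0.25 * alpha
        + np.sqrt(s_beta / 2) * (rng.standard_normal(200_000)
                                 + 1j * rng.standard_normal(200_000)))

# For each beta, the spoof |beta> reaches the receiver through L_{sqrt(tau),N_T} and C_xi:
# its displacement is tau^{1/4} * (2 Re beta, 2 Im beta) and its covariance is (2*N_comp + 1) I.
N_comp = N_Tp / (1 + np.sqrt(tau)) + xi / 2
disp = np.stack([tau ** 0.25 * 2 * beta.real, tau ** 0.25 * 2 * beta.imag])

# Covariance of the mixture = component covariance + covariance of the random displacement.
V1_est = (2 * N_comp + 1) * np.eye(2) + np.cov(disp)
N1 = N_Tp + xi / 2 + np.sqrt(tau) * (1 + xi_p / 2)         # Eq. (14)
print(np.round(V1_est, 3), 2 * N1 + 1)                     # diagonal entries approach 2*N_1 + 1
```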
The second term on the right hand side of Eq.(12) does provide a basis for a decision. The first term in parentheses in this equation represents the quantum noise in the heterodyne measurement outcome. One half of this noise is attributable to quantum noise in the transmitted coherent state. The other half is quantum noise associated with the Heisenberg uncertainty relation between the real and imaginary field quadratures in the course of an ideal heterodyne measurement. The second term in parentheses in Eq.(12) represents the noise added by the adversary through digital quantization. Interestingly, the adversary's quantum and classical noise enter the discrimination problem in the same manner even though their physical origins are distinct. Having now framed spoof detection as a hypothesis test, we next turn to the analysis of specific measurement strategies that the receiver operator might adopt when seeking to detect the presence of a spoofer. In the sections that follow, the optimal measurement strategy allowed by quantum mechanics will be examined, as well as a practically realizable strategy that closely approaches the optimum. ### Quantum Optimal Detection of Spoofing Quantum detection theory enables the calculation of the probability of successful detection assuming the receiver executes the measurement and decision criterion that minimizes the Bayesian total probability of error over all positive operator-valued measures [12]. In this section, we examine this optimal performance and the receiver architecture that would achieve it. Throughout this section, the Bayesian prior probability that a pulse is a spoof is assumed to be 0.5. It is straightforward to generalize the results that follow to allow for other values of this probability, but for the sake of clarity, only the one case will be discussed. Letting \(P_{\textrm{opt}}\) denote the probability of choosing the hypothesis that corresponds to the truth using the optimal receiver, then \[P_{\textrm{opt}}=\frac{1}{2}\left(1+\frac{1}{2}||\hat{\rho}_{1}-\hat{\rho}_{0 }||_{1}\right), \tag{15}\] assuming equal Bayesian prior probabilities for the two hypotheses, equal costs for all types of error, and where \(||\cdot||_{1}\) denotes the trace norm [12]. We can obtain a fairly simple expression for \(P_{\textrm{opt}}\) by noting that it is unchanged if we apply a unitary transformation to both \(\hat{\rho}_{0}\) and \(\hat{\rho}_{1}\). Since according to Eq.(11) both states have the same displacement vector (phase space centroid), we can apply a displacement transformation to reduce the displacement vectors of both to zero while leaving the variances unchanged. This unitary transformation does not affect \(P_{\textrm{opt}}\), but the resulting states are then thermal states and are thus diagonal in the Fock basis. Following Helstrom [13], the optimal probability of successful discrimination for any value of \(\alpha\) is then \[P_{\textrm{opt}} =\frac{1}{2}\frac{1}{N_{0}+1}\sum_{n=0}^{m}\left(\frac{N_{0}}{N_ {0}+1}\right)^{n}\] \[+\frac{1}{2}\frac{1}{N_{1}+1}\sum_{n=m}^{\infty}\left(\frac{N_{1} }{N_{1}+1}\right)^{n}, \tag{16}\] with \[m=\textrm{floor}\left\{\frac{\ln\frac{N_{1}+1}{N_{0}+1}}{\ln\left[\frac{N_{1 }(N_{0}+1)}{N_{0}(N_{1}+1)}\right]}\right\}. \tag{17}\] For the \(\alpha=0\) case, Helstrom found optimal discrimination could be performed by photon counting followed by comparison to a threshold of value \(m\)[13]. 
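Equations (16) and (17) involve two geometric series with simple closed forms, so \(P_{\textrm{opt}}\) is easy to evaluate numerically. The following sketch is our own summary of that calculation (it is not code from the original work); the inputs are just the two noise photon numbers \(N_{0}\) and \(N_{1}\).

```python
import numpy as np

def p_opt(N0, N1):
    """Optimal success probability for discriminating the two thermal states, Eqs. (16)-(17)."""
    m = np.floor(np.log((N1 + 1) / (N0 + 1))
                 / np.log(N1 * (N0 + 1) / (N0 * (N1 + 1))))          # Eq. (17)
    # The two geometric sums in Eq. (16) have closed forms:
    term0 = 1 - (N0 / (N0 + 1)) ** (m + 1)     # (1/(N0+1)) * sum_{n=0}^{m} [N0/(N0+1)]^n
    term1 = (N1 / (N1 + 1)) ** m               # (1/(N1+1)) * sum_{n=m}^{inf} [N1/(N1+1)]^n
    return 0.5 * (term0 + term1)

# Example usage: p_opt(32.0, 32.5) gives the per-pulse success probability for those noise levels.
```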
It follows that for \(\alpha\neq 0\), optimal discrimination can be performed by a receiver that first displaces the received signal by \(\alpha\) and then counts photons and compares to the threshold. In the context of microwaves, the displacement can be realized by homodyne down conversion. In principle, photon counting could be done on the resulting baseband signal. Unfortunately, existing single photon detectors in the microwave regime have low quantum efficiencies [14; 15]. Thus, we next analyze heterodyne detection and thresholding, a currently realizable architecture. Importantly, this approach will be shown to perform close to optimally. ### Detection of Spoofing with Heterodyne Reception Consider a receiver that makes a heterodyne measurement whose outcome is a complex amplitude that is compared to a threshold to discriminate the two hypotheses. Under hypothesis \(H_{k}\), with \(k=0,1\), the heterodyne measurement outcome \(\beta\) is a random variable with probability density [11] \[P(\beta|H_{k})=\mathrm{tr}\left(\frac{\left|\beta\right\rangle\left\langle \beta\right|}{\pi}\hat{\rho}_{k}\right)=\frac{e^{-\left|\beta-\sqrt{\tau} \alpha\right|^{2}/(N_{k}+1)}}{\pi\left(N_{k}+1\right)}, \tag{18}\] We introduce a threshold \(\mu\) such that if \(\left|\beta\right|\leq\mu\) we select hypothesis \(H_{0}\), and conversely if \(\left|\beta\right|>\mu\) we select hypothesis \(H_{1}\). The set of \(\beta\) values satisfying the former condition, which we will refer to as \(Z_{0}\), is a filled circle (a disk) with radius \(\mu\) centered on \(\sqrt{\tau}\alpha\). The set satisfying the latter condition, referred to as \(Z_{1}\), is the rest of the complex plane. The probability of success in choosing the true hypothesis, \(P_{\mathrm{het}}\), is the sum of the probability of choosing \(H_{0}\) when it is true and the probability of choosing \(H_{1}\) when it is true. Mathematically, this is \[P_{\mathrm{het}} =\frac{1}{2}\int_{Z_{0}}d^{2}\beta P(\beta|H_{0})+\frac{1}{2} \int_{Z_{1}}d^{2}\beta P(\beta|H_{1}) \tag{19}\] \[=\frac{1}{2}\left(1-e^{-\mu^{2}/(N_{0}+1)}\right)+\frac{1}{2}e^{ -\mu^{2}/(N_{1}+1)}, \tag{20}\] where, again, an assumption of equal prior probabilities has been made. It follows that the value of the threshold \(\mu\) that optimizes \(P_{\mathrm{het}}\) is equal to the magnitude of \(\beta\) where the curves \(P(\beta|H_{0})\) and \(P(\beta|H_{1})\) intersect. Specifically, the optimal threshold is \[\mu_{\rm opt}=\sqrt{\frac{N_{0}+1}{1-\frac{N_{0}+1}{N_{1}+1}}\ln\left(\frac{N_{1 }+1}{N_{0}+1}\right)}. \tag{21}\] In the next section, we will compare this detection scheme with optimal detection in a specific application. ### An Example As a specific example, we use the parameters of a W-band radar defined in Refs. [16, 17] where \[\tau=\left(\frac{G_{T}}{4\pi R^{2}}\right)\left(\frac{\sigma A_{R}}{4\pi R^{2 }}\right), \tag{22}\] with \(G_{T}=A_{R}/\left(2\pi c/\omega_{0}\right)^{2}\). Here \(G_{T}\) is the radar antenna gain, \(A_{R}=1\) m\({}^{2}\) is its effective area, \(\sigma=0.01\) m\({}^{2}\) is the target cross section, \(\omega_{0}/2\pi=100\) GHz is the pulse center frequency, and \(c\) is the speed of light. The mean noise photon number \(N_{T}^{\prime}=32\), corresponding to a receiver noise temperature of 150 K. To determine the magnitude of the quantization noise, we note that according to Eq. (1) the real and imaginary parts of the mean complex amplitude of the transmitted pulse are zero mean random variables with variance \((2\lambda)^{-1}\). 
So the average mean photon number in such pulses is \((2\lambda)^{-1}\). The signal under hypothesis \(H_{0}\) passes through the channel \(\mathcal{L}_{\tau,N_{T}}\) before arriving at the receiver. The signal would emerge from this channel with an average mean photon number \(\tau/(2\lambda)\). We assume this signal is quantized at the receiver with \(n\) bits of resolution such that the least significant bit corresponds to a range \(E\approx 2^{-n}\tau/(2\lambda)\) with units of photon number. The variance of the quantization noise is then taken to be \(\xi=E/12\). The value of \((2\lambda)^{-1}\) is chosen by assuming the pulse width \(T=1\,\mu\)s and the average power \(P_{\rm ave}=10\) kW, giving an average pulse energy of \(10^{-2}\) J. Under the assumption of narrow bandwidth, the energy per photon is approximately \(\hbar\omega_{0}\). Then the effective mean photon number for quantization noise at the radar receiver is \[\xi\approx\tau\frac{2^{-n}TP_{\rm{ave}}}{12\hbar\omega_{0}} \tag{23}\] A common value for \(n\) in existing microwave technology is 10, giving \(\xi\approx 9\times 10^{4}\) at a range of 1 km. But due to the rapid increase of loss with increasing range, \(\xi\) drops to approximately 1 at 17 km. Beyond this range, the quantization noise is small compared to the quantum noise in this model. Existing ultra high resolution analog-to-digital converters can have \(n=32\), giving \(\xi\approx 1\) at just 375 m. The quantization noise introduced by the spoofer will necessarily have larger variance than \(\xi\) because the spoofer receives the signal after passing through the less lossy channel \(\mathcal{L}_{\sqrt{\tau},N_{T}}\). By the same reasoning as above \[\xi^{\prime}\approx\sqrt{\tau}\frac{2^{-n}TP_{\rm{ave}}}{12\hbar\omega_{0}}. \tag{24}\] In this case, with \(n=10\), \(\xi^{\prime}\) falls to approximately 1 at the impractical distance of 180,000 km, and with \(n=32\), \(\xi^{\prime}\approx 1\) at 88 km. Beyond this range, the spoofer can be said to be limited chiefly by quantum noise. With all the model parameters now set, we first examine the performance of optimal spoof detection. The optimal probability of successful discrimination, as given by Eq.(16), is shown as a function of range in Figure 2. Figure 2: The probability of successfully discriminating between true and spoofed pulses for an optimal receiver with quantization noise due to digitization with bit resolution \(n=32\) (dashed red line) and with no quantization noise (solid blue line). Since the prior probability of spoofing is 0.5, the probability of successful discrimination before transmitting any signal is also 0.5. Thus, in the figure, 0.5 is subtracted from \(P_{\rm{opt}}\) to emphasize the increase due to the gain of information from reception and measurement of a pulse. The blue line is the probability with infinite bit resolution, i.e., \(\xi=\xi^{\prime}=0\). The non-zero value (after subtracting 0.5) indicates that, in principle, quantum noise alone provides a sufficient physical basis for detecting the spoofer. Importantly, since the spoofer is assumed to employ the quantum optimal measurement for estimating the transmitted quantum state, no other measure-and-prepare strategy can be devised to eliminate this physical basis. Thus, exploitation of quantum noise here provides a kind of quantum advantage in spoof detection. The dashed red line in Fig. 2 is the success probability with a bit resolution of \(n=32\) bits.
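The range dependence underlying Figs. 2 and 3 comes entirely from Eqs. (22)-(24) together with Eqs. (7) and (14). The following sketch is our own code (standard SI constants, with the example parameter values quoted above); feeding the resulting \(N_{0}\) and \(N_{1}\) into the expressions for \(P_{\textrm{opt}}\) and \(P_{\textrm{het}}\) reproduces the qualitative behaviour of the figures.

```python
import numpy as np

hbar, c = 1.054571817e-34, 2.99792458e8     # SI constants
omega0 = 2 * np.pi * 100e9                  # pulse center frequency (rad/s)
A_R, sigma = 1.0, 0.01                      # antenna effective area and target cross section (m^2)
T, P_ave = 1e-6, 10e3                       # pulse width (s) and average power (W)
N_Tp, n = 32.0, 32                          # thermal noise photons and bit resolution

def radar_parameters(R):
    """tau, xi, xi', N_0, N_1 for a range R in metres, per Eqs. (22)-(24), (7) and (14)."""
    G_T = A_R / (2 * np.pi * c / omega0) ** 2
    tau = (G_T / (4 * np.pi * R ** 2)) * (sigma * A_R / (4 * np.pi * R ** 2))   # Eq. (22)
    xi = tau * 2.0 ** (-n) * T * P_ave / (12 * hbar * omega0)                   # Eq. (23)
    xi_p = np.sqrt(tau) * 2.0 ** (-n) * T * P_ave / (12 * hbar * omega0)        # Eq. (24)
    N0 = N_Tp + xi / 2                                                          # Eq. (7)
    N1 = N0 + np.sqrt(tau) * (1 + xi_p / 2)                                     # Eq. (14)
    return tau, xi, xi_p, N0, N1

for R in (1e3, 10e3, 100e3):
    tau, xi, xi_p, N0, N1 = radar_parameters(R)
    print(f"R = {R/1e3:5.0f} km  tau = {tau:.2e}  xi = {xi:.2e}  xi' = {xi_p:.2e}  N1 - N0 = {N1 - N0:.2e}")
```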
As range increases, the \(n=32\) probability approaches the probability with no quantization noise (\(\xi=\xi^{\prime}=0\)). This trend illustrates the decreasing relative importance of classical quantization noise versus quantum noise at long ranges. Optimal performance can be compared to that of heterodyne reception and thresholding. For \(n=32\), the probability, \(P_{\rm{opt}}\), is shown (solid blue line) along with the corresponding success probability for heterodyne reception, \(P_{\rm{het}}\) (dashed red line), in Fig. 3. Figure 3: The probability of successfully discriminating between true and spoofed pulses with quantization noise due to digitization with bit resolution \(n=32\) for an optimal receiver (solid blue line) and a heterodyne receiver with threshold detection (dashed red line). Importantly, the more practical heterodyne detection scheme closely approaches the performance of optimal detection. The inset shows how the former falls just short of the latter. With either detection method, the success probability is very small at most ranges. For example, at a range of 10 km, \(P_{\rm het}-0.5\) for this receiver is approximately \(10^{-8}\). One might conclude that the increase in success probability over the prior probability would be too small to be of practical use in many applications. However, even a very small increase can be exploited by aggregating information from multiple transmissions through a process such as Bayesian inference, as described in the following section [1]. ### Bayesian Inference from Multiple Pulses The small effect of quantum noise added by an adversary can be exploited by aggregating the information collected from multiple pulses, each with a different random amplitude. Previously, Bayesian inference was used to update the prior probabilities in a noise-free, loss-free spoofing model for a binary phase shift keying signal set [1]. Here we apply the same approach to the current model of spoofing with heterodyne reception and threshold detection. Bayesian inference involves updating the prior probability after each new measurement outcome [18]. Let \(P_{0}\) (\(P_{1}\)) be the prior probability of hypothesis \(H_{0}\) (\(H_{1}\)), respectively, after \(M\) measurements. It is shown in Sec. 4 that the difference between the prior probabilities after \(M\gg 1\) trials will on average take the value \[\langle|P_{1}-P_{0}|\rangle\approx\frac{|1-e^{M\Delta_{0}(\Delta_{0}-\Delta_{1})}|}{1+e^{M\Delta_{0}(\Delta_{0}-\Delta_{1})}}, \tag{25}\] where \[\Delta_{0}=2e^{-\mu^{2}/(N_{0}+1)}-1, \tag{26}\] and \[\Delta_{1}=2e^{-\mu^{2}/(N_{1}+1)}-1. \tag{27}\] Figure 4: Mean difference in prior probabilities as a function of number of pulses received. The prior probabilities are updated after each new pulse is received according to the procedure of Bayesian inference. The range is 1 km and the bit resolution is 32. This approximation to the mean difference in probabilities as a function of \(M\) is shown to approach unity at large \(M\) in Fig. 4 for the example parameters of Sec. 2.4 (and, in particular, \(n=32\)). This result means that certainty is approached by one of the two hypotheses when enough pulses have been received. For example, \(\langle|P_{1}-P_{0}|\rangle>0.95\) after about \(6\times 10^{5}\) pulses. To achieve a desired value of \(\langle|P_{1}-P_{0}|\rangle\) near one, the required number of samples is \[M\approx\frac{1}{\Delta_{0}(\Delta_{0}-\Delta_{1})}\ln\frac{1+\langle|P_{1}-P_{0}|\rangle}{1-\langle|P_{1}-P_{0}|\rangle}.
\tag{28}\] Dividing this number by a pulse repetition rate would give the required dwell time on target to achieve a desired average level of certainty. Figure 5 shows the dwell time as a function of range for the example parameters assuming a desired \(\langle|P_{1}-P_{0}|\rangle\) of 0.9 and a pulse repetition rate of 500 kHz. Figure 5: Dwell time to reach a mean difference in prior probabilities of 0.9 as a function of range with a pulse repetition rate of 500 kHz. At 1 km, the required dwell time is about 120 ms. During such an interval, a target with a velocity as high as \(10^{2}\) m/s would not move by a significant fraction of the range of 1 km. ## 3 Discussion In this article, we have shown that a practically realizable receiver could plausibly detect a radar spoofer by observing errors in the spoof pulses due to quantum noise. In practice, information from many pulses would have to be aggregated to reach a meaningful degree of certainty, but in an example application this requirement was shown to be achievable. This exploitation of quantum noise constitutes a kind of quantum advantage in spoof detection. To arrive at these results, we introduced a new model of radar spoofing that includes noise and loss. Key assumptions of the model were (1) the set of signals used by the radar (specifically, coherent states with Gaussian-distributed amplitudes), and (2) the limitation of the spoofer to a measure-and-prepare strategy. Extensions of this work could explore the consequences of modifying either of these assumptions. On the one hand, expanding the set of possible signals which the spoofer must discriminate could enhance the radar operator's ability to detect the spoofer. On the other hand, spoofing strategies that exploit more of the information available in the received quantum state than is extracted by a single measurement might allow for more deceptive spoofing. Our current work is pursuing both of these threads. ## 4 Methods ### Convergence of Bayesian Inference Here we derive Eq.(25) assuming the radar transmits \(M\) pulses, each with an independent, randomly chosen amplitude. Under either hypothesis, the radar operator's measurement has two possible outcomes: a determination that the received pulse is either a true return or a spoof. Let the symbols \(-\) and \(+\) indicate the measurement outcomes corresponding to a true return and a spoof, respectively. In general, if \(H_{i}\) is true (where \(i\) is either 0 or 1), then the probabilities of the two outcomes are \[P(\pm|H_{i})=\frac{1}{2}\pm\frac{\Delta_{i}}{2}=\frac{1}{2}\left(1\pm\Delta_{i}\right), \tag{29}\] where \(-1\leq\Delta_{i}\leq 1\), and the specific value of \(\Delta_{i}\) depends on the particular choice of measurement. These probabilities are known as _likelihood_ functions. For heterodyne detection and thresholding, \[P(-|H_{i})=\int_{Z_{0}}P(\beta|H_{i})\;d^{2}\beta,\qquad P(+|H_{i})=\int_{Z_{1}}P(\beta|H_{i})\;d^{2}\beta, \tag{30}\] from which follow \(\Delta_{0}\) and \(\Delta_{1}\) as given by Eqs.(26) and (27), respectively. Let the prior probability of the hypothesis \(H_{i}\) before the first pulse is received be \(P_{0}(H_{i})\). After the \(n^{\rm th}\) pulse is received and measured, our new state-of-knowledge is obtained by multiplying the prior probabilities by the corresponding likelihood function and normalizing the result [18].
If we leave off the normalization (which we can always do after all \(M\) measurements have been made) the prior probabilities after the \(n^{\rm th}\) pulse is received are \[P_{n}(H_{i})=P(\pm|H_{i})P_{n-1}(H_{i}), \tag{31}\] \[=\frac{1}{2}\left(1\pm\Delta_{i}\right)P_{n-1}(H_{i}), \tag{32}\] \[\approx\frac{1}{2}\exp\left(\pm\Delta_{i}\right)P_{n-1}(H_{i}), \tag{33}\] where the approximation in the last line is valid to the extent that \(\Delta_{i}\ll 1\). Repeating this procedure for \(M\) measurements, and still without normalizing, we have \[P_{M}(H_{i})\approx\frac{1}{2^{M}}\exp\left(\Delta_{i}\sum_{n=1}^{M}x_{n}\right)P_{0}(H_{i}) \tag{34}\] where \(x_{n}=1\) if the outcome of the \(n^{\rm th}\) measurement indicates a spoof and \(x_{n}=-1\) if it indicates a true return. Now assuming equal initial prior probabilities and normalization, the prior probabilities conditioned on the random variable \(X\equiv\sum_{n}x_{n}\) are \[P_{M}(H_{0}|X)=\frac{\exp\left(\Delta_{0}X\right)}{\exp\left(\Delta_{0}X\right)+\exp\left(\Delta_{1}X\right)} \tag{35}\] \[P_{M}(H_{1}|X)=\frac{\exp\left(\Delta_{1}X\right)}{\exp\left(\Delta_{0}X\right)+\exp\left(\Delta_{1}X\right)} \tag{36}\] A measure of our average certainty as to which hypothesis is true is \[\langle|P_{M}(H_{1}|X)-P_{M}(H_{0}|X)|\rangle, \tag{37}\] where the average is over all possible sets of measurement outcomes \(\{x_{n}\}\). To evaluate this average, we need the distribution for \(X\) under each hypothesis. Since \(X\) is the sum of independent random variables it will be Gaussian for large enough \(M\). Under the hypothesis \(H_{i}\) this Gaussian random variable has mean and variance \[m_{i}=\sum_{n=1}^{M}\left[\frac{1}{2}\left(1+\Delta_{i}\right)-\frac{1}{2}\left(1-\Delta_{i}\right)\right]=\sum_{n=1}^{M}\Delta_{i}=M\Delta_{i} \tag{38}\] \[V_{i}=\sum_{n=1}^{M}\left[\left(1-\Delta_{i}\right)^{2}\frac{1}{2}\left(1+\Delta_{i}\right)+\left(1+\Delta_{i}\right)^{2}\frac{1}{2}\left(1-\Delta_{i}\right)\right]=M\left(1-\Delta_{i}^{2}\right). \tag{39}\] The Gaussian distribution under hypothesis \(H_{i}\) is then \[P_{i}(x)=\frac{1}{\sqrt{2\pi V_{i}}}\exp\left[-\frac{(x-m_{i})^{2}}{2V_{i}}\right] \tag{40}\] The total distribution for \(X\) is \[P(x)=P_{0}(H_{0})\frac{1}{\sqrt{2\pi V_{0}}}\exp\left[-\frac{(x-m_{0})^{2}}{2V_{0}}\right]+P_{0}(H_{1})\frac{1}{\sqrt{2\pi V_{1}}}\exp\left[-\frac{(x-m_{1})^{2}}{2V_{1}}\right] \tag{41}\] With equal initial prior probabilities, the distribution for \(X\) is \[P(x)=\frac{1}{\sqrt{8\pi V_{0}}}\exp\left[-\frac{(x-m_{0})^{2}}{2V_{0}}\right]+\frac{1}{\sqrt{8\pi V_{1}}}\exp\left[-\frac{(x-m_{1})^{2}}{2V_{1}}\right] \tag{42}\] Thus our certainty measure, defined in Eq. 37, averaged over all possible measurement results is \[\langle|P_{M}(H_{1}|X)-P_{M}(H_{0}|X)|\rangle=\frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty}\left|\frac{e^{\Delta_{1}x}-e^{\Delta_{0}x}}{e^{\Delta_{0}x}+e^{\Delta_{1}x}}\right|\left(\frac{1}{\sqrt{V_{0}}}\exp\left[-\frac{(x-m_{0})^{2}}{2V_{0}}\right]+\frac{1}{\sqrt{V_{1}}}\exp\left[-\frac{(x-m_{1})^{2}}{2V_{1}}\right]\right)dx \tag{43}\] The Gaussian functions in parentheses in the integrand act as sampling functions that pick out the value of the preceding factor at \(x=\Delta_{0}M\) and \(x=\Delta_{1}M\). Then, since \(\Delta_{0}\approx\Delta_{1}\), Eq.(25) follows. The notation in Sec. 2.5 is simplified by using \(P_{i}\) to mean \(P_{M}(H_{i}|X)\).
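The derivation above is straightforward to cross-check numerically. The sketch below is our own code, not from the original work: it evaluates Eq. (28) and compares the closed-form approximation of Eq. (25), rewritten as \(|\tanh(M\Delta_{0}(\Delta_{0}-\Delta_{1})/2)|\), against a direct Monte Carlo simulation of the un-normalized Bayesian updates under the assumption that \(H_{0}\) is true.

```python
import numpy as np

rng = np.random.default_rng(1)

def deltas(N0, N1):
    """Delta_0 and Delta_1 of Eqs. (26)-(27), using the optimal threshold of Eq. (21)."""
    mu2 = (N0 + 1) / (1 - (N0 + 1) / (N1 + 1)) * np.log((N1 + 1) / (N0 + 1))
    return 2 * np.exp(-mu2 / (N0 + 1)) - 1, 2 * np.exp(-mu2 / (N1 + 1)) - 1

def pulses_needed(N0, N1, target=0.9):
    """Number of pulses M from Eq. (28); dwell time is M divided by the pulse repetition rate."""
    d0, d1 = deltas(N0, N1)
    return np.log((1 + target) / (1 - target)) / (d0 * (d0 - d1))

def mean_certainty_mc(N0, N1, M, trials=100_000):
    """Monte Carlo estimate of <|P_1 - P_0|> after M pulses, assuming H_0 is true.
    X is the sum of the +/-1 outcomes of Eq. (29); Eqs. (35)-(36) give
    |P_1 - P_0| = |tanh((Delta_1 - Delta_0) X / 2)|."""
    d0, d1 = deltas(N0, N1)
    k = rng.binomial(int(M), (1 + d0) / 2, size=trials)   # number of 'spoof' outcomes
    X = 2 * k - int(M)
    return np.mean(np.abs(np.tanh((d1 - d0) * X / 2)))

def mean_certainty_approx(N0, N1, M):
    """Closed-form approximation, Eq. (25)."""
    d0, d1 = deltas(N0, N1)
    return abs(np.tanh(M * d0 * (d0 - d1) / 2))
```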
2308.14869
PROSO Toolbox: a unified protein-constrained genome-scale modelling framework for strain designing and optimization
The genome-scale metabolic model with protein constraint (PC-model) has been increasingly popular for microbial metabolic simulations. We present PROSO Toolbox, a unified and simple-to-use PC-model toolbox that takes any high-quality genome-scale metabolic reconstruction as the input. The toolbox can construct a PC-model automatically, apply various algorithms for computational strain design and simulation, and help unveil metabolism from gene expression data through a state-of-the-art OVERLAY workflow. It also has detailed tutorials and documentation for maximum accessibility to researchers from diverse backgrounds. PROSO Toolbox, tutorials, and documentation are freely available online: https://github.com/QCSB/PROSO-Toolbox.
Haoyang Yao, Laurence Yang
2023-08-28T19:44:55Z
http://arxiv.org/abs/2308.14869v1
# PROSO Toolbox: a unified protein-constrained genome-scale modelling framework for strain designing and optimization ###### Abstract The genome-scale metabolic model with protein constraint (PC-model) has been increasingly popular for microbial metabolic simulations. We present PROSO Toolbox, a unified and simple-to-use PC-model toolbox that takes any high-quality genome-scale metabolic reconstruction as the input. The toolbox can construct a PC-model automatically, apply various algorithms for computational strain design and simulation, and help unveil metabolism from gene expression data through a state-of-the-art OVERLAY workflow. It also has detailed tutorials and documentation for maximum accessibility to researchers from diverse backgrounds. PROSO Toolbox, tutorials, and documentation are freely available online: [https://github.com/QCSB/PROSO-Toolbox](https://github.com/QCSB/PROSO-Toolbox). **Keywords:** genome-scale modelling, systems biology, synthetic biology ## Introduction For over two decades, genome-scale modelling (GEM) has been studied extensively as the mathematical twin of living microorganisms and cells, and it has drastically deepened our understanding of microbial decision-making. A mid-term goal of the GEM community is to achieve GEM-integrated synthetic biology through the "Design-Build-Test-Learn" (DBTL) cycle [5]. The genome-scale metabolic model (M-model), the most basic GEM framework, is a constraint-based mathematical network in which each metabolite's mass balance constitutes a linear constraint and each metabolic reaction constitutes a variable. COnstraint-Based Reconstruction and Analysis (COBRA) [7] has therefore been rapidly developing based on the M-model framework, thanks to its computational simplicity. Specifically, this simple constraint-based configuration is valid due to the generally fast time constants of small-molecule metabolites in the M-model. Yet the same does not apply to macromolecules including mRNA, proteins, and enzymes, all of which have slower concentration dynamics and thus have not typically reached steady state at the same time-scales as small-molecule metabolites. A specialized GEM of metabolism and macromolecular expression (ME-model) has to be developed to encompass these missing aspects, at the cost of fewer compatible algorithms and several orders of magnitude higher computational complexity [10]. This is especially problematic when applying GEM to interpret experiments where macromolecular mechanisms are critical for the phenomena studied. The protein-constrained metabolic model (PC-model) is a compromise between the M-model and ME-model, offering static and less detailed macromolecular insight without adding heavy computational burdens, and thus is gaining popularity in the community. Many GEM studies have adopted various PC-model formulations, and the PC-model has proven more effective than the M-model [19, 14]. However, due to a lack of a unified framework, PC-model studies have limited reusability for new projects, which causes a general shortage of available algorithms for the PC-model. In this study, we present a unified GEM framework with PROtein constraints for Strain designing and Optimization (PROSO Toolbox). This toolbox features functions from our previous study, specifically automated protein constraint implementations and the state-of-the-art context-specific modelling algorithm OVERLAY [18]. 
We have also adapted other algorithms from the COBRA community to fulfill the needs of a computational strain design, making it capable of completing the synthetic biology DBTL cycle. ## Features The PROSO Toolbox consists of many smaller algorithms, such as PC-OptKnock, Minimization Of Proteomic Adjustments (MOPA), PC-dynamicFBA, and OVERLAY (Fig. 1). The workflow takes an existing M-model as the input to formulate a PC-model, which is a metabolic model with protein constraints. This formulation of the PC-model will be a universal input for following strain design algorithms and OVERLAY, the previous of which in turn helps to propose and simulate strain designs. The _in-silico_ design can be built and tested in wet lab experiments, where multi-omic data such as transcriptomic, proteomic, and metabolomic measurements are taken. The pipeline of OVERLAY uses the PC-model and omic data as inputs to generate deep insights into cell metabolism, which in turn can help to replace modelling assumptions with knowledge as well as refine the strain design [18]. Some functions in PROSO Toolbox require Gurobi Optimizer version 9.3 or above (offering free academic license) to solve non-convex bilinear optimizations [6]. ### Protein constraints implementation The PROSO Toolbox features an automated protein constraint implementation as well as an optional and semi-automated curation step, most of which has been explained in our past publication [18]. Protein abundance is first added to the M-model as a variable. The toolbox then uses the gene-protein-reaction information in the metabolic reconstruction to add protein complex variables and couple them with respective metabolic reactions. The user may also choose to curate the protein complex subunit stoichiometry manually. The enzymatic rate constant can be tuned using surface area-based estimation, which results in a PC-model. Figure 1: This is a conceptual schematic of the workflow and outputs offered by the PROSO Toolbox, excluding the wet lab experiment. (**a**) M-model is used to construct a PC-model, of which PC-OptKnock, MOPA, and PC-dynamicFBA are applied for an _in-silico_ strain design. The design can be built and tested in the wet lab with -omic measurements. OVERLAY is then used to learn the underlying metabolism and update the design. This completes the DBTL cycle. (**b**) PC-OptKnock is used to design mutant strains with growth-coupled succinate productions. Production envelopes (succinate production versus biomass) for the wild-type and mutant strains are shown. (**c**) The algorithm of MOPA simulates the immediate effect (MOPA point), short-term effect (migrating routes), and long-term effect (mutant strain OP) of mutations based on several assumptions. The relationship between these different states is plotted on the growth rate, succinate production rate, and proteome distance (proteome PC 1). The production envelopes (PE) for the wild-type and mutant strains are plotted in dotted lines. (**d**) Based on the results from previous steps, the consumption and accumulation of biomass and chemicals over multiple cell states can be simulated using PC-dynamicFBA. PC-dynamicFBA introduces ribosomal PC-FBA, which allows only limited proteome re-allocation over a certain time period according to a ribosomal budget. (**e**) OVERLAY is a multi-step pipeline that takes omic data as input and predicts cellular metabolism. The results from OVERLAY can help to refine the strain design for the next iteration of experiments. 
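The essence of the protein constraint described above can be written down as a small linear programme. The sketch below is not PROSO Toolbox code and uses none of its data; it is a deliberately tiny, unitless Python illustration (all numbers are made up) of the two ingredients a PC-model adds to an M-model: a coupling constraint bounding each catalysed flux by an enzyme variable, and a shared proteome budget over the enzyme variables.

```python
import numpy as np
from scipy.optimize import linprog

# Toy PC-model. Variables: [v_uptake, v_catalysed, v_biomass, e_enzyme]; values are unitless.
kcat, mw, budget = 100.0, 1.0, 0.05

c = np.array([0.0, 0.0, -1.0, 0.0])            # maximize the biomass flux

A_eq = np.array([[1.0, -1.0, 0.0, 0.0],        # mass balance of metabolite A
                 [0.0, 1.0, -1.0, 0.0]])       # mass balance of metabolite B
b_eq = np.zeros(2)

A_ub = np.array([[0.0, 1.0, 0.0, -kcat],       # enzyme coupling: v_catalysed <= kcat * e_enzyme
                 [0.0, 0.0, 0.0, mw]])         # proteome budget: mw * e_enzyme <= budget
b_ub = np.array([0.0, budget])

bounds = [(0.0, 10.0), (0.0, None), (0.0, None), (0.0, None)]
sol = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(sol.x)   # biomass flux is 5 (= kcat * budget / mw), although the uptake bound alone would allow 10
```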
### PC-OptKnock The algorithm of PC-OptKnock is an adaptation of the original OptKnock onto the PC-model platform. In short, OptKnock is a bilevel optimization problem for designing a microbial strain which, through repressing a small number of metabolic reactions, achieves a non-zero target chemical production rate when the microbe is growing optimally, also known as growth-coupling [3]. PC-OptKnock can suggest growth-coupling strategies by repressing not reactions but proteins, which is more practical for wet lab scientists to implement and examine. Setting a larger maximum knockout number (\(K\)) is likely to achieve a higher chemical production rate at a significantly higher computational cost. Thus, we recommend the more computationally efficient method of conducting an iterative local search (i.e., \(K=1+1+1\)) rather than one larger global search (i.e., \(K=3\)) [11]. The output from PC-OptKnock is comparable with OptORF [9], another growth-coupling strain designer that also accounts for multi-function enzymes and isozymes. Because PC-OptKnock is based on the PC-model framework, it has higher prediction accuracy and more sensible output, at a higher computational cost, than the M-model-based OptORF. ### MOPA The function of MOPA, inspired by the minimization of metabolic adjustment [17], finds the most likely cell state right after a mutation based on proteomic proximity. Because proteomic adjustment is orders of magnitude slower than metabolic or metabolomic adjustment, we believe MOPA rests on a sounder assumption, one made possible by our PC-model framework. The toolbox also features a subsequent function to compute the optimal proteomic adjustment path toward the new optimal state (Fig. 1c, migrating route), whose adjustment rate can be tuned. ### PC-dynamicFBA We design PC-dynamicFBA to integrate the productivity of the designated strain over time while tracking its proteomic profile. At each time step, the simulated cell needs to abide by the substrate availability and have a protein state proximal to the protein state of the previous time step. The maximum allowed rate of protein re-allocation in PC-dynamicFBA, or ribosome budget, determines the rate at which the cell adapts to a new condition, such as the depletion of a preferred substrate (Fig. 1d). This also makes PC-dynamicFBA an enhanced framework for modelling diauxic growth compared to the original dynamic FBA [12]. ### OVERLAY: context-specific modelling The pipeline of OVERLAY has been featured in our previous publication [18]. It is a streamlined workflow of two-step quadratic optimization (QP), debottlenecking, and protein-constrained flux variability analysis (PC-FVA) algorithms that unveils the metabolic insight behind gene expression data. Specifically, we use the first QP to fine-tune system-level enzymatic kinetics, and the second QP to implement data-specific protein constraints in the PC-model. The subsequent two steps then explore the plausible range of metabolic operations for the cell subject to the data-specific protein constraints and return the results to the user graphically. Fig. 1e is a demonstration of the PC-FVA output, showing the predicted flux ranges of each metabolic reaction (bars) and its respective expression levels (lines). The nature of OVERLAY makes it a valuable tool for scientists to learn from the previous iteration of strain design and experiment. 
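To give a feel for how PC-dynamicFBA advances a simulation in time, the following Python sketch outlines the outer loop described above. It is not PROSO Toolbox code; `solve_step` is a hypothetical placeholder standing in for a single protein-constrained FBA solve, and all names, arguments, and units here are our own assumptions.

```python
def pc_dynamic_fba(solve_step, s0, x0, e0, dt, n_steps, ribosome_budget):
    """Schematic outer loop of a PC-dynamicFBA-style simulation (not PROSO Toolbox code).
    `solve_step(substrate, proteins_prev, max_shift)` is an assumed placeholder for one
    protein-constrained FBA solve that keeps the new protein state within `max_shift`
    of the previous one; it must return (growth_rate, uptake_rate, new_protein_state)."""
    s, x, e, trajectory = s0, x0, e0, []
    for _ in range(n_steps):
        mu, q_s, e = solve_step(s, e, ribosome_budget * dt)  # proteome shift limited per step
        s = max(s - q_s * x * dt, 0.0)                       # substrate mass balance
        x = x * (1.0 + mu * dt)                              # biomass accumulation (forward Euler)
        trajectory.append((s, x, e))
    return trajectory
```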
### Other utilities Apart from the functions listed above, the PROSO Toolbox has other utility functions for easy debugging, understanding, and further development. For example, the function _tryCloseRzn.m_ helps the user quickly set up exchange fluxes of an M-model by solving a mixed integer optimization; the function _gb2fa.m_ parses an NCBI GenBank file into a proteomic FASTA file that is compatible with the toolbox; the function _minimalGenome.m_ is a stand-alone algorithm to find the minimal number of active proteins that sustains growth; and _bicycleUtility_ allows the user to download flat files from the BioCyc database [8] and map items to the PC-model in a semi-automatic fashion. We believe these utility functions will serve users and make access and contribution easier. ### Comparison with other toolboxes The method of constraining metabolic networks with proteomic or enzymatic allocation has been exploited by many other studies. The most well-known workflows are MOMENT [1] and GECKO [16], both featuring extensive details on enzyme kinetics. GECKO 2.0 [4] is the most detailed enzymatic framework to date and is claimed to be comparable to the ME-model. Consequently, these frameworks require more data inputs and potentially complex procedures (possibly requiring more domain expertise or computational resources) to estimate additional model parameters. On the other hand, simplified versions of the PC-model (or equivalently, ec-model) have been implemented in other studies, i.e., sMOMENT [2], ECM [15], and ECMpy [13]. These packages are easy to use, usually without adding enzymes as variables, yet their condensed form makes them less versatile for further improvements and for integrating omic data. In comparison, the PROSO Toolbox is automated and accessible without sacrificing its versatility in incorporating details and omic data. We allow users to automatically construct a draft PC-model with no additional input on enzyme or protein information, which has proven to enhance the prediction compared to the base M-model. The user can also choose to supply data on protein sequence, protein complex stoichiometry, effective enzymatic turnover rate, and proteomic weight fraction to further optimize the performance of the PC-model. We have adapted many algorithms for the M-model and FBA into our toolbox, demonstrating the versatility of our framework for further development. More importantly, the framework is compatible with OVERLAY to algorithmically interpret transcriptomic or proteomic data through the PC-model, which provides a distinctive functionality in our toolbox. We believe the PROSO Toolbox has closed the DBTL cycle, and thus it is especially suitable for computational strain design and optimization for synthetic biologists. ### Conclusion In this study, we introduced the PROSO Toolbox, a PC-model framework that unifies automatic protein constraint implementation, strain designing and simulating functions, and OVERLAY as the algorithmic expression data interpreter. As a PC-model framework, it is uniquely capable of producing computational strain designs for wet lab implementation, as well as taking the transcriptomic data measured from the experiment as input for further optimization. We believe the PROSO Toolbox will strongly bridge the GEM community with wet lab experiments and help to achieve GEM-integrated synthetic biology. ## Competing interests No competing interest is declared. ## Author contributions statement H.Y. and L.Y. conceptualized the project. H.Y. 
implemented and tested the algorithms. H.Y. wrote the documentation. H.Y. drafted the manuscript. H.Y. and L.Y. revised the manuscript. L.Y. directed the research. ## Acknowledgments The first author thanks Ziying Wang for her assistance in testing and proofreading. This work was funded by the Government of Canada through Genome Canada and Ontario Genomics (OGI-207), the Government of Ontario through an Ontario Research Fund (ORF), and Queen's University.
2305.03331
Generic and Robust Root Cause Localization for Multi-Dimensional Data in Online Service Systems
Localizing root causes for multi-dimensional data is critical to ensure online service systems' reliability. When a fault occurs, only the measure values within specific attribute combinations are abnormal. Such attribute combinations are substantial clues to the underlying root causes and thus are called root causes of multidimensional data. This paper proposes a generic and robust root cause localization approach for multi-dimensional data, PSqueeze. We propose a generic property of root cause for multi-dimensional data, generalized ripple effect (GRE). Based on it, we propose a novel probabilistic cluster method and a robust heuristic search method. Moreover, we identify the importance of determining external root causes and propose an effective method for the first time in literature. Our experiments on two real-world datasets with 5400 faults show that the F1-score of PSqueeze outperforms baselines by 32.89%, while the localization time is around 10 seconds across all cases. The F1-score in determining external root causes of PSqueeze achieves 0.90. Furthermore, case studies in several production systems demonstrate that PSqueeze is helpful to fault diagnosis in the real world.
Zeyan Li, Junjie Chen, Yihao Chen, Chengyang Luo, Yiwei Zhao, Yongqian Sun, Kaixin Sui, Xiping Wang, Dapeng Liu, Xing Jin, Qi Wang, Dan Pei
2023-05-05T07:22:30Z
http://arxiv.org/abs/2305.03331v1
# Generic and Robust Root Cause Localization for Multi-Dimensional Data in Online Service Systems ###### Abstract Localizing root causes for multi-dimensional data is critical to ensure online service systems' reliability. When a fault occurs, only the measure values within specific attribute combinations (e.g., Province=Beijing) are abnormal. Such attribute combinations are substantial clues to the underlying root causes and thus are called root causes of multi-dimensional data. This paper proposes a generic and robust root cause localization approach for multi-dimensional data, PSqueeze. We propose a generic property of root cause for multi-dimensional data, generalized ripple effect (GRE). Based on it, we propose a novel probabilistic cluster method and a robust heuristic search method. Moreover, we identify the importance of determining external root causes and propose an effective method for the first time in literature. Our experiments on two real-world datasets with 5400 faults show that the F1-score of PSqueeze outperforms baselines by 32.89%, while the localization time is around 10 seconds across all cases. The F1-score in determining external root causes of PSqueeze achieves 0.90. Furthermore, case studies in several production systems demonstrate that PSqueeze is helpful to fault diagnosis in the real world. keywords: Root cause localization, online service system, ripple effect, multi-dimensional data ## 1 Introduction Large online service systems (_e.g._, online shopping platforms) serve millions of users and require high reliability to ensure user experience. Faults in large online service systems could cause enormous economic loss and damage user satisfaction [1]. For example, the loss of one-hour down-time for Amazon.com on Prime Day in 2018 (its biggest sale event of the year) is up to $100 million [2]. Therefore, diagnosing faults rapidly is in urgent demand. To ensure quality of software service, operators usually closely monitor some measures (_e.g._, total dollar amount), which reflect the system status [3; 4]. When a fault occurs, the monitoring system can detect the abnormal measure values and raise alerts to operators. A measure record is associated with many attributes, and when a fault occurs, only the measure values of specific attribute combinations are abnormal [3; 4; 5]. For example, when the network service provided by _CMobile_ in _Beijing_ province fails, only the dollar amount of (Province=Beijing, ISP=CMobile) would decrease dramatically. Such _attribute combinations_ can effectively indicate the fault location and serve as substantial clues to the underlying root causes [3; 5]. Thus, we call the set of such attribute combinations the _root cause_ of the multi-dimensional data [4; 6; 3; 7]. Therefore, following existing work [8; 3; 5; 9; 4; 6; 7], in this paper, we focus on _localizing root causes of multi-dimensional data_ to help operators diagnose faults rapidly. However, it is challenging due to the huge search space. On the one hand, there are many attributes (_e.g._, dozens) and attribute values (_e.g._, thousands) in large online service systems, leading to a combinatorial explosion. On the other hand, faults must be mitigated rapidly to reduce the impact on user experience, and thus the localization must be highly efficient. 
Existing works [10; 11; 8; 4; 7; 5; 3; 9] apply various techniques to overcome the huge search space challenge, but they are either not generic or robust enough due to some limitations, or not efficient enough (see later in Table 11). For example, MID [5], Dice [8], and ImpAPTr [9] are only applicable to specific types of measures. Apriori [7; 3] and R-Adtributor [11] rely heavily on parameter fine-tuning. Notably, none of the previous approaches checks _external root causes_, _i.e._, root causes containing some unrecorded or unused attributes. The localization results are always incorrect when there are external root causes, which can mislead the direction of fault diagnosis and waste time [12]. This paper proposes _PSqueeze_, a generic and robust root cause localization approach for multi-dimensional data. Rather than impractical root cause assumptions or properties of specific kinds of measures, the search strategies of _PSqueeze_ are based on a more generic property of root causes of multi-dimensional data, _generalized ripple effect_ (GRE). GRE holds for different measures (see Section 3) and holds in real-world faults (see Section 6), enabling the genericness of our search strategies. Based on GRE, we propose a "bottom-up&top-down" method to achieve high efficiency without much loss of genericness and robustness. Specifically speaking, in the bottom-up stage, we first group attribute combinations into different clusters, each of which contains only those attribute combinations affected by the same root cause, with a robust probabilistic clustering method based on GRE. In this way, _PSqueeze_ first breaks down the problem into simpler sub-problems (_i.e._, single root causes) and reduces the search space. Then in the top-down stage, we propose a score function, generalized potential score (GPS), to evaluate how likely a set of attribute combinations is the root cause, and search from each cluster the attribute combinations maximizing it with an efficient heuristic search strategy. Finally, after the search, _PSqueeze_ determines external root causes based on GPS. To evaluate _PSqueeze_, we conduct extensive experimental studies based on two real-world datasets from two companies. Since the real-world faults are not enough for evaluation, we propose a fault simulation method and obtain 5400 simulated faults. The results show that _PSqueeze_ outperforms all baselines by 32.89% in different situations while keeping high efficiency (costing about 10s consistently for each fault). We also inject 73 faults on an open-source benchmark system to prove the effectiveness of _PSqueeze_ in real-world scenarios. For determining external root causes, the F1-score of _PSqueeze_ achieves 0.90 on average. We also present several real-world success stories to demonstrate the efficacy of _PSqueeze_ in real-world systems. The major contributions are summarized as follows: * We propose a novel property of root causes of multi-dimensional data, which is proven to hold in different situations and in real-world faults. * We identify the importance of determining external root causes and propose the first effective method for it. * We propose a novel "bottom-up&top-down" localization method, _PSqueeze_, achieving high efficiency without much loss of genericness and robustness. * We evaluate the effectiveness and efficiency of _PSqueeze_ in different situations based on 5400 simulated faults and 73 injected faults. We make our dataset and implementation public to help further studies in the field1. 
Footnote 1: [https://github.com/NetManAIOps/PSqueeze](https://github.com/NetManAIOps/PSqueeze) This paper extends our previous conference paper, Squeeze [6], in four aspects. * New methods. First, this paper proposes the first external-root-cause-determining method in the field (Section 4.4). Second, to reduce the influence of noise, we propose a novel _probabilistic_ clustering method (Section 4.2). Hence we name our new method _PSqueeze_ (probabilistic Squeeze). * New experiment settings. First, we propose a more reasonable fault simulation strategy for evaluation (Section 5.1.1). Second, we implement and compare two more recent related works, MID [5] and ImpAPTr [9]. Third, we introduce two new datasets based on fault injection on an open-source benchmark system. * New experiment results based on the new experiment settings (Section 5) and new real-world success stories (Section 6). The results show _PSqueeze_ is effective and efficient and outperforms the previous approaches, including Squeeze. * Enhancement to presentation. First, we clarify the definition of basic concepts formally (Section 2). Second, we present more details about our methodology, such as the proof of GRE for derived measures (Section 3), and the reasons for our methodology's design choices (Section 4). Notably, for a better understanding of GRE, we present a much more straightforward proof in Section 3.2. ## 2 Background In this section, we first describe our problem intuitively. Then we introduce some necessary concepts, notations, and definitions. Finally, we define our problem formally. ### Root Cause Localization for Multi-Dimensional Data As introduced in Section 1, multi-dimensional data are essentially a group of structured logs generated by an online service system. Specifically speaking, by grouping the logs (_e.g._, Table 1) by some _attributes_ (_i.e._, Timestamp, Province and ISP) and aggregating the _measure_ values (_i.e._, Dollar Amount), we transform the original logs into multi-dimensional data (_e.g._, Table 2) [3]. \begin{table} \begin{tabular}{c c c c c} \hline \hline Order ID & Timestamp & Dollar Amount & Province & ISP \\ \hline A001 & 2020.07.15 10:00:01 & \$16 & Beijing & China Mobile \\ A002 & 2020.07.15 10:00:05 & \$21 & Beijing & China Unicom \\ \hline \hline \end{tabular} \end{table} Table 1: Example structured logs for an online shopping platform. To ensure service quality, operators closely monitor the overall measure values (_e.g._, total dollar amount). When a measure value becomes abnormal (_e.g._, in Table 2, the total dollar amount decreases from 550.8 to 518), a fault occurs in the online service system. A fault usually causes only the measure values under specific attribute combinations to become abnormal in practice [5]. For example, in Table 2, when a fault happens in the servers in Beijing, only the measure values of (Province=Beijing) (_i.e._, the first two rows) are abnormal. Such attribute combinations indicate the scope of the fault, and thus are substantial clues to the underlying root causes. We call such attribute combinations _root-cause attribute combinations_ and the set of root-cause attribute combinations the _root causes for the multi-dimensional data_ [4; 3; 6]. For convenience, when there is no ambiguity, "root causes" in this paper refer to root causes for multi-dimensional data. Localizing root causes of multi-dimensional data enables rapid fault diagnosis by directing the investigation. 
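The log-to-multi-dimensional-data transformation described above is a plain group-and-aggregate. The short sketch below (our own illustration; the values are made up, in the style of Table 1, and are not from the studied datasets) shows it with pandas.

```python
import pandas as pd

# Toy structured logs in the style of Table 1 (values are illustrative only).
logs = pd.DataFrame({
    "Timestamp": ["2020.07.15 10:00"] * 4,
    "Province": ["Beijing", "Beijing", "Shanghai", "Shanghai"],
    "ISP": ["China Mobile", "China Unicom", "China Mobile", "China Unicom"],
    "DollarAmount": [5.0, 10.0, 56.0, 462.0],
})

# Grouping by the attributes and aggregating the measure yields the multi-dimensional data.
multi_dim = (logs.groupby(["Timestamp", "Province", "ISP"], as_index=False)
                 ["DollarAmount"].sum())
print(multi_dim)
```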
The exact root causes (_e.g._, {(Province=Beijing)} in Table 3) may contain uncollected attributes (_e.g._, Province in Table 3). We call such root causes _external root causes_, which are not rare in practice [12]. On the one hand, many attributes are seldom used in fault diagnosis because they contain many null values, are hard to understand, or are not informative. On the other hand, since the search space grows exponentially with the number of attributes, operators have to choose the most useful ones for root cause localization. When external root causes exist, the localization results would be wrong and misleading. However, all existing approaches do not check external root causes to our best knowledge. ### Necessary Concepts and Notations We denote the set of all _attributes_ of the studied multi-dimensional data \(D\) as \(A\)=\(\{a_{1},a_{2},...,a_{n}\}\), where \(a_{i}\) is the \(i\)-th attribute, and \(n\) is the total number of attributes. Each attribute has a finite number of discrete feasible values, which are called _attribute values_. We denote the set of attribute values of \(a_{i}\) as \(V_{i}\)=\(\{v_{i}^{(1)}\),\(v_{i}^{(2)}\),...,\(v_{i}^{(m_{i})}\}\), where \(v_{i}^{(j)}\) is the \(j\)-th attribute value of \(a_{i}\), and \(m_{i}\) is the number of attribute values of \(a_{i}\). An attribute \(a_{i}\) and one of its attribute values \(v_{i}^{(j)}\) construct a _tuple_, \(t=(a_{i},v_{i}^{(j)})\). For each attribute \(a_{i}\), we denote the set of its tuples as \(T_{i}=\{a_{i}\}\times V_{i}\), where "\(\times\)" refers to Cartesian product. We denote the set of all tuples as \(T=\cup_{i=1}^{n}T_{i}\). Then an _attribute combination_ is a subset of \(T\) that contains at most one tuple from each \(T_{i}\). Therefore _the set of all attribute combinations_ can be denoted as \(E\)=\(\{e\in\mathcal{P}(T)\mid\forall T_{i},|e\cap T_{i}|\)\(\leq\)1}, where \(\mathcal{P}(T)\) refers to the power set of \(T\), and \(|\cdot|\) refers to the cardinality of a set. In practice, a root cause of a fault can contain multiple root-cause attribute combinations, and a fault can have multiple root causes either. Hence _the set of all root cause candidates_ is \(\mathcal{P}(E)\) rather than \(E\). A _leaf attribute combination_\(e\) (a.k.a. _leaf_ for simplicity) is an attribute combination that contains tuples of every attribute, _i.e._, \(\forall T_{i},|e\cap T_{i}|\)=1. An attribute combination \(e_{1}\) is _descended_ from \(e_{2}\) when \(e_{2}\)\(\subset\)\(e_{1}\). For example, (Province=Shanghai/ISP=China Unicom) is descended from (Province=Shanghai). The insight is that if \(e_{1}\) is descended from \(e_{2}\), then the slice of data represented by \(e_{1}\) is a subset of that of \(e_{2}\). We denote the set of all leaf attribute combinations descended from \(e\) as \(LE(e)\)=\(\{e^{\prime}\in E|e\subset e^{\prime}\ \wedge\ \forall T_{i},|e^{\prime}\cap T_{i}|\)=1}. A _cuboid_ is a set of attribute combinations enumerating all attribute values for the involved attributes, as shown in Fig. 1. Given a set of attributes \(A^{\prime}\subset A\), the corresponding cuboid is \(Cuboid_{A^{\prime}}=\{e\in E|\forall a_{i}\in A^{\prime},|e\cap T_{i}|\)=1\(\wedge|e|\)=\(|A^{\prime}|\}\). We call \(|A^{\prime}|\) the _layer_ of \(Cuboid_{A^{\prime}}\) (see Fig. 1). _Fundamental measures_ are those measures directly aggregated from raw logs and are additive [10]. 
For example, in Table 2, the total dollar amount of (Province=Beijing) is the sum of those of (Province=Beijing\(\times\)ISP=China Mobile) and (Province=Beijing\(\times\)ISP=China Unicom). _Derived measures_ are derived from fundamental measures [10] and are typically non-additive [10]. For example, the overall average dollar amount is not the sum over those of all ISPs. The overall average dollar amount can be either greater than or less than that of (Province=Beijing). Some previous approaches [8; 4; 5] are not applicable on derived measures due to these characteristics. Figure 1: A graph of cuboids (rounded boxes) with 3 attributes \begin{table} \begin{tabular}{c c c c} \hline \hline \(\mathrm{ISP}\) & real value & forecast value \\ \hline **China Mobile** & **56** & **62.8** \\ **China Unicom** & **462** & **488** \\ \hline \(\mathrm{Total}\) & 518 & 550.8 \\ \hline \hline \end{tabular} \end{table} Table 3: External root cause example. The _real value_ (denoted as \(v(e)\)) of an attribute combination \(e\) is the measure value that is actually observed based on the raw transaction logs, and the _forecast value_ (\(f(e)\)) is its expected normal value. We calculate the forecast value by a time-series forecasting algorithm (see Section 4.1). Without loss of genericness, we assume both \(v\) and \(f\) are non-negative since most common measures are non-negative. Note that for a derived measure \(M=h(M_{1},M_{2},...,M_{l})\), \(v_{M}(e)=h(v_{M_{1}}(e),v_{M_{2}}(e),...,v_{M_{l}}(e))\). Furthermore, for convenience, we extend the notations, \(v\) and \(f\), to sets of attribute combinations: supposing that \(S\) is a set of attribute combinations, for fundamental measures, \(v(S)\)=\(\sum_{e^{\prime}\in\bigcup_{e\leq E}{\it{Le}(e)}}v(e^{\prime})\) and so does \(f\). For a derived measure \(M=h(M_{1},M_{2},...,M_{l})\), \(v_{M}(S)=h(v_{M_{1}}(S),v_{M_{2}}(S),...,v_{M_{l}}(S))\) and so does \(f\). ### Problem Definition The input of our problem is a snapshot of multi-dimensional data \(D\) (with both real and forecasting values) at the fault time. The forecasting values are obtained by an appropriate time-series forecasting algorithm, which is out of the scope of this paper. To better evaluate the robustness of _PSqueeze_, in this paper, we use MA (moving average, one of the simplest algorithms) for all scenarios (see later in Section 5.1). Our goal is to localize the _root cause of multi-dimensional data_, which refers to a set of attribute combinations that is: * _Expressive_. An expressive root cause candidate \(S\) indicates the scope of faults in the multi-dimensional data accurately. In other words, the part of \(D\) specified by \(S\) is abnormal, and the other part is normal. * _Interpretable_. An interpretable root cause candidate \(S\) is as concise as possible to make operators focus on the faulty attributes and attribute values. Sometimes there are multiple root causes at the same time, which indicate different underlying failures and have different influence. In such cases, each root cause is a set of attribute combinations that is expressive and interpretable excluding the influence of the other root causes, and we aim to find the union of these root causes. Following existing works [10; 8; 4; 7; 11; 5; 3; 9], causal inference is also out of scope. 
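As a small illustration of how \(v\) and \(f\) extend from leaves to a set of attribute combinations \(S\), the sketch below (our own; all numbers and the per-leaf order counts are illustrative) aggregates a fundamental measure by summation and computes a derived measure from the aggregated fundamental measures:

```python
# Leaf-level real and forecast values of a fundamental measure (total dollar amount),
# keyed by (Province, ISP); all numbers here are illustrative.
leaf_real     = {("Beijing", "China Mobile"): 5,    ("Beijing", "China Unicom"): 10,
                 ("Shanghai", "China Mobile"): 400,  ("Shanghai", "China Unicom"): 103}
leaf_forecast = {("Beijing", "China Mobile"): 10,   ("Beijing", "China Unicom"): 20,
                 ("Shanghai", "China Mobile"): 376.5, ("Shanghai", "China Unicom"): 144.3}

def set_value(leaf_values, leaves_of_S):
    """v(S) or f(S) for a fundamental measure: sum over the leaves descended from S."""
    return sum(leaf_values[leaf] for leaf in leaves_of_S)

# S = {(Province=Beijing)}; its descended leaves are the two Beijing rows.
S_leaves = [("Beijing", "China Mobile"), ("Beijing", "China Unicom")]
v_S, f_S = set_value(leaf_real, S_leaves), set_value(leaf_forecast, S_leaves)

# For a derived measure M3 = M1 / M2 (e.g., average dollar amount), v_M3(S) is computed
# from the aggregated fundamental measures rather than by summing leaf-level ratios.
leaf_orders = {leaf: 1 for leaf in leaf_real}   # hypothetical #orders per leaf
avg_amount_S = set_value(leaf_real, S_leaves) / set_value(leaf_orders, S_leaves)
print(v_S, f_S, avg_amount_S)   # 15, 30, 7.5
```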
## 3 Generalized Ripple Effect ### Background of Ripple Effect _Ripple effect_, first empirically observed by [4] for _fundamental measures_ only, captures the relationship of attribute combinations' abnormal magnitudes caused by the same root cause. The intuition is that all attribute combinations affected by the same root cause will change by the same proportion. We denote the set of attribute combinations affected by a root cause \(S\in\mathcal{P}(E)\) as \(\text{Aff}(S)=\{e\in E\:|\:\exists e_{0}\in S,\:s.t.\:e_{0}\subseteq e\}\). Then ripple effect can be expressed as \[(f(e)-v(e))/f(e)\text{=}(f(S)-v(S))/f(S),\forall e\in\text{Aff}(S) \tag{1}\] For example, in Table 2, the root cause is \(S=\{(\text{Province}=\text{Beijing})\}\) in cuboid \(C_{\text{province}}\). Therefore, if \(e_{1}=(\text{Province}=\text{Beijing}/\text{1SP}=\text{China Unicom})\), then \((f(S)-v(S))/f(S)=((10+20)-(5+10))/(10+20)=0.5\) and \((f(e_{1})-v(e_{1}))/f(e_{1})=(20-10)/20=0.5\). Note that for multiple-root-cause faults, \(S\) only denotes a single root cause rather than the union of multiple root causes. ### Generalizing Ripple Effect for Derived Measures First, we generalize ripple effect to derived measures. We aim to prove that (1) holds for a derived measure when (1) holds for all its underlying fundamental measures. Since most common derived measures are the quotient or product of two fundamental measures (_e.g._, average dollar amount and success rate), without much loss of genericness, we provide the proof for such derived measures only. Compared with our previous conference version, we simplify the equations for clarity. #### 3.2.1 Quotient Consider three measures, \(M_{1},M_{2},M_{3}\), where \(M_{1}\) and \(M_{2}\) are fundamental measures and \(M_{3}\)=\(M_{1}/M_{2}\). Because \(M_{1}\) and \(M_{2}\) are fundamental measures and they follow (1), for both \(M_{i}\) (\(i=1,2\)), \(v_{M_{i}}(e)/f_{M_{i}}(e)=v_{M_{i}}(S)/f_{M_{i}}(S)\). Therefore, \[\begin{split}&\frac{f_{M_{3}}(S)-v_{M_{3}}(S)}{f_{M_{3}}(S)}=( \frac{f_{M_{1}}(S)}{f_{M_{2}}(S)}-\frac{v_{M_{1}}(S)}{v_{M_{2}}(S)})\frac{f_{M_{ 2}}(S)}{f_{M_{1}}(S)}\\ &=1-\frac{v_{M_{1}}(S)}{f_{M_{1}}(S)}\frac{f_{M_{3}}(S)}{v_{M_{2} }(S)}=1-\frac{v_{M_{1}}(e)}{f_{M_{1}}(e)}\frac{f_{M_{2}}(e)}{v_{M_{2}}(e)}\\ &=1-\frac{v_{M_{3}}(e)}{f_{M_{3}}(e)}=\frac{f_{M_{3}}(e)-v_{M_{3}} (e)}{f_{M_{3}}(e)}\end{split} \tag{2}\] #### 3.2.2 Product Similarly, if \(M_{1}\) and \(M_{2}\) are fundamental measures and \(M_{3}\)=\(M_{1}\cdot M_{2}\), then for both \(M_{i}\) (\(i=1,2\)), \(v_{M_{i}}(e)/f_{M_{i}}(e)=v_{M_{i}}(S)/f_{M_{i}}(S)\). Therefore, \[\begin{split}&\frac{f_{M_{3}}(S)-v_{M_{3}}(S)}{f_{M_{3}}(S)}=\frac{f_{ M_{1}}(S)f_{M_{2}}(S)-v_{M_{1}}(S)v_{M_{2}}(S)}{f_{M_{1}}(S)f_{M_{2}}(S)}\\ &=1-\frac{v_{M_{1}}(S)}{f_{M_{1}}(S)}\frac{v_{M_{2}}(S)}{f_{M_{2}} (S)}=1-\frac{v_{M_{1}}(e)}{f_{M_{1}}(e)}\frac{v_{M_{2}}(e)}{f_{M_{2}}(e)}\\ &=1-\frac{v_{M_{3}}(e)}{f_{M_{3}}(e)}=\frac{f_{M_{3}}(e)-v_{M_{3}} (e)}{f_{M_{3}}(e)}\end{split} \tag{3}\] The core idea of our proof is _finite difference_[13], and a similar method can be applied when dealing with other types of derived measures. Though our proof technique can hardly be applied on specific types of derived measures (_e.g._, tail latency), our results can already cover most common measures (_e.g._, the four golden signals from Google SRE [14]), including both fundamental measures and derived measures. 
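To make the quotient case tangible, here is a tiny numeric check with made-up values: both fundamental measures follow the ripple effect (each changes by the same proportion on \(S\) and on \(e\), though the two measures may change by different proportions from each other), and the derived quotient then satisfies Eq. (2) as well.

```python
# Fundamental measures on the whole root cause S and on one e in Aff(S).
# M1 drops by 50% and M2 drops by 20%, each by the same proportion on S and on e.
f_M1_S, v_M1_S = 200.0, 100.0   # e.g., forecast/real successful orders of S
f_M2_S, v_M2_S = 400.0, 320.0   # e.g., forecast/real total orders of S
f_M1_e, v_M1_e = 20.0, 10.0     # the same fault observed on one affected e
f_M2_e, v_M2_e = 40.0, 32.0

def rel_change(f, v):
    """(f - v) / f, the abnormal magnitude used in Eq. (1)."""
    return (f - v) / f

# Derived measure M3 = M1 / M2 (e.g., success rate).
f_M3_S, v_M3_S = f_M1_S / f_M2_S, v_M1_S / v_M2_S
f_M3_e, v_M3_e = f_M1_e / f_M2_e, v_M1_e / v_M2_e

# Both relative changes are 0.375, as Eq. (2) predicts.
print(rel_change(f_M3_S, v_M3_S), rel_change(f_M3_e, v_M3_e))
```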
### Generalizing Ripple Effect for Zero Forecast Values Equation (1) does not work for zero forecast values (_i.e._, \(f(S)=0\)) and thus is not robust enough. To tackle it, we replace \(f\) with \(\frac{f+v}{2}\). The intuition is to use \(\frac{f+v}{2}\) to estimate \(f\). Therefore, the formulation of GRE is \[\frac{f(e)-v(e)}{f(e)+v(e)}=\frac{f(S)-v(S)}{f(S)+v(S)},\forall e\in\text{Aff}(S) \tag{4}\] Equation (4) is consistent with (1): * If \(f(e)=v(e)=0\) or \(f(S)=v(S)=0\), the relationship between \(e\) and \(S\) is meaningless since there is actually no data for \(e\) or \(S\). In other situations, (4) is always meaningful. * If \(f(e)\neq 0\) and \(f(S)\neq 0\), it is obvious that (1) is equivalent to (4). * If \(f(e)=0\neq v(e)\) or \(f(S)=0\neq v(S)\), then (1) does not hold. We extend the idea of ripple effect to more generic cases by modifying the calculation of anomaly magnitudes. ### Deviation Score and Expected Abnormal Value In this paper, we utilize GRE by _deviation scores_ and _expected abnormal values_. According to (4), for any attribute combination \(e\) that is affected by the same root cause \(S\), the value of \(\frac{f(e)-v(e)}{f(e)+v(e)}\) keeps invariant. We define it as _deviation score_ of \(e\) (denoted as \(d(e)\)). According to GRE, \[\forall e\in\text{Aff}(S),d(e)=\big{(}f(S)-v(S)\big{)}/\big{(}f(S)+v(S)\big{)} \tag{5}\] Therefore, given a root cause candidate \(S\) and any attribute combination \(e\in\text{Aff}(S)\), if \(S\) is the correct root cause, then \(d(e)\)=\(\frac{f(S)-v(S)}{f(S)+v(S)}\). As a result, the _expected abnormal value_ of \(e\) should be \[a(e)=f(e)\big{(}1-d(e)\big{)}/\big{(}1+d(e)\big{)} \tag{6}\] If \(a(e)\) differs from \(v(e)\) a lot, then the candidate \(S\) breaks GRE and is not the correct root cause. ## 4 Methodology ### Overview The workflow of _PSqueeze_ is illustrated in Fig. 2, where the dashed box highlights the scope of _PSqueeze_. When a fault happens (often indicated by alerts from the monitoring system), _PSqueeze_ is triggered. _PSqueeze_ takes the corresponding multi-dimensional data at the fault time (\(v\)) and its forecast values (\(f\)) as inputs. Then, _PSqueeze_ reports the root causes to operators, and notifies operators whether there can be external root causes to avoid misleading. _PSqueeze_ contains three stages: 1) bottom-up clustering (Section 4.2), 2) top-down localization for each cluster (Section 4.3), and 3) external root cause determining (Section 4.4). Different from all previous work [10; 11; 8; 4; 7; 5; 3], _PSqueeze_ employs a novel "bottom-up then top-down" searching strategy. In the bottom-up stage, _PSqueeze_ groups leaf attribute combinations into different clusters, each of which contains the leaf attribute combinations affected by the same root cause. The bottom-up clustering enables the further design of our efficient in-cluster localization method by simplifying the problem from multiple-root-cause localization to single-root-cause localization. In the top-down step, _PSqueeze_ uses a heuristic method based on our proposed generalized potential score (GPS) to efficiently search for the root cause in each cluster output by the bottom-up step. At the final stage, _PSqueeze_ determines whether there are external root causes. Notably, this paper extends our previous conference version with respect to methodology in two aspects. First, we employ probabilistic clustering for robustness (Section 4.2). Second, we introduce external root causes checking to avoid misleading (Section 4.4). 
Besides, we fix the issue in (9) and enhance the presentation of the methodology for clarity. ### Bottom-Up Searching through Clustering In this stage, we determine the cluster boundaries with a probabilistic cluster method based on the leaf attribute combinations with large abnormal changes. #### 4.2.1 Forecast Residual-Based Filtering In order to make the following clustering step focus on abnormal leaf attribute combinations (_i.e._, affected by root causes), we need to detect abnormal leaf attribute combinations. Following existing work [15; 16; 17; 4], we use **forecast residuals** (_i.e._, the difference between real and forecast values) to indicate the extent of the changes of attribute combinations and apply a threshold to decide whether the change is abnormal or not. We apply knee-point method on the cumulative distribution function (CDF) of the forecast residuals of leaf attribute combinations for automated threshold selection. It is because given a large number of leaf attribute combinations, the number of abnormal leaf attribute combinations is usually Figure 3: An illustration example of our forecast residual-based filtering Figure 2: The workflow of _PSqueeze_ much less than the normal ones. A knee point refers to a point where the increase of filtered-out leaf attribute combinations is no longer worth the increase of the threshold. In Fig. 3, we present an example CDF of an online service system fault and its knee point (the vertical dashed line). We define a knee point as the point with maximum curvature rather than other ad-hoc definitions for genericness and robustness following existing work [18]. The advantage of the knee-point method is that it is simple, efficient, and completely automated. #### 4.2.2 Calculating the Distribution of Deviation Scores As introduced in Section 3.4, leaf attribute combinations affected by the same root causes have the same deviation scores, based on which we design our clustering method. First, we estimate the distribution of all leaf attribute combinations' deviation scores. Squeeze (our previous conference version [6]) presumes the observed deviation score is correct. However, deviation scores can be significantly affected by noises from natural variation or inaccurate forecasting, especially when the real and forecast values are small. For example, given an attribute combination where \(v\sim Pois(5)\) and \(f=5\), its deviation score is supposed to be 0 since the expectation of its real value is equal to its forecast value. However, its deviation score could be \(\pm 0.222\) or \(\pm 0.182\) (\(v=4,6\) due to noise or natural variation, or \(f=4,6\) due to inaccurate forecast), and thus it can be mistakenly grouped into incorrect clusters. Thus it is required to estimate the distribution of deviation scores in a more robust manner. Since the variation and forecasting errors can hardly be eliminated, we try to explicitly model the noises. Unlike Squeeze, _PSqueeze_ considers what if the deviation scores of an attribute combination are biased and determines the probability that the leaf attribute combination should be grouped into each cluster. As a result, when the deviation score of an attribute combination is largely biased due to noises and thus is grouped into an incorrect cluster by Squeeze, _PSqueeze_ groups it into the correct cluster with a certain probability. In this way, we make _PSqueeze_ more robust to such noises than Squeeze. 
Specifically speaking, we firstly calculate the probability density function (PDF/PMF) for each abnormal attribute combination's deviation score and then average them to obtain the overall distribution. The choice of PDF and PMF depends on our measure: we calculate PDF for continuous measures (_e.g._, average response time) and PMF for discrete measures (_e.g._, the number of orders). To calculate the PDF/PMF, we need to assume the distribution family according to domain knowledge about the nature of the measure. For example, considering the measure of the number of total orders, we can assume that the number of total orders \(v\) follows _Poisson distribution_, _i.e._, \(v\sim Pois(\lambda=v^{\prime})\), because the speed of order arrivals keeps stable in a short duration. Then the real deviation score is supposed to be \(ds=2\frac{f-v^{\prime}}{f+v^{\prime}}\). Therefore, the probability mass function of \(ds\) is \(P(ds=2\frac{f-v-k}{f+v+k})=Pois(v;\lambda=v+k)\) where \(v\) denotes the observed real value. Compared with that Squeeze considers that \(ds\) follows \(P(ds=2\frac{f-v-k}{f+v+k})=\begin{cases}1,&k=0\\ 0&k\neq 0\end{cases}\), it models the probability that the observed deviation score is biased due to noises. In this way, we calculate all the PDF/PMFs of leaf attribute combinations' deviation scores (_e.g._, the light curves in Fig. 4). Then, by averaging all these probability density/mass functions together, we get the overall distribution of deviation scores of all attribute combinations (_e.g._, the bold solid curve in Fig. 4). In this paper, we use _Poisson distribution_ for all fundamental measures, including #orders and #page views. It is because _Poisson distribution_ is suitable for describing the number of event occurrences with a constant mean rate. For derived measures, we do not use probabilistic clustering, given the difficulty in finding appropriate distribution families for derived measures. #### 4.2.3 Determining the Cluster Boundaries ``` 1:procedureDensityCluster(\(PDF\)) 2:\(centers\leftarrow\mathrm{argelmax}(PDF)\) 3:\(boundaries\leftarrow\mathrm{argelmin}(PDF)\) 4:\(clusters\leftarrow[]\) 5:for\(center\) in\(centers\)do 6:\(l\leftarrow\) maximum in\(boundaries\) s.t. \(l<center\) 7:\(r\leftarrow\) minimum in\(boundaries\) s.t. \(r>center\) 8:\(clusters.\mathrm{append}(\{e\in LE(\emptyset)|d(e)\in[l,r]\})\)return clusters ``` **Algorithm 1** Deviation Score Based Probabilistic Clustering Based on the distribution of deviation scores, we determine the number of clusters and the cluster boundaries. Since the deviation scores of the leaf attribute combinations affected by the same root cause should crowd around a small area, the low-density areas separate the clusters. It is a 1-dimensional clustering problem, which is special because there is no saddle point in 1-dimensional spaces, and thus _relative maximums_ of the PDF/PMF represent all high-density areas. Hence applying a high-dimensional clustering method (_e.g._, DBSCAN [19]) would be unnecessarily costive. Instead, we design a simple clustering method intuitively, as shown in Algorithm 1. First, we select the points where deviation scores crowd, _i.e._, relative maximums of the PDF/PMF (_e.g._, the three solid red vertical lines in Fig. 4), as the centroids of clusters. Figure 4: Illustration of _PSqueeze_’s probabilistic clustering. Hence the number of clusters is equal to the number of relative maximums of the PDF/PMF. 
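A simplified sketch of this step (our own illustration rather than the exact implementation) is given below. It assumes a count measure with the Poisson model above, follows the deviation-score formula of this subsection, discretizes the averaged distribution on a grid, uses SciPy's `argrelmax` to pick cluster centroids, and takes the lowest-density point between adjacent centroids as the boundary, in the spirit of Algorithm 1.

```python
import numpy as np
from scipy.signal import argrelmax
from scipy.stats import poisson

def deviation_score_pmf(v, f, grid, k_max=200):
    """Distribution of one leaf's 'true' deviation score, assuming the observed
    count v follows Pois(lambda = true value), as in Section 4.2.2."""
    pmf = np.zeros_like(grid)
    for k in range(-min(int(v), k_max), k_max):
        lam = v + k                       # candidate true value
        if lam <= 0:
            continue
        ds = 2.0 * (f - lam) / (f + lam)  # deviation score implied by that true value
        pmf[np.argmin(np.abs(grid - ds))] += poisson.pmf(v, mu=lam)
    return pmf / max(pmf.sum(), 1e-12)

# Abnormal leaves kept by the residual-based filtering: (real value, forecast value).
abnormal_leaves = [(4, 10), (6, 10), (5, 10), (48, 50), (51, 50), (52, 50)]
grid = np.linspace(-2, 2, 401)
density = np.mean([deviation_score_pmf(v, f, grid) for v, f in abnormal_leaves], axis=0)
density = np.convolve(density, np.ones(5) / 5, mode="same")   # light smoothing

max_idx = argrelmax(density)[0]
centroids = grid[max_idx]                                      # cluster centres
# Boundary between adjacent centroids: the lowest-density point in between.
boundary_idx = [lo + np.argmin(density[lo:hi]) for lo, hi in zip(max_idx[:-1], max_idx[1:])]
boundaries = grid[boundary_idx]
print(centroids, boundaries)
```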
Then, since low-density areas separate the clusters, we select the nearest low-density points, _i.e._, _relative minimums_ of the PDF (_e.g._, the dashed purple vertical lines in Fig. 4), as the boundaries of the clusters. Then, the probability of an attribute combination in a cluster is equal to the probability that its deviation score locates between the boundaries. In other words, a cluster contains the leaf attribute combinations whose PDF/PMF intersects with the area between boundaries, and the intersection area denotes the probability that the attribute combination is in the cluster. ### Top-Down Localization in Each Cluster ``` 1:procedureInClusterLocalization(\(cluster\)) 2:\(root\_causes\leftarrow[]\) 3:for\(cuboid\) in all cuboids from top to bottom do 4:\(AC\leftarrow\) sorted attribute combinations in \(cuboid\) by \(r_{descended}\) in descending order 5:for\(split\) in all valid splits do 6:\(score[split]\leftarrow\) GPS(\(AC[:split]\)) 7:\(root\_cause\gets AC[:\operatorname{argmax}_{split}score]\) 8:\(root\_causes\gets root\_causes+[root\_cause]\) 9:if\(root\_cause\)'s \(score\geq\delta\)then 10: Stop search next layer 11: sort \(root\_causes:=\{S_{i}\}\) by \(GPS(S_{i})*C-I(S_{i})\) in descending order 12:return\(root\_causes[0]\) ``` **Algorithm 2** Localization in Each Cluster The output of the bottom-up search is a list of clusters, each of which is a set of leaf attribute combinations that are affected by the same root cause. For convenience, we denote a cluster as \(Cluster\). At this stage, we aim to localize root causes for each cluster. The root cause of a cluster is defined as a set of attribute combinations that is expressive when considering \(Cluster\) and the normal leaf attribute combinations only and is interpretable. To overcome the challenges of huge search space, we propose an efficient heuristic search method for the in-cluster root cause localization, which contains three key techniques. 1) a cuboid-wise top-down search strategy to narrow down the search space, 2) a heuristic strategy to search a cuboid efficiently, and 3) a robust objective function to evaluate the expressiveness and interpretability of a root cause candidate. A summary of our method at this stage is presented in Algorithm 2. ``` 1:procedureInClusterLocalization(\(cluster\)) 2:\(root\_causes\leftarrow[]\) 3:for\(cuboid\) in all cuboids from top to bottom do 4:\(AC\leftarrow\) sorted attribute combinations in \(cuboid\) by \(r_{descended}\) in descending order 5:for\(split\) in all valid splits do 6:\(score[split]\leftarrow\) GPS(\(AC[:split]\)) 7:\(root\_cause\gets AC[:\operatorname{argmax}_{split}score]\) 8:\(root\_causes\gets root\_causes+[root\_cause]\) 9:if\(root\_cause\)'s \(score\geq\delta\)then 10: Stop search next layer 11: sort \(root\_causes:=\{S_{i}\}\) by \(GPS(S_{i})*C-I(S_{i})\) in descending order 12:return\(root\_causes[0]\) ``` **Algorithm 3** Localization in each cluster #### 4.3.1 Cuboid-Wise Search Strategy We search for each cluster's root cause in each cuboid layer by layer (line 3). Taking Fig. 1 as an example, we would search cuboid ISP, Province, and User Agent first, then search cuboid ISP&Province, ISP&User Agent, and Province&User Agent, and finally search cuboid ISP & Province&User Agent. On the one hand, the cuboid-wise search strategy is motivated by the assumption that the root cause of a cluster is only a subset of a cuboid. 
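The layer-by-layer cuboid order itself is straightforward to generate; a minimal sketch (the attribute names follow Fig. 1):

```python
from itertools import combinations

attributes = ["ISP", "Province", "User Agent"]

def cuboids_top_down(attrs):
    """Yield cuboids (as attribute subsets) layer by layer: layer 1 first, then 2, ..."""
    for layer in range(1, len(attrs) + 1):
        for subset in combinations(attrs, layer):
            yield subset

for c in cuboids_top_down(attributes):
    print(len(c), c)
# 1 ('ISP',) ... 2 ('ISP', 'Province') ... 3 ('ISP', 'Province', 'User Agent')
```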
The assumption is practical due to the following intuitions: 1) the attribute combinations in the cluster are affected by the same root cause; 2) in practice one root cause rarely requires more than one cuboid to describe it according to our analysis on many real-world production faults. On the other hand, we search shallow cuboids first because root cause candidates in shallower cuboids are more interpretable (see Section 2.3) than those in deeper cuboids, and thus we call it a top-down search method. #### 4.3.2 Heuristic Method to Search a Cuboid If an attribute combination \(e\) is part of the root cause, then according to GRE, all of its descent leaf attribute combinations (_i.e._, \(LE(e)\)) should have similar deviation scores, _i.e._, they should all in the cluster \(Cluster\). We call the ratio of descended leaf attribute combinations in the cluster of \(e\) as the _descended ratio_ of \(e\). It is denoted as \[r_{descended}(e)=\frac{\sum_{e^{\prime}\in LE(e)\cap Cluster}p(e^{\prime} \in Cluster)}{\sum_{e^{\prime}\in LE(e)}p(e^{\prime}\in Cluster)} \tag{7}\] , where \(p(e^{\prime}\in Cluster)\) denotes the probability that \(Cluster\) contains \(e^{\prime}\) (see Section 4.2.3). For example, in Table 2, supposing that the cluster contains the first two rows in the table, and we are searching cuboid Province now, the descended ratio of (\(Province=Beijing\)) is 1 and those of others are 0. We sort the attribute combinations of a cuboid by their descended ratios in descending order (line 4). In this way, the attribute combinations at the front of the list are more likely to be part of the root cause than those at the back. #### 4.3.3 Generalized Potential Score Then, we aim to find the top-\(k\) items in the sorted attribute combination list of a cuboid as the root cause candidate of the cuboid (line 7). For this purpose, we propose _generalized potential score_ (GPS), to evaluate how likely a set of attribute combination (\(S\)) is the root cause in the following aspects: 1) it is expressive, _i.e._, the real and forecast values of its descended leaf attribute combinations (_i.e._, \(LE(S)=\bigcup_{e\in S}LE(e)\)) should be different, and those of other leaf attribute combinations should be close; 2) it follows GRE, _i.e._, the real values of \(LE(S)\) should be close to the corresponding expected abnormal values (see Section 3.4). We do not evaluate interpretability here because in the same cuboid, adding extra attribute combinations reduce expressiveness and interpretability simultaneously. Comparing real and expected abnormal values helps to filter the false expressive candidates caused by inaccurate forecasting and noise and make _PSqueeze_ more robust. We measure the difference between the real and forecast values of \(LE(S)\) by normalized Manhattan distance, _i.e._, \(d_{1}(\mathbf{v}(S),\mathbf{f}(S))=\frac{1}{|LE(S)|}\sum_{e\in LE(S)}|v(e)-f(e)|\), and the difference between the real and expected abnormal values by \(d_{1}(\mathbf{v}(S),\mathbf{a}(S))=\frac{1}{|LE(S)|}\sum_{e\in LE(S)}|v(e)-a(e)|\). Similarly, for other leaf attribute combinations (denoted as \(LE(S^{\prime})\)), the difference between the real and forecast values is \(d_{1}(\mathbf{v}(S^{\prime}),\mathbf{f}(S^{\prime}))=\frac{1}{|LE(S^{\prime})|}\sum_{e\in LE (S^{\prime})}|v(e)-f(e)|\). Considering that we are conducting in-cluster localization and there can be other root causes, \(LE(S^{\prime})\) does not contain the leaf attribute combinations in other clusters. 
Based on these, we define GPS as follows: \[GPS=1-\frac{d_{1}(\mathbf{v}(S),\mathbf{a}(S))+d_{1}(\mathbf{v}(S^{\prime}),\mathbf{f}(S^{ \prime}))}{d_{1}(\mathbf{v}(S),\mathbf{f}(S)))+d_{1}(\mathbf{v}(S^{\prime}),\mathbf{f}(S^{ \prime}))} \tag{8}\] GPS robustly indicates \(S\)'s expressiveness, even if the anomaly magnitude of \(S\) is insignificant. Considering _potential score_[4] and _explanation power_[10; 11], the forecast residuals of \(LE(S^{\prime})\) would accumulate as the size of \(LE(S^{\prime})\) increases, and thus they cannot reflect the expressiveness when the anomaly magnitude of \(S\) is insignificant. For example, in Table 2, the GPS of the first two rows in bold is 0.743, while the potential score is 0.303 and the explanation power is 0.457. #### 4.3.4 Selecting among Candidates from Different Cuboids Finally, we select the root cause with both high expressiveness and interpretability from the candidates found in the cuboids. For this purpose, we quantitively define the interpretability of \(S\) as \(I(S)=\sum_{e\in S}|e|^{2}\). For example, \(I(\{\)(Province=Beijing\(\})\})=1\) and \(I(\{\)(Province=Beijing\(\}\) ISP=CUnicom), (Province=Beijing\(\}\) LSP=CMobile\(\})) = 8. We use a weight \(C\) to trade off expressiveness (measured by GPS) and interpretability (line 11). To calculate \(C\) automatically, we employ an empirical formula, which can achieve good effectiveness: \[\begin{split} g_{cluster}&=\log(num\_cluster+1)/num \_cluster\\ g_{attribute}&=num\_attr/\log(num\_attr+1)\\ g_{coverage}&=-\log(\text{coverage of abnormal leaves})\\ C&=g_{cluster}\times g_{attribute}\times g_{coverage}\end{split} \tag{9}\] The intuition is that if there are fewer clusters or more attributes, or the cluster contains fewer abnormal leaf attribute combinations, then GPS is more important than interpretability. It is refined based on that in our previous conference version [6], and the original version can be negative in some cases and causes errors. Furthermore, for efficiency, if the candidates' GPS scores exceed a given threshold \(\delta\) at a certain layer of cuboids, _PSqueeze_ would stop searching deeper layers (line 9). We set \(\delta=0.9\) by default and discuss its impact in Section 5.5. ### Determine External Root Cause _PSqueeze_ determines external root causes by examining whether the localized root causes are expressive enough, _i.e._, have high GPS scores. When there are external root causes, since it is impossible for algorithms to localize the real root causes, the GPS scores of the localized root causes would be relatively low. Specifically speaking, we check whether the GPS scores of the clusters is less than an automated threshold (denoted by \(\delta_{ExRC}\)). We denote the minimum GPS scores of all per-cluster root cause as \(min\_GPS\). If \(min\_GPS<\delta_{ExRC}\), there is at least one cluster where _PSqueeze_ is not able to find a good enough root cause, which indicates that there probably exist external root causes. Then the operators will be informed that the results of _PSqueeze_ can be misleading due to external root causes. If \(min\_GPS\geq\delta_{ExRC}\), then _PSqueeze_ localizes good enough root causes for all selected clusters, then there are not external root causes, and thus the final result is reliable. The threshold \(\delta_{ExRC}\) is automatically selected by historical data. 
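The following minimal sketch (our own, with illustrative toy values) computes the expected abnormal values of Eq. (6) and the GPS of Eq. (8) for a candidate \(S\), given the real and forecast values of \(LE(S)\) and of the remaining leaves \(LE(S^{\prime})\); the deviation-score convention follows Section 3.4.

```python
import numpy as np

def expected_abnormal_values(f_S_leaves, v_S, f_S):
    """Eq. (5)-(6): expected abnormal values a(e) of the leaves of a candidate S."""
    d = (f_S - v_S) / (f_S + v_S)              # deviation score implied by S
    return f_S_leaves * (1.0 - d) / (1.0 + d)

def gps(v_S_leaves, f_S_leaves, v_other, f_other):
    """Generalized potential score, Eq. (8).

    v_S_leaves, f_S_leaves -- real/forecast values of LE(S)
    v_other, f_other       -- real/forecast values of the remaining leaves LE(S')
    """
    a_S = expected_abnormal_values(f_S_leaves, v_S_leaves.sum(), f_S_leaves.sum())
    d_va    = np.abs(v_S_leaves - a_S).mean()         # d1(v(S), a(S))
    d_vf    = np.abs(v_S_leaves - f_S_leaves).mean()  # d1(v(S), f(S))
    d_other = np.abs(v_other - f_other).mean()        # d1(v(S'), f(S'))
    return 1.0 - (d_va + d_other) / (d_vf + d_other)

# Toy fault: the two leaves of S drop by half while the remaining leaves stay near
# their forecasts, so the candidate follows GRE and its GPS is close to 1.
v_S_leaves, f_S_leaves = np.array([5.0, 10.0]), np.array([10.0, 20.0])
v_other, f_other = np.array([375.0, 145.0]), np.array([376.5, 144.3])
print(round(gps(v_S_leaves, f_S_leaves, v_other, f_other), 3))
```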
The key idea is that for those faults without external root causes, their \(min\_GPS\), which represents the minimum GPS of all per-cluster root causes, should be near 1. It is because a good root cause should be localized for each cluster, and thus they would be grouped into a cluster by the density-based clustering method in Section 4.2. Therefore, given those \(min\_GPS\) (defined in Algorithm 3) of historical faults, we firstly cluster them with Algorithm 1, and then use the lower boundary of the cluster with the largest centroid as \(\delta_{ExRC}\). Note that we **do not** need to know which faults encounter external root causes, _i.e._, the automated threshold selection is unsupervised. If there are not enough historical faults, we use a default value 0.8 for \(\delta_{ExRC}\). ``` 1:proceduredetermine_ExRC(\(rc\_list\)) 2:\(minGPS\leftarrow+\infty\) 3:for\(S\) in \(rc\_list\)do 4:\(minGPS\leftarrow\min(GPS(S),minGPS)\)return\(minGPS\geq\delta_{ExRC}\) ``` **Algorithm 3** Determine External Root Cause ## 5 Evaluation In this section, we conduct extensive experiments to evaluate the localization accuracy and efficiency of _PSqueeze_ based on both simulated and injected faults. ### Experiment Settings #### 5.1.1 Datasets We have two real-world datasets collected from two production systems of two companies in several weeks. One is from an online shopping platform (denoted as \(\mathcal{I}_{1}\)), and the other one is from an Internet company (denoted by \(\mathcal{I}_{2}\)). Although these are real-world data, it is hard to obtain enough real-world anomalies and the corresponding root causes as the ground truth. Therefore, we generate simulated faults according to GRE based on these real-world datasets to evaluate _PSqueeze_ as follows: 1. We select a time point from the time series and add different Gaussian noises to the real values of all leaves. It is used to emulate different forecast residuals. 2. We randomly choose \(n\_element\in\{1,2,3\}\) (a.k.a. \(n\_ele\) for short) cuboids _with replacement_ in layer \(cuboid\_layer\), which is randomly chosen from \(\{1,2,3\}\). 3. We randomly choose \(n\_element\) different attribute combinations from every selected cuboid, which are the root-cause attribute combinations for this simulated fault. Note that the datasets with only different forecast residuals (_i.e._, \(\mathcal{B}_{1},\mathcal{B}_{2},\mathcal{B}_{3},\mathcal{B}_{4}\)) share the same time points but have different root-cause attribute combinations. 4. For each selected root-cause attribute combination, we modify the real values of its descended leaf attribute combination by GRE with a random magnitude. Particularly, for \(\mathcal{D}\), the measure of which is success rate, we firstly modify their success rates according to GRE. Then, we randomly generate the total order numbers and successful order numbers according to the success rates. 5. We add extra Gaussian noises (\(\mathcal{N}(0,5\%)\)) to these descended leaf attribute combinations since GRE would not perfectly hold in practical faults. 6. We drop invalid "faults" when 1. there exists another attribute combination that shares the same (or very similar) set of descended leaf attribute combinations with a selected root-cause attribute combinations; 2. the Gaussian noises added in normal leaf attribute combinations (_i.e._, those leaf attribute combinations that are not descended from any selected root-cause attribute combination) are so large that the overall measure value of them is abnormal. 
Root-cause attribute combinations with different deviation scores simulate independent multiple root causes. Note that the deviation scores of different root-cause attribute combinations can sometimes be the same. In such cases, the root-cause attribute combinations with the same deviation score are considered as one root cause containing multiple root-cause attribute combinations. In all datasets, we add anomalies with random magnitudes, and thus, the anomalies are not guaranteed to be significant.

This paper extends the simulation process in our previous version [6] by adding step 6. The process described in steps 1 to 5 could generate inappropriate faults for evaluation. For example, consider that we randomly choose (A=a1\(\wedge\)B=b1\(\wedge\)C=c1) as the root-cause attribute combination, and b1 is related to c1. In such cases, (A=a1\(\wedge\)B=b1\(\wedge\)C=c1), (A=a1\(\wedge\)B=b1), and (A=a1\(\wedge\)C=c1) have almost the same set of leaf attribute combinations. Therefore, it would be unreasonable to take (A=a1\(\wedge\)B=b1\(\wedge\)C=c1) as the root cause rather than (A=a1\(\wedge\)C=c1) or (A=a1\(\wedge\)B=b1). Nevertheless, because we add Gaussian noise to all leaf attribute combinations, the attribute combinations on which we do not intend to simulate faults can become abnormal as well, as shown in Fig. 5. To tackle this problem, unlike our previous version [6], we remove such invalid faults by step 6.

**Simulated fault datasets.** With the new simulation method, we obtain 6 new simulated fault datasets with different simulation parameters. Table 4 describes some basic statistics of our datasets. For each combination of \(n\_ele\) and \(cuboid\_layer\), we simulated 100 faults in each dataset, and hence there are 5400 faults in total. In Table 4, \(n\) denotes the number of attributes, \(|LE(\emptyset)|\) denotes the number of all leaf attribute combinations, and \(|\mathcal{P}(E)|\) denotes the number of all root cause candidates. These datasets contain three different measures, including both fundamental (denoted as \(F\) in Table 4) and derived measures (\(D\)), and all of them are common golden signals [14]. In practice, we do not count the attribute combinations that never occur in the data, and thus \(|LE(\emptyset)|\) and \(|\mathcal{P}(E)|\) appear lower than the theoretical values. Residual in Table 4 denotes the average forecasting residual (in percent) over all normal leaf attribute combinations. EP (explanation power [10; 11]) denotes the fraction of the total forecast residual of all abnormal leaf attribute combinations over that of both normal and abnormal ones.

**Injected fault datasets.** Furthermore, compared with the previous conference version, this paper also introduces two new datasets, named \(\mathcal{E}\) and \(\mathcal{F}\), based on more realistic fault injection (rather than directly adjusting measure values). More specifically, we deploy Train-Ticket [20], which is one of the largest open-source microservice benchmark systems and is widely used in the literature [21; 22; 20; 23; 24], on a Kubernetes cluster with five servers.
We utilize Istio [25] and ChaosMesh [26] to inject the following types of faults onto the system: delaying or dropping packets or HTTP requests/responses sent to specific pods/APIs/services or \begin{table} \begin{tabular}{r r r r r r r r} \hline \hline \multicolumn{2}{c}{} & \multicolumn{1}{c}{\(n\)} & \multicolumn{1}{c}{\(LE(\emptyset)\)} & \multicolumn{1}{c}{\(|\mathcal{P}(E)|\)} & \multicolumn{1}{c}{**Source**} & \multicolumn{1}{c}{**Measure**} & \multicolumn{1}{c}{**Residual**} & \multicolumn{1}{c}{**EP**} \\ \hline \(\mathcal{A}\) & 5 & 15324 & 27\({}^{5888}\) & \(\mathcal{I}_{1}\) & F: \#orders & 3.92\% & 90.3\% \\ \(\mathcal{B}_{1}\) & 4 & 21600 & 23\({}^{31338}\) & \(\mathcal{I}_{2}\) & F: \#page views & 3.97\% & 74.2\% \\ \(\mathcal{B}_{2}\) & 4 & 21600 & 23\({}^{31338}\) & \(\mathcal{I}_{2}\) & F: \#page views & 7.96\% & 60.6\% \\ \(\mathcal{B}_{3}\) & 4 & 21600 & 23\({}^{31338}\) & \(\mathcal{I}_{2}\) & F: \#page views & 11.9\% & 56.1\% \\ \(\mathcal{B}_{4}\) & 4 & 21600 & 23\({}^{31338}\) & \(\mathcal{I}_{2}\) & F: \#page views & 15.9\% & 53.7\% \\ \(\mathcal{D}\) & 4 & 13806 & 2\({}^{21534}\) & \(\mathcal{I}_{2}\) & D: success rate & 3.99\% & 59.3\% \\ \(\mathcal{E}\) & 9 & 373 & 2\({}^{373}\) & - & D: average latency & 37.3\% & 89.5\% \\ \(\mathcal{F}\) & 9 & 373 & 2\({}^{373}\) & - & D: stall rate & 45.8\% & 86.9\% \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of Datasets Figure 5: The measure value of attribute combinations that are not intended to simulate faults on is significantly abnormal. containing specific parameters. We injected 73 faults in total. Then, we collect detailed information on every HTTP request between the microservices with Jaeger and Istio, including client/server service name, URL, response time, status code, etc. Based on the request details, we collect two types of derived measures to build the two datasets, i.e., average latency (=total latency of all requests / the number of requests) and stall rate (=the total number of stall requests/the number of requests). In both datasets, the attributes and number of distinct attribute values are as follows: client service name (17), pod (64), method (3), URL prefix (71), station name (10), train type (6), start station (9), end station (10). Compared with the simulated faults (i.e., \(\mathcal{A}\), \(\mathcal{B}_{*}\), and \(\mathcal{D}\)), the diversity of root causes is limited with respect to \(n\_elements\) and \(cuboid\_layer\), but the injected faults do not rely on any assumption and are much more representative of real-world system failures. #### 5.1.2 Evaluation Metrics F1-score is used in this paper to evaluate root cause localization for multi-dimensional data. We denote the root cause reported by the algorithm as \(S\) and the ground truth as \(\hat{S}\). Then \(tp\)=\(|S\cap\hat{S}|\) denotes the number of true positive root-cause attribute combinations, \(fp\)=\(|S-\hat{S}|\) denotes the number of false positives, and \(fn\)=\(|S-S|\) denotes the number of false negatives. Then F1-score is defined as: \[F1\text{-}Score\text{=}(2\times tp)/(2\times tp+fp+fn) \tag{10}\] To extensively study the performance of _PSqueeze_ under various situations, following existing work [4], we evaluate F1-scores separately for different root cause settings, _i.e._, different \(n\_elements\) and \(cuboid\_layer\). To evaluate _PSqueeze_ on determining external root causes, we also use the widely-used F1-score. 
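Eq. (10) translates directly into code; below is a minimal sketch of our own that treats the reported and ground-truth root causes as sets of attribute combinations:

```python
def localization_f1(reported, ground_truth):
    """Eq. (10): F1-score between the reported root cause S and the ground truth,
    both given as sets of (hashable) root-cause attribute combinations."""
    tp = len(reported & ground_truth)
    fp = len(reported - ground_truth)
    fn = len(ground_truth - reported)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

truth    = {frozenset({("Province", "Beijing")})}
reported = {frozenset({("Province", "Beijing")}), frozenset({("ISP", "China Mobile")})}
print(localization_f1(reported, truth))   # 2*1 / (2*1 + 1 + 0) = 0.667
```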
We denote the set of faults with external root causes as \(\hat{F}\) and the set of faults that are reported by our algorithm to have external root causes as \(F\). Then ExRC_F1-score (**c**external root **c**ause F1-score) is calculated as follows: \[ExRC\_F1\text{-}Score=2\times\frac{precision\times recall}{precision+recall}\] where \(precision=|F\cap\hat{F}|/|F|\) denotes the probability that a determined external root cause is true and \(recall=|F\cap\hat{F}|/|\hat{F}|\) denotes the fraction of external root cause cases that have been determined. Finally, we also evaluate the time efficiency of _PSqueeze_. In the following experiments, we present the average running time of all cases in the corresponding setting. #### 5.1.3 Baseline Approaches We compare _PSqueeze_ with the following baseline approaches, which are summarized in Table 11: * Squeeze [6] (SQ), our previous conference version. * Adributor [10] (ADT) assumes root causes involve only single attributes and mines all attribute combinations with high _explanation power_ and then sorts them by _surprise_. * R-Adributor [11] (RAD) recursively calls Adributor to localize multi-attribute root causes. * Apriori (APR) is a popular frequent pattern mining algorithm [27]. Ahmed et al. [7] and Lin et al. [3] take association rules of abnormal leaf attribute combinations as root causes, and they use Apriori and _confidence_ to mine association rules. * HotSpot+GRE [4] (HS) uses Monte Carlo tree search (MCTS) to search the set of attribute combinations with the highest _potential scores_. We adapt the original HotSpot for derived measures according to GRE. * MID [5] searches for the attribute combinations that maximize their objective function, which is similar to that in iDice [8], and uses a heuristic based on entropy to speed up the search. As we found that their objective function is limited to their scenario (# issue reports) and performs poorly with general multi-dimensional data, we replace their objective function with our GPS. * ImpAPTr [28; 9] (IAP) search for attribute combinations that maximize _impact factor_ and _diversity factor_ with BFS (breath-first search). Since ImpAPTr only ranks attribute combinations rather than decide which are the root-cause attribute combinations, we take the top-\(n\_ele\) ranked attribute combinations as root-cause attribute combinations. Besides, the original impact factor works for decreasing measure values, and thus, we modify it by deciding the sign of impact factor adaptively for each fault. We do not compare with iDice [8] due to its inferior performance in our scope according to our previous version [6] and MID [5]. We set \(\delta=0.9\) for all cases. The parameters of other algorithms are set following the original papers. Note that all approaches except ImpAPTr have no idea about \(n\_ele\) or \(cuboid\_layer\) of the faults. In our evaluation, we always apply MA (moving average) for forecasting. Specifically speaking, we calculate the forecast value of a leaf attribute combination \(e\) at a specific time point \(t_{0}\) by averaging the real values of \(e\) at \(t_{-10},t_{-9},...,t_{-1}\). We choose MA because MA is one of the simplest forecast algorithms and costs little time. We present the time usage for a single leaf attribute combination of several algorithms used by existing work in Table 5. 
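For concreteness, a minimal sketch of this MA forecast for a single leaf attribute combination (window size 10, as described above; the historical values are illustrative):

```python
import numpy as np

def moving_average_forecast(history, window=10):
    """Forecast value at t0: the mean of the real values at t-10, ..., t-1."""
    return float(np.mean(np.asarray(history, dtype=float)[-window:]))

# Real values of one leaf attribute combination over the previous ten time points.
past = [52, 48, 50, 51, 49, 50, 53, 47, 50, 50]
print(moving_average_forecast(past))   # 50.0
```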
\begin{table} \begin{tabular}{l l l l} \hline \hline **Algorithm** & MA & Period [29] & ARIMA [10] \\ \hline **Time Usage (\(\mu s\))** & 6.21 (\(\pm\)2.13) & 28.7 (\(\pm\)3.95) & 38672 (\(\pm\)32312) \\ \hline \hline \end{tabular} \end{table} Table 5: Time Usage Comparison of Forecast Methods

### RQ1: Effectiveness in Root Cause Localization

As shown in Fig. 6, averaged over the different \(n\_element\) and \(cuboid\_layer\) settings, _PSqueeze_ achieves the highest performance on all datasets and outperforms the baselines significantly. By further calculation, the F1-score of _PSqueeze_ outperforms the baselines (excluding Squeeze) by at least 32.89% on average over all settings. The improvement is significant according to Table 6.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & SQ & ADT & RAD & APR & HS & MID & IAP \\ \hline p-value & 7.9e-03 & 1.1e-41 & 6.2e-65 & 3.8e-04 & 1.9e-12 & 1.7e-18 & 3.3e-18 \\ effect size & 0.52 & 4.28 & 7.55 & 0.71 & 1.53 & 2.05 & 2.03 \\ \hline \hline \end{tabular} \end{table} Table 6: The \(p\)-value (by \(t\)-test) and effect size (by Cohen’s \(d\) [30]) of every baseline across all datasets

Moreover, _PSqueeze_ is more robust and performs consistently well in different situations. As shown in Table 7 and Table 8, regardless of the \(n\_element\), \(cuboid\_layer\), and dataset settings, _PSqueeze_ achieves good performance and outperforms all the other baselines in most (31 out of 54) settings. In Fig. 7, we present the F1-scores with different numbers of root causes on the simulated fault datasets. The results show that _PSqueeze_ achieves consistently good performance even if there is more than one root cause.

Table 7 and Table 8: F1-scores of _PSqueeze_ and the baseline approaches on each simulated fault dataset under every \(n\_element\) and \(cuboid\_layer\) setting.

Notably, _PSqueeze_ performs consistently well regarding different _anomaly magnitudes_. Formally speaking, the anomaly magnitude of a fault is \(\frac{|\sum_{e\in LE(\emptyset)}(v(e)-f(e))|}{\sum_{e\in LE(\emptyset)}f(e)}\), and it denotes the relative magnitude of the abnormal fluctuation of the overall measure value. As shown in Fig. 8, _PSqueeze_ and Squeeze achieve high performance regardless of the anomaly magnitude. However, the performance of Adtributor, HotSpot, and ImpAPTr varies considerably with different anomaly magnitudes. Though the performance of Apriori, R-Adtributor, and MID is relatively stable, their F1-scores are not high enough.

_PSqueeze_, like most other baselines, relies on forecast values. As shown in Table 8, _PSqueeze_ performs worse as the forecast residual grows larger (from \(\mathcal{B}_{1}\) to \(\mathcal{B}_{4}\), as shown in Table 4), but _PSqueeze_ consistently outperforms the others even in the worst case. The performance of Apriori is not monotonic in the forecasting residuals because its parameter setting is sensitive to them. Some approaches like HotSpot seem less affected by forecasting residuals, but they perform poorly. It is worth noting that _PSqueeze_ is much less sensitive to forecasting residuals than our previous version, Squeeze, which further demonstrates the effectiveness of our extension (the probabilistic clustering method in Section 4.2).

In those settings where _PSqueeze_ does not achieve the best performance, either Squeeze (3 out of 54), Apriori (14 out of 54), HotSpot (3 out of 54), or ImpAPTr (2 out of 54) achieves the best performance. However, the baseline approaches are not general and robust enough to perform well in different situations. Adtributor only localizes root causes in first-layer cuboids, and thus, it cannot work in any of the settings where \(cuboid\_layer\neq 1\).
Its _explanation power_ is sensitive to the impact of attribute combinations (_i.e._, how much data is specified by the attribute combinations), and thus it performs badly when anomaly magnitudes are small. Note that faults with root causes in deeper cuboids often have smaller anomaly magnitudes; hence the performance of Adtributor decreases as \(cuboid\_layer\) increases. Although R-Adtributor can localize root causes in any cuboid, it is hard for R-Adtributor to decide when the recursion should terminate, and thus it works poorly in most settings. Although Apriori achieves the best performance in several settings, it also works extremely badly in some settings (_e.g._, all settings on \(\mathcal{A}\), \(cuboid\_layer=3\) on \(\mathcal{B}_{1}\)) due to its sensitivity to parameters. The _hierarchical pruning strategy_ of HotSpot could wrongly prune the right search path, and thus HotSpot performs poorly when \(n\_ele\) or \(cuboid\_layer\) is large. Though both MID and ImpAPTr are designed for and limited to particular measures (the number of issue reports and the success rate, respectively), we have adapted them in our experiments. However, they still suffer from other limitations. The search strategies of MID and ImpAPTr do not consider multiple root causes, and thus, they perform poorly when \(n\_ele>1\). Moreover, MID's heuristic strategy is limited to its original scenario, and thus, when root-cause attribute combinations are in deeper cuboids, it is harder for MID to find them. The _contributor power_ in ImpAPTr is sensitive to the impact of attribute combinations, like _explanation power_, and thus it is inappropriate for faults with small anomaly magnitudes. While the baseline approaches suffer from these limitations, _PSqueeze_ consistently achieves good performance in different situations.

_PSqueeze_ underperforms other baselines, especially Apriori, in some settings where the forecasting residuals are large (_i.e._, in datasets \(\mathcal{B}_{3}\) and \(\mathcal{B}_{4}\)) or \(n\_elements\) and \(cuboid\_layers\) are large. It is mainly because, in such cases, the bottom-up clustering step is affected by noise. For example, some normal leaf attribute combinations could be grouped into a cluster due to the large forecasting residuals. To reduce the influence of noise, we apply probabilistic clustering in _PSqueeze_. As a result, as shown in Table 8, the larger the forecasting residuals are, the more _PSqueeze_ outperforms Squeeze. In general, _PSqueeze_ outperforms Squeeze in most settings (50 out of 54); its F1-score exceeds that of Squeeze by over 30% in 11 out of 54 settings, and the improvement can be up to 113.7%. As the forecasting residuals increase from \(\mathcal{B}_{1}\) to \(\mathcal{B}_{4}\), the improvement of _PSqueeze_ over Squeeze increases from 1.96% to 25.22%, and the effect sizes (Cohen's \(d\) [30]) are 0.27, 0.67, 0.95 and 1.10, respectively. Moreover, according to Table 6, the improvement is significant. By further calculation, the F1-score of _PSqueeze_ outperforms Squeeze by 11.73% on average.

In Fig. 9, we present the F1-scores on \(\mathcal{E}\) and \(\mathcal{F}\). The results show that _PSqueeze_ outperforms the baselines, including Squeeze, and achieves the best performance. Compared with the simulated faults (_i.e._, \(\mathcal{A}\), \(\mathcal{B}_{*}\), and \(\mathcal{D}\)), the performance of _PSqueeze_ slightly degrades on the injected faults.
It is probably because the data generation of \(\mathcal{E}\) and \(\mathcal{F}\) does not assume that GRE holds at all.

### RQ2: Effectiveness in Determining External Root Causes

To evaluate the determination of external root causes, we eliminate some attributes from the datasets to simulate external root causes. If a root cause contains any attribute that is eliminated, it becomes an external root cause. As shown in Table 9, _PSqueeze_ successfully determines external root causes with high F1-scores in almost all settings. In 110 out of 135 settings, the \(ExRC\_F1\text{-}Score\) of _PSqueeze_ exceeds 0.80. By further calculation, the \(ExRC\_F1\text{-}Score\) of _PSqueeze_ reaches 0.90 on average. To the best of our knowledge, no existing approach determines external root causes, and thus we do not compare with existing approaches. In some cases, the F1-scores are relatively low (_e.g._, 0.60). The main reason is that there can be high-GPS attribute combinations even if the exact root causes are external, due to the correlation among attributes.

**Finding 4**: _PSqueeze_ can effectively determine external root causes.

### RQ3: Efficiency

We evaluate the efficiency of _PSqueeze_ by comparing its average time cost per fault case with that of the other approaches. We run every experiment on a server with 24 \(\times\) Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (2 sockets) and 64G RAM. All algorithms are implemented in Python utilizing mature libraries such as Pandas and NumPy. Experiments of all algorithms are conducted under the same conditions. In Fig. 10 we present the running times on \(\mathcal{A},\mathcal{B}_{1},\mathcal{B}_{4},\mathcal{D}\). We do not present the results of all datasets because all of \(\{\mathcal{B}_{i},i=1,2,3,4\}\) have similar results. _PSqueeze_ costs only about ten seconds even in the worst cases. It is efficient enough since measures are usually collected every minute or every five minutes. HotSpot is sometimes as efficient as _PSqueeze_, but sometimes it costs much more time. Apriori costs hundreds of seconds, which is too slow to be practical. Others can be fast, but they do not effectively localize root causes.

**Finding 5**: _PSqueeze_ is efficient enough in practice to localize root causes for multi-dimensional data.

### RQ4: Performance under Different Configurations

All the parameters in Section 4 are automatically configured except \(\delta\), the GPS threshold. We present the F1-scores of _PSqueeze_ under different \(\delta\) in Fig. 11 with \(cuboid\_layer=3,n\_ele=3\) on \(\mathcal{B}_{i}\), \(i=1,2,3,4\). We only choose this setting because it is the hardest one. _PSqueeze_'s performance does not change much as \(\delta\) changes. Results in Table 7, Table 8 and Section 5.4 also show that \(\delta=0.9\) leads to good enough effectiveness and efficiency. Since \(\delta\) is the GPS threshold for early stopping and the distribution of GPS is not supposed to depend on the specific dataset, it is reasonable to set \(\delta\) near 0.9 regardless of the dataset.

**Finding 6**: _PSqueeze_ is robust to different configurations.

## 6 Success Stories

We have successfully applied _PSqueeze_ in several large commercial banks and a top Internet company. The results show that _PSqueeze_ can greatly help operators by rapidly and accurately localizing root causes. In this section, we present some representative success stories. For confidentiality reasons, some details are omitted or anonymized. Compared with our previous version, we apply _PSqueeze_ in more production systems from different companies and collect more cases (_e.g._, Cases 4 and 5).
### Case 1: Insignificant Anomaly Magnitude One day night, a fault occurred at a top Internet company. The HTTP error counts suddenly burst, as shown in Fig. 12. The attributes and the number of distinct values are listed as follows: data center (11), province (7), ISP (6), user agent(22). A potential root cause is manually found by the operators, consisting of only one attribute combination (\(AC1\) in Fig. 12), which took them one hour. We retrospectively ran _PSqueeze_ over this system's logs, and in several seconds, we found more root causes: \(AC1\) and \(AC2\), as shown in Fig. 12. It is obvious that \(AC2\) also has severe error bursts. \(AC2\) is mistakenly ignored by the operators because it occupied only a small fraction of the total error counts. Manual analysis apparently has difficulties in localizing root causes of anomalies with insignificant magnitudes. _PSqueeze_ would help the operators to notice such root causes efficiently. Figure 11: F1-Scores over different \(\delta\) with \(cuboid\_layer=3,n\_ele=3\). Figure 10: Running time comparison (for individual faults) of \(\mathcal{A},\mathcal{B}_{1},\mathcal{B}_{4},\mathcal{D}\). ### Case 2: Intra-System localization One day from 9:00 to 11:00, the operators of a bank received many tickets and alerts and noticed that the API call success rate of a system suffered a severe drop. The search space is large, and the attributes and the number of distinct values are listed as follows: province (38), agency (815), server group (16), channel (4), server (339), code (4), status (2), service type (3). After two hours of fruitless manual root cause localization, the operators decided to just roll back the entire system to the last version, which happened to actually fix the issue. After the roll-back, it took another 2 hours for an inexperienced operator on duty to eventually find the root cause (there was a bug in the newly deployed version of the software for Service Type 0200020) based on the 2-hour logs during the fault. Upon the request of the operators, we retrospectively ran _PSqueeze_ over this system's logs during the fault. _PSqueeze_ took a few seconds to report the root cause (Service Type=020020), which indicates exactly the software with a buggy version update. Had _PSqueeze_ been used immediately after the fault happened, operators could have localized the root cause much faster. Fig. 13 shows this case's results of deviation-score-based clustering. We can see that the deviation scores of all descended leaf attribute combinations of the root cause (Service Type=020020) are very close to each other. This to some extent confirms the generalized ripple effect. ### Case 3: Inter-System localization One day, there was a burst of failures in a bank's transaction system. There are many subsystems that communicate with each other by API (application programming interface) calls. The search space is also large, and the attributes and the number of distinct values are listed as follows: source (13), source IP (66), destination (7), destination IP (10), interface (135). The operators located the root cause (destination=ic in Fig. 14) in ten minutes by manually analyzing many faulty traces (a series of API calls on different services to realizing on transaction is a trace [31; 32; 33]). Again, upon the request of the operators, we retrospectively ran _PSqueeze_ over this system's API call logs during the fault. _PSqueeze_ localized the root causes, as shown in Fig. 14, in just several seconds. 
It also confirms the generalized ripple effect because deviation scores of leaf attribute combinations that are descended from the same root-cause attribute combination are close to each other. Note that _PSqueeze_ reports more root-cause attribute combinations than the operators found. The operators confirm that these additional root-cause attribute combinations are indeed valid: they are abnormal but are just affected by the _ic_ service due to the dependency among services. Had _PSqueeze_ actually been used immediately after the issue, operators could have localized the root cause much faster (seconds vs minutes) and more accurately. We also run some other algorithms on Case 3 (see Table 10). ### Cases 4 and 5: External Root Causes One day at about 00:30, a fault occurred at a system of a large commercial bank, and thus a lot of transactions suffered long response latency. There are many attributes and attribute values: transaction status (4), host IP (75), return code (275), transaction code (636), and MQ name (5). Therefore, manually localizing root cause attribute combinations is very challenging. We retrospectively ran _PSqueeze_ on this system's logs, and it successfully localized (return code = 0014) as the root cause (see Fig. 15). The operators then manually confirmed that this attribute combination directly indicates the exact underlying root cause. We also compare _PSqueeze_ with other baselines in these cases and present the comparison in Table 10. Note that although Apriori successfully localizes the root cause, it also localizes 55 non-root-cause attribute combinations. Initially, we did not take the two attributes, return code and transaction code, into consideration since they have too many attribute values. Then _PSqueeze_ reported that there might be an external root cause, as \(min\_GPS\) (refer to Algorithm 3) was only 0.81. Moreover, all these approaches give invalid root causes: _PSqueeze_, HotSpot, Adtributor, MID and ImpAPTr localize almost all MQ names; R-Adtributor also localizes most MQ names combined with \(transaction\ status=S\); and Apriori gives dozens of root causes. Operators are not able to infer the underlying root cause from such results, and they can actually be misled. Thus, we took all available attributes into consideration and then found the exact root cause. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline true RC? & _PSqueeze_ & SQ & HS & APR & ADT & RAD & MID & IAP \\ \hline Case 1 & Yes & Yes & No & No & No & No & No \\ Case 3 & Yes & Yes & Yes & Yes & No & No & Yes \\ Case 4 & Yes & No & No & Yes* & Yes & No & No & No \\ Case 5 & Yes & No & No & Yes* & Yes & No & No & No \\ \hline \hline \end{tabular} * Reporting too many false positives to be really helpful. \end{table} Table 10: Qualitative comparison on industrial cases: whether the algorithm can find the true root cause. Some cases are missing due to deployment issues. Figure 14: The root-cause attribute combinations and the histogram of deviation scores of their descendant leaves in _Case 3_. Figure 12: Measure values (in log-scale) along time of _Case 1_. \(AC1\) is (user agent=uc\(\times\)idc=inh\(\times\)province=other), and \(AC2\) is (user agent=uc/idc=is/province=other). Figure 13: The histogram of deviation scores in _Case 2_. At another system of the same bank, another fault caused the transaction count to increase slightly and deviate from its normal state. We also retrospectively ran _PSqueeze_ and other baselines, and the comparison is presented in Table 10.
_PSqueeze_ accurately localizes the root cause attribute combination, "RETTYPE=1006" (see Fig. 16). Note that Apriori localizes too many (17) non-root-cause attribute combinations again. For both Cases 4 and 5, Squeeze did not localize the exact root-cause attribute combinations due to the large forecasting residuals. Similar to Case 4, we did not find the exact root cause until _PSqueeze_ notified us that there might be external root causes and we considered more attributes. ## 7 Discussion ### Threats to Validity The major threat to validity lies in the lack of real-world datasets. However, given the difficulty in accessing real-world datasets, all the closely related works, as well as our previous conference version, use simulated faults, except iDice [8] and MID [5], which are both from Microsoft. We think there are two reasons for the infeasibility of real-world datasets. First, on the one hand, there is a limited number of faults in real-world production systems. Among them, the number of faults whose root causes can be indicated by root-cause attribute combinations is even smaller. On the other hand, in many cases, due to the lack of automated root-cause attribute combination localization tools, the faults are diagnosed with the help of other monitoring data, such as logs and traces, and thus, the root-cause attribute combinations are missing from the failure tickets. Therefore, valid real-world cases that have ground-truth root-cause attribute combinations are somewhat rare. It could take years for engineers to collect hundreds of valid cases. Second, multi-dimensional data are usually highly confidential. Unlike infrastructural metrics, such as CPU utilization and network throughput, the fields in the request logs that we use to build multi-dimensional data are usually sensitive, such as the dollar amount and the user's location. The high risk also hinders companies from sharing real-world multi-dimensional data publicly. To mitigate this threat to validity, first, we provide several real-world fault cases. Second, compared with the previous conference version, we added two new datasets (i.e., \(\mathcal{E}\) and \(\mathcal{F}\)), which were generated by realistic fault injection. ### Limitations _Root causes of multi-dimensional data_ are not exact root causes of faults but only clues to them. Nevertheless, localizing root causes for multi-dimensional data is important and helpful since it can direct further investigation right after faults occur. _PSqueeze_, as well as the previous approaches, relies on time-series forecasting. To reduce the influence brought by inaccurate forecasting, especially when the real value is small, we introduce probabilistic clustering. The experiment results show that _PSqueeze_ is robust to forecasting residuals. Like most previous approaches [7; 4; 9; 11; 8; 5; 10; 3], _PSqueeze_ focuses only on categorical attributes and cannot leverage numerical attributes directly. We observe that numerical attributes are much less prevalent in practice (_e.g._, in the five companies studied in this paper). According to our interviews with some engineers, they choose not to record them because they are not sure how to use them for diagnosis. We will work on supporting numerical attributes in the future. Like most previous approaches [10; 11; 8; 4; 5; 9] except Apriori [7; 3], _PSqueeze_ focuses only on numerical measures.
However, operators usually aggregate only numerical measures as time-series for monitoring and fault discovery in current industrial practice since categorical measures are not suitable for this. Thus operators are concerned only with numerical measures. ## 8 Related Work Recently, many approaches have focused on fault diagnosis in various contexts. Most are different from ours [34; 35; 36; 1; 37; 31; 38; 39; 33; 40; 21]. On the one hand, we focus on root cause localization on multi-dimensional data. On the other hand, some of these works use intuitive domain-knowledge-based empirical methods, while we propose a generic algorithm. There are also several approaches focusing on localizing root causes for multi-dimensional data [10; 11; 8; 4; 7; 5; 3; 9]. We compare them in Table 11 in three aspects, _i.e._, genericness, robustness, and efficiency. Figure 16: Case 5. If the return type is 1006, then the transaction count increases a lot; otherwise the average latency keeps steady. Figure 15: Case 4. If the return code is 0014, then the average latency increases a lot; otherwise the average latency keeps steady. Some approaches focus on a specific scenario rather than generic multi-dimensional data. For example, Adtributor [10] only cares about single-attribute root causes. iDice [8], MID [5] and ImpAPTr [9] utilize the special properties of specific types of measures (_i.e._, #issue reports and success rate) and thus are not generic. Many approaches [4, 8, 9] filter attribute combinations by their impact (_e.g._, the percent of issue reports or transactions under them) and thus cannot handle insignificant anomalies. Due to the design of termination conditions or pruning strategies, some approaches rely on parameter fine-tuning [11, 7, 3], and the running time of some approaches [4, 8, 5, 7, 3] varies across different faults. Time series forecasting has been extensively studied, and there are many approaches. Statistical approaches [41, 42, 29, 43] make some statistical assumptions on the time series. Supervised ensemble approaches [16] combine statistical approaches in a supervised manner. Recently, unsupervised deep-learning-based approaches [15, 17] are making great progress. The selection of appropriate forecasting algorithms is usually based on the nature of the data [16]. There are many other studies on the analysis of multi-dimensional data. A series of studies [44, 45, 46] focus on identifying interesting patterns (a.k.a. insights) in multi-dimensional data. Lumos [47] diagnoses metric regressions by ranking attributes by their importance after regression. Castelluccio et al. [48] focus on mining contrasting sets by statistical tests, which are attribute combinations with distinctive supports in different groups, to find attribute combinations related to a specific group of crashes. Liu et al. [38] debug high response time by mining distinctive conditions for high response time with the help of the decision tree algorithm. ## 9 Conclusion Given the importance of root cause localization for multi-dimensional data, many approaches have been proposed recently. However, they are not generic or robust enough due to some limitations. In this paper, we propose a more generic and robust approach, _PSqueeze_. _PSqueeze_ employs a novel "bottom-up&top-down" searching strategy based on our proposed generalized ripple effect to achieve high efficiency without much loss of genericness and robustness.
Notably, this paper further extends our previous studies by a probabilistic clustering method and a method for determining external root causes. We conduct extensive experiments on both simulated and injected faults. The results show that the F1-score of _PSqueeze_ outperforms previous approaches by 32.89% on average and consistently costs only about 10s. Besides, the F1-score in determining external root causes reaches 0.90 on average. Furthermore, case studies in several large commercial banks and an Internet company show that _PSqueeze_ can localize root causes much more rapidly and accurately than traditional manual analysis in practice. ## 10 Acknowledgment The authors gratefully thank Suning.com Co., Ltd. for providing parts of the data used in this paper's evaluation. We thank Juexing Liao and Chuanxi Zheng for proofreading this paper. This work is supported by the National Key R&D Program of China 2019YFB1802504, and the State Key Program of National Natural Science of China under Grant 62072264.
2307.12445
SCRAPS: Speech Contrastive Representations of Acoustic and Phonetic Spaces
Numerous examples in the literature have proved that deep learning models have the ability to work well with multimodal data. Recently, CLIP has enabled deep learning systems to learn shared latent spaces between images and text descriptions, with outstanding zero- or few-shot results in downstream tasks. In this paper we explore the same idea proposed by CLIP but applied to the speech domain, where the phonetic and acoustic spaces usually coexist. We train a CLIP-based model with the aim of learning shared representations of phonetic and acoustic spaces. The results show that the proposed model is sensitive to phonetic changes, with scores dropping in 91% of cases when replacing 20% of the phonemes at random, while providing substantial robustness against different kinds of noise, with a 10% performance drop when mixing the audio with 75% Gaussian noise. We also provide empirical evidence showing that the resulting embeddings are useful for a variety of downstream applications, such as intelligibility evaluation and the ability to leverage rich pre-trained phonetic embeddings in speech generation tasks. Finally, we discuss potential applications with interesting implications for the speech generation and recognition fields.
Ivan Vallés-Pérez, Grzegorz Beringer, Piotr Bilinski, Gary Cook, Roberto Barra-Chicote
2023-07-23T22:18:47Z
http://arxiv.org/abs/2307.12445v2
# SCRAPS: Speech Contrastive Representations of Acoustic and Phonetic Spaces ###### Abstract Numerous examples in the literature proved that deep learning models have the ability to work well with multimodal data. Recently, _CLIP_ has enabled deep learning systems to learn shared latent spaces between images and text descriptions, with outstanding zero- or few-shot results in downstream tasks. In this paper we explore the same idea proposed by _CLIP_ but applied to the speech domain, where the phonetic and acoustic spaces usually coexist. We train a _CLIP_-based model with the aim to learn shared representations of phonetic and acoustic spaces. The results show that the proposed model is sensible to phonetic changes, with a 91% of score drops when replacing 20% of the phonemes at random, while providing substantial robustness against different kinds of noise, with a 10% performance drop when mixing the audio with 75% of _Gaussian_ noise. We also provide empirical evidence showing that the resulting embeddings are useful for a variety of downstream applications, such as intelligibility evaluation and the ability to leverage rich pre-trained phonetic embeddings in speech generation task. Finally, we discuss potential applications with interesting implications for the speech generation and recognition fields. ## 1 Introduction Recent advances in deep learning have catapulted computer vision, improving the domain across multiple axes [5]. One of the most important breakthroughs is _CLIP_ (Contrastive Language-Image Pretraining) [24], a deep learning model trained on vast amounts of multimodal data. In this case, by means of contrastive learning [19], _CLIP_ showed the ability to learn to match images with written descriptions, and vice-versa. The representation power of this model at zero-shot regimes is such that it is able to surpass supervised models specifically trained for a determined task. The embeddings generated with _CLIP_ have also been proved useful for other downstream tasks involving image and/or descriptions, in form of pretrained embeddings with rich latent representations [6, 27, 16, 31]. In these settings, reusing _CLIP_ embeddings becomes a cornerstone for problems with small datasets. The extraordinary success of _CLIP_ and the simplicity of its approach motivates exploring other applications to other fields where multimodal data is essential. The field of speech technologies has come a long way in recent years, with deep learning models achieving impressive performance on a wide range of tasks [32, 28, 10]. However, there are still many challenges and opportunities in this domain [11], particularly when it comes to exploiting large amounts of data. On the speech generation side, one of the main difficulties is to build a model that correctly aligns the phonetic and acoustic sequences, leading to a natural prosody with fluent speech and high intelligibility, while still capturing the prosody variations [25]. On the opposite side, automatic speech recognition systems struggle with long-tail words recognition [35], and speech vs background disentanglement [18]. This work explores the use of _CLIP_ models for learning shared phonetic-acoustic latent spaces that can be later exploited for further downstream tasks. The methodology described in this study, which we name _SCRAPS_ (Speech Contrastive Representation of Acoustic and Phonetic Spaces) offers a promising approach for learning such common representations. 
By learning phonetic-acoustic representations, we aim to enable the use of rich latent spaces in a variety of downstream tasks, such as cheap intelligibility evaluation, rich pre-trained embeddings, a differential intelligibility loss and on-the-fly training data filtering. This last example is specially relevant at the moment given the vast amounts of found data available online. We believe that this approach has the potential to help advancing the state of the art in speech generation and recognition towards learning from large-scale data, looking at the impact that _CLIP_ model has had in the computer vision domain. In the following sections, we will provide a detailed description of our proposed methodology, as well as experiments and results that backup our hypotheses, demonstrating its effectiveness. We also discuss potential downstream applications and use cases where the _SCRAPS_ model may be useful, together with some initial experiments, and suggest future research lines. ## 2 Related work There are various examples in the literature with notable similarity to the work presented in this paper, one of the most relevant being _CLAP_[8]. _CLAP_ stands for Contrastive Language-Audio Pretraining, and it is a _CLIP_ based model that is trained on text description-audio pairs with the aim of producing a shared multimodal representation of acoustic and semantic domains. Although arguably similar to this work, _CLAP_ uses the text from a semantic perspective, as the dataset used to train this model contents pairs of audio samples with corresponding textual descriptions. _SCRAPS_ is trained over pairs of audio clips containing speech utterances and their corresponding transcribed phonetic sequences. The objective of _SCRAPS_ is to learn a shared latent space between the phonetics and its acoustic realizations, hence the nature of the problem is orthogonal to _CLAP_. Another example of _CLIP_-based approach that operates in the speech domain is _SpeechCLIP_[29]. In that work, the authors build a system that learns shared representations of images with spoken captions. By means of _CLIP_, the authors establish a third alignment with text descriptions. In _M-SpeechCLIP_[2], the authors study the usefulness of English pre-trained _SpeechCLIP_ to other languages and also propose multilingual versions of it. _Wav2CLIP_[36] is another work with the same idea as _SpeechCLIP_, with the difference that the authors use the _CLIP_ image encoder to distill an audio encoder that produces latent representations compatible with the images. _AudioCLIP_[13] is another example where the original _CLIP_ model gets extended to the audio domain. The authors first pre-train the audio encoder using the frozen pre-trained _CLIP_ encoders, and later unfreeze all the modules to fine-tune the whole system together. These approaches differ from ours in the following points: 1. The approaches described above operate with text descriptions and at word level to infer semantics (as they rely on _CLIP_ text encoder), while _SCRAPS_ operates in the phonemes space, which brings the problem closer to the speech generation/recognition field, and makes it more practical for those use-cases. 2. The models found in the literature (aside from CLAP) rely on the pre-trained _CLIP_ models (_SpeechCLIP_ at inference time and _Wav2CLIP_ and _Audio2CLIP_ at training time), while our solution is an end-to-end model, and all the models are directly optimized to work with phonetic and acoustic spaces. 3. 
_SpeechCLIP_ and _Wav2CLIP_ do not directly optimize the speech-text shared space, but use images as a common intermediate space. In practice, this means that the error inherent in the original _CLIP_ model (which operates on the image-text multimodality) becomes effectively irreducible, and it is added to the image-audio model error. To the best of our knowledge, our work is the first attempt to use a _CLIP_-based strategy to learn joint phonetic and acoustic spaces. For this reason, we believe this work is of high interest for the speech generation/recognition community. ## 3 Methods ### Clip _CLIP_ is composed of two encoders, one for the images and one for the text. Each one receives the corresponding input (_i.e._ the image is fed to the visual encoder and its description to the text encoder) and generates a real valued vector, one derived from the image \(\vec{u}\in\mathbb{R}^{D}\), another derived from the text \(\vec{v}\in\mathbb{R}^{D}\), both with the same dimensionality \(D\). The objective of the network is to make both vector spaces compatible, such that corresponding image-text pairs produce, ideally, the same vectors - e.g. \((\vec{u}_{1},\vec{v}_{1})\) - and non-corresponding pairs produce dissimilar vectors \((\vec{u}_{1},\vec{v}_{2})\). This is achieved by maximizing the dot product between corresponding pairs of vectors, which matrix is represented as \(\mathbf{L}\), with shape \(N\times M\), _i.e._\(N\) images times \(M\) descriptions, and minimizing it for non-corresponding pairs. In the original form, \(M=N\) as the positive and negative pairs are constructed using all the pairwise combinations of the elements in the _minibatch_. This is done by using a symmetric cross-entropy loss (cross-entropy across text and across images), as described in Equation 2, where \(\mathbf{L}\) represents the logits matrix resulting from the dot product of the outputs of the two encoders, and \(\mathbf{P_{1}}\) and \(\mathbf{P_{2}}\) are the two _softmax_-normalized matrices across the two possible axes (\(T\) represents the _softmax_ temperature). The pairing of images and descriptions is done using the elements of the _minibatch_. \[\mathbf{P_{1}}(i,j)=\frac{e^{\mathbf{L}(i,j)/T}}{\sum_{k=1}^{N}e^{\mathbf{L}( k,j)/T}},\ \mathbf{P_{2}}(i,j)=\frac{e^{\mathbf{L}(i,j)/T}}{\sum_{k=1}^{M}e^{\mathbf{L}(i,k )/T}}. \tag{1}\] \[\mathcal{L}=\sum_{j=1}^{M}\sum_{k=1}^{N}\mathbf{I}(k,j)\log\mathbf{P_{1}}(k,j )+\sum_{i=1}^{N}\sum_{k=1}^{M}\mathbf{I}(i,k)\log\mathbf{P_{2}}(i,k). \tag{2}\] ### Scraps _SCRAPS_ is an adaption of the original _CLIP_ idea to the speech domain. Consequently, the description encoder included in the original _CLIP_ architecture is replaced by a phonetic encoder, which receives a sequence of phonemes as input, and produces a vector as output. The image encoder is replaced by an acoustic encoder, which receives a mel-spectrogram as input and produces a vector as output. The architecture used for this study is represented in Figure 1. As it can be seen in the figure, the model is built of two blocks: a phonetic encoder (left hand side) and an acoustic decoder (right hand side). Both modules take their corresponding input and produce a \(D\) dimensional vector (time independent): \(\mathbb{R}^{D}\). As shown in the diagram, the blocks contain two modules: a transformer backbone (inspired on transformer TTS [20, 33]), and an _LSTM_[14] on top. The role of the _LSTM_ is to integrate the representations coming from the transformers whose inputs have dynamic lengths. 
It produces a forward-rolling integration of the outputs vectors from the transformers. The last state of the _LSTM_ is used as the final _SCRAPS_ vector. The _LSTMs_ of the two encoders share weights to propagate the latent spaces compatibility back to the output of the transformer outputs (see section 4.7 as an empirical proof of this assumption). That helps the time-dependent vectors live in the same space for both encoders, while still maintaining a time-dependent latent representation which may be used for future downstream tasks where the either of the _SCRAPS_ encoders is used as a pre-trained module. As it is shown in the middle of Figure 1, the _SCRAPS_ model is trained with a contrastive loss where the objective is to maximize the scores of the matching pairs, while minimizing the scores of the non-matching pairs. This is achieved by computing the dot product of \(\mathbf{T}\in\mathbb{R}^{B\times D}\), i.e. _SCRAPS_ vectors of the \(B\) phonetic sequences of the _minibatch_, and \(\mathbf{A}\in\mathbb{R}^{B\times D}\), i.e. _SCRAPS_ vectors of the \(B\) mel-spectrograms in the _minibatch_: \(\mathbf{L}=\mathbf{T}\mathbf{A}^{T}\);\(\mathbf{L}\in\mathbb{R}^{N\times N}\). \(\mathbf{L}\) is referred as the _SCRAPS_ scores matrix, hereafter. _SCRAPS_ is trained with the same loss as in the original _CLIP_ model (Equation 2). Figure 1: _SCRAPS_ model architecture, with a phonetic encoder in the left hand side, and an acoustic encoder in the right hand side. Experiments ### Setup We have trained a _SCRAPS_ model on a large proprietary dataset containing 60,000 hours of de-identified speech data in US English. The data is composed of recordings of untrained speakers with variety of background noises, unnormalized pauses, and in some cases, even some samples with concurrent speech where one of the speakers dominates over the others. Each recording is accompanied with its corresponding transcription. The phonetic sequences are derived from the text transcriptions by using a simplified grapheme to phoneme processor (G2P) based on a dictionary of pronunciations. The _X-Sampa_ phoneme set [34] has been used along all the experiments. 80-mel spectrograms have been extracted from the recordings (using 50ms window length, 12.5ms skips and 1024-point FFTs), and then they have been standardized. We held out a set composed of 5000 utterances, for development purposes. Along the following experiments, we describe various test sets. These test sets are always disjoint from the training set described above. We have trained that model for 2.5M training steps using _Adam_ optimizer [17] with default hyper-parameters, a learning rate of \(5\cdot 10^{-4}\), a _minibatch_ size of 128 and using a machine with 8 _Nvidia V100 16GB Tensor Core GPUs_. Both the phonetic and the acoustic encoder transformer backbones are built with 3 layers of 8 attention heads each and a dropout rate of 0.1. The encoders operate with hidden and output spaces of size \(\mathbb{R}^{256}\). The _LSTM_ integrator head, which is shared between both encoders, is a single layer _LSTM_ with 1024 units, which corresponds to the size of the output embeddings (similar to _CLIP_). The last output state of the _LSTM_ is used as the final _SCRAPS_ vector. We used a temperature of \(T=1.0\) during all the training and concluded that our training process is stable with that setting. Contrastive models are difficult to evaluate in an objective manner [26]. 
In our case, just taking an unseen set of utterances, computing the _SCRAPS_ vectors from a random sample of the spectrograms and phonetic sequences and measuring how well the model discriminates between matching and non-matching pairs is not informative, due to the fact that the examples in the _minibatches_ are very dissimilar among them (i.e. when the samples are chosen at random to build the _minibatch_ it may be too easy to discriminate between positive and negative pairs). We have conducted this exercise with a held-out test set of around 6000 samples, and achieved an Equal-Error-Rate (EER) in the order of \(10^{-4}\) and Area Under the Receiver Operating Curve (_AUC-ROC_) in the order of \(1-10^{-7}\). Handcrafting a test set by enrolling challenging negative samples is a possible alternative [4], but it relies on a criterion of difficulty that is not easy to define and that is prone to subjective biases. This difficulty motivates the exploration of more creative methods for understanding the performance of our model. We propose a sensitivity and robustness analysis to study how does the model react against perturbation of the input data (referred hereafter as corruption). In this analysis, we corrupt either the spectrogram or the phonetic sequence and measure how the results vary due to that perturbation. We use the following methods to corrupt spectrograms or phonetic sequences: * **Random phonemes substitution**: substitute a portion of phonemes in the phonetic sequence at random. * **Spectrogram _Gaussian_ noise**: add different amounts of _Gaussian_ noise to spectrogram. The noise is added according to Equation 3, where \(\mathbf{M}\) is the spectrogram, \(\mathbf{N}\) is the noise and \(\alpha\in[0,1]\) is the varying parameter that controls the amount of noise. * **Spectrogram mix**: analogous to the _Gaussian_ noise method, but replacing the noise with another random sampled spectrogram from the _minibatch_. Equation 3 is used to mix the original and the noise sources. \[\mathbf{\hat{M}}=(1-\alpha)\cdot\mathbf{M}+\alpha\cdot\mathbf{N}.\] (3) ### Sensitivity We take a random sample of 1152 matching phonetic sequences and spectrograms from the test set, and calculate their _SCRAPS_ score. We apply the corruption methods described above, one by one, and calculate their corresponding _SCRAPS_ scores. Finally, for each of the tested corruption proportions we calculate the percentage of cases where the _SCRAPS_ score of the corrupted examples _drops_ or _lifts_ by any amount, with respect to the original score (without corruption). High sensitivity to phonetic corruption is desirable to spot if utterances and transcriptions match. Hence, a good model would have an early raise of % of drops, and none/very few lifts. However, for acoustic corruption, high sensitivity is bad. Corrupting the acoustics to change the content is not an easy task. We instead noised the utterances with Gaussian noise, or by mixing in another utterance. With those methods a good model would have low sensitivity to acoustic corruption (i.e., ability to match phonetic and acoustic content in adverse conditions). The results are represented visually in Figure 2-top and 2-middle. Table 1 shows the percentage of _drops_ and _lifts_ in _SCRAPS_ scores after corruption for a set of specific corruption amounts. 
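For reference, the three corruption procedures above can be sketched as follows. This is a minimal NumPy illustration with placeholder shapes and names (the phoneme inventory and spectrogram sizes are assumptions, not the exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_phonemes(phonemes, proportion, phoneme_inventory):
    """Random phoneme substitution: replace a given proportion of positions
    with phonemes drawn uniformly from the inventory."""
    phonemes = list(phonemes)
    n_swap = int(round(proportion * len(phonemes)))
    for i in rng.choice(len(phonemes), size=n_swap, replace=False):
        phonemes[i] = rng.choice(phoneme_inventory)
    return phonemes

def mix(spectrogram, noise, alpha):
    """Equation (3): convex mix of the spectrogram with a noise source, which
    is either Gaussian noise or another spectrogram from the minibatch."""
    return (1.0 - alpha) * spectrogram + alpha * noise

mel = rng.standard_normal((80, 400))           # standardized 80-mel spectrogram (placeholder)
gaussian_corrupted = mix(mel, rng.standard_normal(mel.shape), alpha=0.75)
other = rng.standard_normal(mel.shape)         # another utterance from the minibatch (placeholder)
mixed_corrupted = mix(mel, other, alpha=0.40)
```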
In Table 1 we observe that in the \(91.06\%\) of the examples where at least \(20\%\) of the phonemes are corrupted, the _SCRAPS_ model reacts by decreasing the score of the match, while in \(2.62\%\) of the cases the model score increases. Finally, we also study the effect that the number of phonemes in the sequences has in the _SCRAPS_ score (see figure 3), observing that the distribution of _SCRAPS_ score remains constant when the number of phonemes is greater than 20. ### Robustness Another important feature to test on the _SCRAPS_ model is how robust it is against perturbations. In the worst case scenario, this model would only work on conditions that are very similar to those of the training data, and completely fail if the data characteristics change slightly, which is undesirable. To measure the robustness, we have calculated the _AUC-ROC_ score over the _SCRAPS_ probabilities of each _minibatch_, considering the matching pairs as positive examples and the non-matching pairs as negative examples. We have calculated that score after corrupting the inputs in different quantities. These measurements have been performed over 1152 random sampled examples from the test set. The results of this experiment for the two types of noise are represented in Figure 2-bottom where the amount of noise has been represented in the \(x\) axis and the _AUC-ROC_ in the \(y\) axis. Table 1 provides the _AUC-ROC_ scores achieved for a set of manually selected noise amounts (\(\alpha\) in Equation 3). As we can observe, the _SCRAPS_ model is robust against different levels of noise depending on its nature. For _Gaussian_ noise, the performance degrades a \(10\%\) or more when adding noise with \(\alpha\geq 0.75\), However, for spectrogram mixing, the performance drops at least a \(10\%\) when \(\alpha\geq 0.52\). An interesting observation arises when looking at Figure 2-bottom: the _Gaussian_ noise curve (green) never reaches the random performance level (even for \(\alpha=1.0\), point in which the spectrograms have no information at all, the _AUC-ROC_ is significantly greater than 0.5). We hypothesize that this is due to the fact that despite not providing explicit information to the acoustic transformer, it extracts information from the positional encoding to later determine if the length of the acoustic input is feasible for the phonetic sequence provided. ### Covariate shift Looking at the results of subsection 4.2, one could think that the observed effects may be due to covariate shift induced by the corruption methods described. Four additional experiments have been conducted. These experiments involved more sophisticated corruption techniques such as word swaps, cut-offs, accent switching, and denoising. In all cases, the _SCRAPS_ scores were affected by the corruptions, hence the likelihood of covariate shift being the cause of the initial drop in scores was determined to be minimal. Appendix A, which we included in the preprint version of this manuscript due to length constraints, describes the findings of these experiments more in detail. ### Open source evaluation baselines With the aim of facilitating future benchmarking and reproducibility, we have included two extra evaluations using the _Common Voice_ test set. In the first experiment we measure the correlation between _SCRAPS_ scores and _Whisper_ base perplexity using the invalidated subset of the _Common Voice_ dataset, observing a Pearson correlation of -0.60. 
This indicates a significant level of agreement between _SCRAPS_ and _Whisper_, which establishes a first reproducible benchmark. The second experiment replicates the sensitivity analysis against natural noise (denoising) described previously (Appendix A). This time, the open source _Common Voice_ test set is used, which contains approximately 16K utterances. After denoising, the _SCRAPS_ scores increase by 2.94 points, with 72.79% of the utterances showing a score increase, confirming that _SCRAPS_ scores significantly increase after denoising the utterances. Due to length restrictions, these results have been included in Appendix B of the preprint. \begin{table} \begin{tabular}{r|c c|c c|c c|c c c} \hline \hline \multicolumn{2}{c|}{Space} & \multicolumn{4}{c|}{Acoustic} & \multicolumn{4}{c}{Phonetic} \\ \hline Method & \multicolumn{3}{c|}{Mix spectrograms} & \multicolumn{4}{c|}{_Gaussian_ noise} & \multicolumn{4}{c}{Random substitution} \\ \hline Metric & _AUC-ROC_ & _Drop (\%)_ & _Lift_ (\%)_ & _AUC-ROC_ & _Drop (\%)_ & _Lift_ (\%) & _AUC-ROC_ & _Drop (\%)_ & _Lift_ (\%) \\ Corruption (\%) & & & & & & & & & \\ \hline 5.0 & 1.00\(\pm\)0.00 & 46.01\(\pm\)2.88 & 53.99\(\pm\)2.88 & 1.00\(\pm\)0.00 & 43.49\(\pm\)2.86 & 56.51\(\pm\)2.86 & 1.00\(\pm\)0.00 & 52.78\(\pm\)2.88 & 8.94\(\pm\)1.65 \\ 10.0 & 1.00\(\pm\)0.00 & 49.31\(\pm\)2.89 & 50.69\(\pm\)2.89 & 1.00\(\pm\)0.00 & 45.14\(\pm\)2.87 & 54.86\(\pm\)2.87 & 1.00\(\pm\)0.00 & 76.91\(\pm\)2.43 & 7.20\(\pm\)1.49 \\ 20.0 & 1.00\(\pm\)0.00 & 53.21\(\pm\)2.88 & 46.79\(\pm\)2.88 & 1.00\(\pm\)0.00 & 47.83\(\pm\)2.88 & 52.17\(\pm\)2.88 & 1.00\(\pm\)0.00 & 91.06\(\pm\)1.65 & 2.60\(\pm\)0.92 \\ 40.0 & 1.00\(\pm\)0.00 & 79.43\(\pm\)2.33 & 20.57\(\pm\)2.33 & 1.00\(\pm\)0.00 & 63.54\(\pm\)2.78 & 36.46\(\pm\)2.78 & 0.97\(\pm\)0.01 & 96.44\(\pm\)1.07 & 1.22\(\pm\)0.63 \\ 60.0 & 0.76\(\pm\)0.02 & 95.23\(\pm\)12.38 & 4.77\(\pm\)1.23 & 1.00\(\pm\)0.00 & 92.10\(\pm\)1.56 & 7.90\(\pm\)1.56 & 0.85\(\pm\)0.02 & 98.44\(\pm\)0.72 & 0.87\(\pm\)0.54 \\ 80.0 & 0.54\(\pm\)0.03 & 99.57\(\pm\)0.38 & 0.43\(\pm\)0.38 & 0.80\(\pm\)0.02 & 99.22\(\pm\)0.51 & 0.78\(\pm\)0.51 & 0.71\(\pm\)0.03 & 99.48\(\pm\)0.42 & 0.35\(\pm\)0.34 \\ 90.0 & 0.52\(\pm\)0.03 & 99.65\(\pm\)0.34 & 0.35\(\pm\)0.34 & 0.65\(\pm\)0.03 & 99.13\(\pm\)0.54 & 0.87\(\pm\)0.54 & 0.62\(\pm\)0.03 & 99.57\(\pm\)3.38 & 0.35\(\pm\)0.34 \\ 95.0 & 0.51\(\pm\)0.03 & 99.65\(\pm\)0.34 & 0.35\(\pm\)0.34 & 0.57\(\pm\)0.03 & 99.39\(\pm\)0.45 & 0.61\(\pm\)0.45 & 0.59\(\pm\)0.03 & 99.83\(\pm\)0.24 & 0.17\(\pm\)0.24 \\ \hline \end{tabular} \end{table} Table 1: Robustness (_AUC-ROC_) and sensitivity (_drop_ and _lift_) results of _SCRAPS_ at different levels of corruption. The second row of the table indicates the method used to corrupt the inputs in each case. All the results are represented as the mean \(\pm\) the 95% confidence interval. Figure 3: Distribution of _SCRAPS_ scores with respect to the lengths of the phonetic sequences. Figure 2: Sensitivity and robustness results. Top and middle charts represent the percentage of _drops_ (top) and _lifts_ (middle) in _SCRAPS_ score as a function of different levels of corruption in the input phonemes score. Bottom chart shows the robustness evaluation results using _Gaussian_ noise and spectrogram mix methods. Dashed lines represent the 95% confidence intervals. ### Examples Aside from quantitative results, qualitative ones are often informative. 
This section provides a set of cherry-picked examples where we use _SCRAPS_ to retrieve the spectrograms with highest score in our test set given an input transcription. For that, we score a random sample of the test set containing 7124 pairs (spectrogram, phonetic sequence) and calculate the pairwise dot product between acoustic and phonetic vectors. The resulting matrix has shape \(7124\times 7124\). Then we normalize that matrix and do a lookup to find the closest spectrograms given a sentence. In the paragraphs below, the first sentence highlighted in bold represents the query sentence, and the ones below are the transcriptions corresponding with the top 3 spectrograms with highest probability. The numerical probability value, rounded to the 2nd decimal place, is indicated between parentheses before the sentences. In the cases where one of the top 3 closest spectrograms corresponds to the actual matching spectrogram for the reference sentence, we have highlighted its transcription with a star (*). **Astro dance.** \(\ast\)(0.97) Astro dance. \(\ast\)(0.03) Astro, dance. \(\ast\)(0.00) Astro dance for me. **Alexens and this call on my phone.** \(\ast\)(1.00) Alexa, end this call on my phone. \(\ast\)(0.00) Alexa, impair my fire phone. \(\ast\)(0.00) Alexa, end this meeting for all. \(\ast\)**(e).ko, keep reading.** \(\ast\)(0.74) Echo, keep reading. \(\ast\)(0.25) Hey Alexa keep reading. \(\ast\)(0.01) Echo, keep going. **Alexens, kinconnect my fire phone.** \(\ast\)(0.88) Alexa, disconnect my fire phone. \(\ast\)(0.04) Alexa disconnect the phone. \(\ast\)(0.03) Alexa disconnect my cell phone. **Alexas, do i have any appointments on June nineteenth.** \(\ast\)(0.98) Alexa, do i have any appointments on June nineteenth. \(\ast\)(0.01) Alexa do i have any appointments on February twenty fifth. \(\ast\)(0.00) Alexa, do i have any appointments at seven fifty five AM. One of the most important observations by looking at the examples above is that in some cases the commas in the transcriptions (corresponding to pauses in the associated spectrograms, given that the commas were annotated through forced alignment [22]) have a notable effect in the _SCRAPS_ probability. We have analysed that effect in more detail and observed that when removing the commas from a pair that originally had a comma, the _SCRAPS_ score _drops_ a \(5.49\%\pm 0.38\%\) with respect to the original value (as per the 95% confidence interval over the mean, calculated with 768 samples). Through an informal listening test, we have confirmed that the commas usually correspond to long pauses in the spectrogram. ### Downstream application 1: pretrained phonetic embeddings for speech generation For this application, we have trained an autoregressive multispeaker text-to-speech model with attention, similar to Transformer TTS. We used a proprietary dataset with 180,000 hours of de-identified en-US recordings. Around 2/3 of the dataset is automatically annotated by an ASR model, the rest of utterances have been manually annotated. The evaluation set, used to generate the figure 4, is composed of 10,000 sentences, randomly held out from the training set. The baseline architecture consists of three modules: a phonetic encoder, an acoustic decoder and a speaker embedding. The acoustic decoder is autoregressive and attends to the output of the phonetic encoder and the speaker embedding. Then we have substituted the phonetic encoder by a _SCRAPS_ pretrained phonetic encoder (only the transformer backbone, not the _LSTM_ integrator). 
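As an illustration of this substitution, the following is a hedged PyTorch-style sketch; the class layout and attribute names are ours rather than the actual implementation, and the dimensions follow the setup in Section 4.1:

```python
import copy
import torch.nn as nn

class ScrapsPhoneticEncoder(nn.Module):
    """Stand-in for the SCRAPS phonetic encoder: a 3-layer, 8-head transformer
    backbone (256-dim) plus a 1024-unit LSTM integrator head."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=3)
        self.integrator = nn.LSTM(input_size=256, hidden_size=1024, batch_first=True)

def build_tts_text_encoder(trained_encoder: ScrapsPhoneticEncoder) -> nn.Module:
    # Only the transformer backbone is transplanted into the TTS model;
    # the LSTM integrator head is left out, as described above.
    return copy.deepcopy(trained_encoder.backbone)

tts_text_encoder = build_tts_text_encoder(ScrapsPhoneticEncoder())
```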
The evolution of the test metrics during training is represented in Figure 4. In the figure, we observe that both architectures reach a very similar final performance, but when using _SCRAPS_, the model converges much faster (the initial error drop happens 250,000 steps earlier in the case of the model that uses the pretrained _SCRAPS_). Apart from the clear faster convergence, this observation allows us to conclude that the _SCRAPS_ transformer latent space is rich enough to be used as a pretrained module to speed up Text-To-Speech (TTS) model training. Figure 4: Word Error Rate (WER) of the TTS models, relative to the WER over the test set recordings. The two lines represent how the intelligibility of the model improves as the model trains for the architectures with pretrained SCRAPS and with E2E phonetic encoder. The WER has been measured with a pretrained ASR model over a set of 512 sentences unseen during training, synthesized every 25,000 training steps. The WER represented in the figure is relative to the recordings WER. ### Downstream application 2: text-less intelligibility evaluation for voice conversion systems Intelligibility performance is a key metric for Voice Conversion (VC) systems [21]. Usually, WER is used to assess such performance. However, it is relatively costly: apart from ground-truth texts, one needs to transcribe VC audio using humans or ASR. In the case of using an ASR system, it needs to be trained on the target language with a sufficient amount of data in order to be robust enough to bypass the need for human annotations. For these reasons, there is a need for cheaper alternatives that, if possible, do not require any text. In this application we study the use of _SCRAPS_ as a text-less intelligibility metric. For that, we used a VC model trained over a proprietary dataset with 60k hours of de-identified en-US recordings, manually annotated by humans. Each of these recordings is paired with a synthetically generated audio of a target speaker, to train the many-to-one voice conversion model. These syntheses have been generated using a technique similar to the one described in [9]. As mentioned in Section 4.6, the evaluation set is composed of 3500 utterances, which were held out from the training set. Although _SCRAPS_ is trained to match a sequence of phonemes to the corresponding audio, at inference time it can also be used to compute the correspondence between two audio files without requiring any text. In this scenario, the _SCRAPS_ score is computed between vectors of synthetic audio (VC) and source audio (pre-conversion). We refer to this metric as _SCRAPS(VC, source)_. We evaluate the usability of _SCRAPS(VC, source)_ as an intelligibility metric by testing how it relates to WER computed over human annotations (h-WER). We run an en-US any-to-one VC system (similar to [7]) over a test set of 3532 recordings. Then, we compute h-WER by gathering human transcriptions of the VC audio and comparing them to the ground-truths. Finally, _SCRAPS(VC, source)_ is computed using only the VC and source (pre-conversion) audios. We compute the _Spearman_ correlation between _SCRAPS(VC, source)_ and h-WER (Table 2). For comparison, two alternative metrics are computed: (1) ASR-WER, where we use a proprietary ASR system to replace human transcriptions, and (2) _SCRAPS(VC, text)_, which uses ground-truth text instead of the input audio.
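A minimal sketch of how the text-less metric above can be computed follows; the encoder callables and the mel-extraction step are assumed to exist elsewhere, and the names are ours:

```python
import numpy as np

def scraps_score(u: np.ndarray, v: np.ndarray) -> float:
    """Dot product between two SCRAPS vectors, as used for scoring pairs."""
    return float(np.dot(u, v))

def scraps_vc_source(acoustic_encoder, mel_vc, mel_source) -> float:
    """SCRAPS(VC, source): encode both the converted audio and the source
    (pre-conversion) audio with the acoustic encoder, then score the pair.
    No transcription is required."""
    return scraps_score(acoustic_encoder(mel_vc), acoustic_encoder(mel_source))

def scraps_vc_text(acoustic_encoder, phonetic_encoder, mel_vc, phonemes) -> float:
    """SCRAPS(VC, text): the text-based variant, using ground-truth phonemes."""
    return scraps_score(acoustic_encoder(mel_vc), phonetic_encoder(phonemes))
```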
The proposed metric obtains a negative correlation of \(-0.1938\), which is significantly lower (in absolute values) than _SCRAPS(VC, text)_ (\(-0.3232\)) and ASR (\(0.6197\)). Since h-WER has a bounded distribution at 0, with a large proportion of samples (\(69\%\)) having exactly h-WER=0, while _SCRAPS_ scores are unbounded, we also compute correlation using only samples with h-WER\(>\)0 (third column). As can be observed, the differences in correlation between metrics are then smaller, with no perceivable difference between _SCRAPS(VC, source)_ and _SCRAPS(VC, text)_ scores. This suggests that _SCRAPS(VC, source)_ more often incorrectly assigns low score to high-intelligibility (h-WER=0) samples than _SCRAPS(VC, text)_, which could originate from _SCRAPS_ not being trained with pairs of audio and being less robust to slight differences between them, even if content in both is the same. This experiment shows there is a statistically significant correlation between SCRAPS scores and h-WER, although using a robust ASR is still the strongest automatic solution, despite the higher computational cost. We also test if _SCRAPS(VC, source)_ can be used to find high-WER VC samples, which could serve as a post-filter to remove low-intelligibility VC outputs. The results can be seen in Figure 5, where four h-WER thresholds were used to binarize samples into "low" and "high" h-WER samples. While _SCRAPS(VC, source)_ is not robust enough to classify h-WER\(>\)0 samples without supervision (AUC=\(0.580\)), it might be sufficient enough to act as a filter for removing the worst cases, _i.e._ h-WER\(>\)0.75 (AUC=\(0.830\)), thus reducing the computational cost of the filtering process. ### Ablation study In this subsection, we question some of the design decisions of the _SCRAPS_ architecture and test the impact of ablating them in terms of sensitivity and robustness. The aspects included in this ablation study are the following ones: * Head parameters sharing: this feature was included to try to bias the acoustic and phonetic transformer towards producing compatible representations. However, there is a risk that by doing that, the representation power of the encoders decreases. * _LSTM_ integrator modules: they may be superfluous. In such case, the output of the acoustic and phonetic encoders is the vector corresponding to the sequence length at the output of the transformer. To do these experiments we trained two new models with the described ablated components, for the same number of training steps as the full model described before. The results, included in Table 3, show that in the majority of metrics there is no significant difference between the baseline and the ablated versions. More specifically, from the table we can conclude that sharing parameters of the _LSTM_ integrators is safe, as it does not have a significant effect in the performance, compared with the baseline. However, the _SCRAPS_ version without integrators seems to be significantly less robust to _Gaussian_ noise than the rest, showing large differences in _AUC-ROC_ score. This may imply that the transformer is not large enough to handle noisy inputs and that the _LSTM_ module at the output of the transformer helps making the output more robust. ## 5 Discussion The _SCRAPS_ model has shown the ability to model a rich common latent space between phonetic and acoustic spaces, which is interesting for speech generation and recognition tasks. 
This task is not trivial, as it involves implicitly learning the underlying allophonic and phonetic joint probability distribution, which is the root difficulty of speech technologies, while learning to ignore unrelated information such as prosody, background, non-verbal sounds and other stationary aspects such as speaker id or different accents. We have shown that the _SCRAPS_ model has high sensitivity to small changes in phonetic sequences, and at the same time it is robust against high levels of noise, independently of its nature (stationary and non-stationary). One of the most surprising findings has been the robustness of _SCRAPS_ in the face of concurrent speech, showing an AUC-ROC of \(1.00\) up to a 60%-40% overlap of primary and secondary spectrograms, respectively (see Figure 2-bottom). \begin{table} \begin{tabular}{l|l l} \hline \hline **Metric** & **Correlation** & **Correlation** \\ & **(all)** & **(h-WER\(>\)0)** \\ \hline ASR-WER & 0.6197 & 0.7064 \\ _SCRAPS(VC, text)_ & -0.3232 & -0.4812 \\ _SCRAPS(VC, source)_ & -0.1938 & -0.4740 \\ \hline \hline \end{tabular} \end{table} Table 2: Spearman correlation between automatic intelligibility metrics and human WER (h-WER). Results on all utterances (second column) and utterances with h-WER\(>\)0 (third column) are shown. Figure 5: ROC of h-WER\(>\)T classification using SCRAPS(VC, source) scores for different thresholds T. The _SCRAPS_ model can be seen as an implicit speech generation-recognition system. Given a mel-spectrogram, one can search for the sentence that best matches that spectrogram, turning the system into an automatic speech recognizer. Conversely, given a sentence, one could look for the spectrogram that best matches the input sentence, thus converting the system into a speech generator. Obviously, the search processes make these approaches very computationally expensive, but this interpretation of _SCRAPS_ helps devise further applications. Finally, on the practical side, _SCRAPS_ delivers its value when used as a tool for downstream applications. There are many potential use cases for this model, two of which we explored in Sections 4.7 and 4.8. Further research lines could explore other applications. Below, we state some of the most interesting examples, but there may be many more. * Transcription quality assurance: machine learning models are sensitive to the quality of the training data. In speech generation and recognition applications, the datasets often contain transcription problems at different levels: from mistaken words to poor annotation (e.g. not containing hesitations or background events). _SCRAPS_ can be trained with high quality data and used to score other datasets to spot transcription issues. * Grapheme to phoneme mapping enhancement: speech datasets are usually composed of pairs of audio clips and sentences. The sentences are mapped to the phonetic sequences using grapheme to phoneme (G2P) systems, which often rely on a word-level dictionary of pronunciations [30]. Although this approach usually works in practice, it is prone to errors when a speaker in the data set uses non-normative pronunciations, or in the presence of homographs [23]. _SCRAPS_ could potentially be used to find the most probable phoneme that best matches the allophones present in the associated recording, allowing for a higher quality automatic annotation and hence a potentially better model.
* Intelligibility optimization: intelligibility is an important metric for speech synthesis systems, although it is not the only one (other examples are signal quality, speaker id match or naturalness). Usually, these models do not directly optimize these objectives, but rather others, such as maximum likelihood [3], that indirectly optimize the previously mentioned ones. It has been repeatedly shown that those indirect objectives do not always correlate with the perceptual ones [12]. The _SCRAPS_ model, as a consequence of being a fully differentiable system, offers the possibility of being used as a loss function to directly optimize the match between the target transcription and the content of the spectrogram, or in other words, to directly maximize intelligibility. This could be achieved by plugging the _SCRAPS_ encoders (frozen) into a speech generation architecture and maximizing the dot product between the embeddings of the target phonetic sequence and the synthesized spectrogram. This could be done in combination with the traditional objectives. Alternatively, one could also use _SCRAPS_ to monitor the evolution of intelligibility during training in order to, for instance, implement early stopping mechanisms. ## 6 Conclusions We showed how, by using a _CLIP_-like architecture, we are able to model a latent space that connects the phonetic and acoustic domains. The experiments provide evidence that the embeddings are robust to different kinds of noise, with no degradation up to 60% of corruption when using _Gaussian_ noise and 40% when dealing with concurrent speech. At the same time, we observed that _SCRAPS_ reacts by decreasing the score in \(91.06\%\) of the examples where at least \(20\%\) of the phonemes are corrupted. Finally, we provided two examples showing that _SCRAPS_ is a useful component in several downstream applications, and discussed future research lines.
2305.13122
Policy Representation via Diffusion Probability Model for Reinforcement Learning
Popular reinforcement learning (RL) algorithms tend to produce a unimodal policy distribution, which weakens the expressiveness of complicated policies and degrades the ability to explore. The diffusion probability model is powerful at learning complicated multimodal distributions and has shown promising potential applications to RL. In this paper, we formally build a theoretical foundation of policy representation via the diffusion probability model and provide practical implementations of diffusion policy for online model-free RL. Concretely, we characterize the diffusion policy as a stochastic process, which is a new approach to representing a policy. Then we present a convergence guarantee for diffusion policy, which provides a theory to understand the multimodality of diffusion policy. Furthermore, we propose DIPO, an implementation of model-free online RL with a DIffusion POlicy. To the best of our knowledge, DIPO is the first algorithm to solve model-free online RL problems with the diffusion model. Finally, extensive empirical results show the effectiveness and superiority of DIPO on the standard continuous control MuJoCo benchmark.
Long Yang, Zhixiong Huang, Fenghao Lei, Yucun Zhong, Yiming Yang, Cong Fang, Shiting Wen, Binbin Zhou, Zhouchen Lin
2023-05-22T15:23:41Z
http://arxiv.org/abs/2305.13122v1
# Policy Representation via Diffusion Probability Model for Reinforcement Learning 1 ###### Abstract Popular reinforcement learning (RL) algorithms tend to produce a unimodal policy distribution, which weakens the expressiveness of complicated policy and decays the ability of exploration. The diffusion probability model is powerful to learn complicated multimodal distributions, which has shown promising and potential applications to RL. In this paper, we formally build a theoretical foundation of policy representation via the diffusion probability model and provide practical implementations of diffusion policy for online model-free RL. Concretely, we character diffusion policy as a stochastic process induced by stochastic differential equations, which is a new approach to representing a policy. Then we present a convergence guarantee for diffusion policy, which provides a theory to understand the multimodality of diffusion policy. Furthermore, we propose the DIPO, which implements model-free online RL with **D**ffusion **PO**licy. To the best of our knowledge, DIPO is the first algorithm to solve model-free online RL problems with the diffusion model. Finally, extensive empirical results show the effectiveness and superiority of DIPO on the standard continuous control Mujoco benchmark. ###### Contents * 1 Introduction * 1.1 Our Main Work * 1.2 Paper Organization * 2 Reinforcement Learning * 3 Motivation: A View from Policy Representation * 3.1 Policy Representation for Reinforcement Learning * 3.2 Diffusion Model is Powerful to Policy Representation * 4 Diffusion Policy * 4.1 Stochastic Dynamics of Diffusion Policy * 4.2 Exponential Integrator Discretization for Diffusion Policy * 4.3 Convergence Analysis of Diffusion Policy * 5 DIPO: Implementation of Diffusion Policy for Model-Free Online RL * 5.1 Training Loss of DIPO * 5.2 Playing Action of DIPO * 5.3 Policy Improvement of DIPO * 6 Related Work * 6.1 Diffusion Models for Reinforcement Learning * 6.2 Generative Models for Policy Learning * 7 Experiments * 7.1 Comparative Evaluation and Illustration * 7.2 State-Visiting Visualization * 7.3 Ablation Study * 8 Conclusion * A Review on Notations * B Auxiliary Results * B.1 Diffusion Probability Model (DPM). 
* B.2 Transition Probability for Ornstein-Uhlenbeck Process * B.3 Exponential Integrator Discretization * B.4 Fokker-Planck Equation * B.5 Donsker-Varadhan Representation for KL-divergence * B.6 Some Basic Results for Diffusion Policy * C Implementation Details of DIPO * C.1 DIPO: Model-Free Learning with Diffusion Policy * C.2 Loss Function of DIPO * C.3 Playing Actions of DIPO * D Time Derivative of KL Divergence Between Diffufusion Policy and True Reverse Process * D.1 Time Derivative of KL Divergence at Reverse Time \(k=0\) * D.2 Auxiliary Results For Reverse Time \(k=0\) * D.3 Proof for Result at Reverse Time \(k=0\) * D.4 Proof for Result at Arbitrary Reverse Time \(k\) * E Proof of Theorem 4.3 * F Additional Details * F.1 Proof of Lemma F.1 * F.2 Proof of Lemma D.7 * G Details and Discussions for multimodal Experiments * G.1 Multimodal Environment * G.2 Plots Details of Visualization * G.3 Results Report * H Additional Experiments * H.1 Hyper-parameters for MuJoCo * H.2 Additional Tricks for Implementation of DIPO * H.3 Details and Additional Reports for State-Visiting * H.4 Ablation Study on MLP and VAE Introduction Existing policy representations (e.g., Gaussian distribution) for reinforcement learning (RL) tend to output a unimodal distribution over the action space, which may be trapped in a locally optimal solution due to its limited expressiveness of complex distribution and may result in poor performance. Diffusion probability model (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021) is powerful to learn complicated multimodal distributions, which has been applied to RL tasks (e.g., (Ajay et al., 2023; Reuss et al., 2023; Chi et al., 2023)). Although the diffusion model (or diffusion policy) shows its promising and potential applications to RL tasks, previous works are all empirical or only consider offline RL settings. This raises some fundamental questions: How to character diffusion policy? How to show the expressiveness of diffusion policy? How to design a diffusion policy for online model-free RL? Those are the focuses of this paper. ### Our Main Work In this paper, we mainly consider diffusion policy from the next three aspects. **Charactering Diffusion Policy as Stochastic Process.** We formulate diffusion policy as a stochastic process that involves two processes induced by stochastic differential equations (SDE), see Figure 1, where the forward process disturbs the input policy \(\pi\) to noise, then the reverse process infers the policy \(\pi\) according to a corresponding reverse SDE. Although this view is inspired by the score-based generative model (Song et al., 2021), we provide a brand new approach to represent a policy: via a stochastic process induced by SDE, neither via value function nor parametric function. Under this framework, the diffusion policy is flexible to generate actions according to numerical SDE solvers. **Convergence Analysis of Diffusion Policy.** Under mild conditions, Theorem 4.3 presents a theoretical convergence guarantee for diffusion policy. The result shows that if the score estimator is sufficiently accurate, then diffusion policy efficiently infers the actions from any realistic policy that generates the training data. It is noteworthy that Theorem 4.3 also shows that diffusion policy is powerful to represent a multimodal distribution, which leads to sufficient exploration and better reward performance, Section 3 and Appendix G provide more discussions with numerical verifications for this view. 
**Diffusion Policy for Model-free Online RL.** Recall the standard model-free online RL framework, see Figure 2, where the policy improvement produces a new policy \(\pi^{{}^{\prime}}\succeq\pi\) according to the data \(\mathcal{D}\). However, Theorem 4.3 illustrates that the diffusion policy only fits the distribution of the policy \(\pi\) but does not improve the policy \(\pi\). We can not embed the diffusion policy into the standard RL training framework, i.e., the policy improvement in Figure 2 can not be naively replaced by diffusion policy. To apply diffusion policy to model-free online RL task, we propose the DIPO algorithm, see Figure 3. The proposed DIPO considers a novel way for policy improvement, we call it **action gradient** that updates each \(\mathbf{a}_{t}\in\mathcal{D}\) along the gradient field (over the action space) of state-action value: \[\mathbf{a}_{t}\leftarrow\mathbf{a}_{t}+\eta\nabla_{\mathbf{a}}Q_{\pi}( \mathbf{s}_{t},\mathbf{a}_{t}),\] where for a given state \(\mathbf{s}\), \(Q_{\pi}(\mathbf{s},\mathbf{a})\) measures the reward performance over the action space \(\mathcal{A}\). Thus, DIPO improves the policy according to the actions toward to better reward performance. To the best of our knowledge, this paper first presents the idea of action gradient, which provides an efficient way to make it possible to design a diffusion policy for online RL. ### Paper Organization Section 2 presents the background of reinforcement learning. Section 3 presents our motivation from the view of policy representation. Section 4 presents the theory of diffusion policy. Section 5 presents the practical implementation of diffusion policy for model-free online reinforcement learning. Section 7 presents the experiment results. Figure 3: Framework of DIPO: Implementation for Model-free Online RL with **D**Iffusion **PO**licy. Figure 2: Standard Training Framework for Model-free Online RL. ## 2 Reinforcement Learning Reinforcement learning (RL)[Sutton and Barto, 2018] is formulated as _Markov decision process_\(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathbb{P}(\cdot),r,\gamma,d_{0})\), where \(\mathcal{S}\) is the state space; \(\mathcal{A}\subset\mathbb{R}^{p}\) is the continuous action space; \(\mathbb{P}(\mathbf{s}^{{}^{\prime}}|\mathbf{s},\mathbf{a})\) is the probability of state transition from \(\mathbf{s}\) to \(\mathbf{s}^{{}^{\prime}}\) after playing \(\mathbf{a}\); \(r(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})\) denotes the reward that the agent observes when the state transition from \(\mathbf{s}\) to \(\mathbf{s}^{{}^{\prime}}\) after playing \(\mathbf{a}\); \(\gamma\in(0,1)\) is the discounted factor, and \(d_{0}(\cdot)\) is the initial state distribution. A policy \(\pi\) is a probability distribution defined on \(\mathcal{S}\times\mathcal{A}\), and \(\pi(\mathbf{a}|\mathbf{s})\) denotes the probability of playing \(\mathbf{a}\) in state \(\mathbf{s}\). Let \(\{\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},r(\mathbf{s}_{t+1}|\mathbf{ s}_{t},\mathbf{a}_{t})\}_{t\geq 0}\sim\pi\) be the trajectory sampled by the policy \(\pi\), where \(\mathbf{s}_{0}\sim d_{0}(\cdot)\), \(\mathbf{a}_{t}\sim\pi(\cdot|\mathbf{s}_{t})\), \(\mathbf{s}_{t+1}\sim\mathbb{P}(\cdot|\mathbf{s}_{t},\mathbf{a}_{t})\). 
The goal of RL is to find a policy \(\pi\) such that \(\pi_{*}=:\arg\max_{\pi}\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r( \mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})\right].\) ## 3 Motivation: A View from Policy Representation In this section, we clarify our motivation from the view of policy representation: diffusion model is powerful to policy representation, which leads to sufficient exploration and better reward performance. ### Policy Representation for Reinforcement Learning Value function and parametric function based are the main two approaches to represent policies, while diffusion policy expresses a policy via a stochastic process (shown in Figure 1) that is essentially difficult to the previous representation. In this section, we will clarify this view. Additionally, we will provide an empirical verification with a numerical experiment. #### 3.1.1 Policy Representation via Value Function A typical way to represent policy is _\(\epsilon\)-greedy policy_[Sutton and Barto, 1998] or _energy-based policy_[Sallans and Hinton, 2004, Peters et al., 2010], \[\pi(\mathbf{a}|\mathbf{s})=\begin{cases}\arg\max_{\mathbf{a}^{{}^{\prime}} \in\mathcal{A}}Q_{\pi}(\mathbf{s},\mathbf{a}^{{}^{\prime}})&\text{w.p. }1-\epsilon;\\ \text{randomly play }\mathbf{a}\in\mathcal{A}&\text{w.p. }\epsilon;\end{cases}\text{ or }\pi(\mathbf{a}|\mathbf{s})=\frac{\exp\left\{Q_{\pi}(\mathbf{s}, \mathbf{a})\right\}}{Z_{\pi}(\mathbf{s})}, \tag{1}\] where \[Q_{\pi}(\mathbf{s},\mathbf{a})=:\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t}r(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})|\mathbf{s}_{0}= \mathbf{s},\mathbf{a}_{0}=\mathbf{a}\right],\] the normalization term \(Z_{\pi}(\mathbf{s})=\int_{\mathbb{R}^{p}}\exp\left\{Q_{\pi}(\mathbf{s}, \mathbf{a})\right\}\mathrm{d}\mathbf{a}\), and "w.p." is short for "with probability". The representation (1) illustrates a connection between policy and value function, which is widely used in _value-based methods_ (e.g., SASRA [Rummery and Niranjan, 1994], Q-Learning [Watkins, 1989], DQN [Mnih et al., 2015]) and _energy-based methods_ (e.g., SQL [Schulman et al., 2017a, Haarnoja et al., 2017, 2018a], SAC [Haarnoja et al., 2018b]). #### 3.1.2 Policy Representation via Parametric Function Instead of consulting a value function, the parametric policy is to represent a policy by a parametric function (e.g., neural networks), denoted as \(\pi_{\boldsymbol{\theta}}\), where \(\boldsymbol{\theta}\) is the parameter. Policy gradient theorem [Sutton et al., 1999, Silver et al., 2014] plays a center role to learn \(\boldsymbol{\theta}\), which is fundamental in modern RL (e.g., TRPO [Schulman et al., 2015], DDPG [Lillicrap et al., 2016], PPO [Schulman et al., 2017b], IMPALA [Espeholt et al., 2018], et al). #### 3.1.3 Policy Representation via Stochastic Process It is different from both value-based and parametric policy representation; the diffusion policy (see Figure 1) generates an action via a stochastic process, which is a fresh view for the RL community. The diffusion model with RL first appears in (Janner et al., 2022), where it proposes the _diffuser_ that plans by iteratively refining trajectories, which is an essential offline RL method. Ajay et al. (2023); Reuss et al. (2023) model a policy as a return conditional diffusion model, Chen et al. (2023); Wang et al. (2023); Chi et al. (2023) consider to generate actions via diffusion model. The above methods are all to solve offline RL problems. 
To the best of our knowledge, our proposed method is the first diffusion approach to online model-free reinforcement learning.

### Diffusion Model is Powerful to Policy Representation

This section shows that the diffusion model is powerful for representing a complex policy distribution, from the following two aspects: 1) fitting a multimodal policy distribution is efficient for exploration; 2) empirical verification with a numerical experiment.

The Gaussian policy, which is widely used in RL, is a unimodal distribution; it plays actions around the region of its mean center with a higher probability, i.e., the red region \(A\) in Figure 4. The unimodal policy weakens the expressiveness of complicated policies and decays the agent's ability to explore the environment. A multimodal policy, in contrast, plays actions among the different regions \(A_{1}\cup A_{2}\cup A_{3}\). Compared to the unimodal policy, the multimodal policy is powerful for exploring the unknown world, making the agent understand the environment efficiently and make a more reasonable decision.

Figure 4: Unimodal Distribution vs Multimodal Distribution.

Figure 5: Policy representation comparison of different policies on the multimodal environment.

We compare the ability of policy representation among SAC, TD3 (Fujimoto et al., 2018), PPO and diffusion policy on the "multi-goal" environment (Haarnoja et al., 2017) (see Figure 5), where the \(x\)-axis and \(y\)-axis are 2D states, and the four red dots denote the goal states at \((0,5)\), \((0,-5)\), \((5,0)\) and \((-5,0)\), placed symmetrically. A reasonable policy should be able to take actions toward those four goal positions with the same probability, which characterizes the capacity of exploration of a policy to understand the environment. In Figure 5, the red arrowheads represent the directions of actions, and the length of the red arrowheads represents the size of the actions. Results show that the diffusion policy accurately captures the multimodal distribution landscape, while SAC, TD3, and PPO are not well suited to capture such multimodality. From the distribution of action direction and length, we also see that the diffusion policy keeps a more gradual and steady action size than SAC, TD3, and PPO when fitting the multimodal distribution. For more details about 2D/3D plots, the environment, comparisons, and discussions, please refer to Appendix G.

## 4 Diffusion Policy

In this section, we present the details of the diffusion policy from the following three aspects: its stochastic dynamic equation (shown in Figure 1), its discretization implementation, and a finite-time analysis of its performance for policy representation.

### Stochastic Dynamics of Diffusion Policy

Recall from Figure 1 that the diffusion policy contains two processes: a forward process and a reverse process. We present their dynamics in this section.

**Forward Process.** To simplify the expression, we only consider \(g(t)=\sqrt{2}\), which is parallel to the general setting in Figure 1. For any given state \(\mathbf{s}\), the forward process produces a sequence \(\{(\bar{\mathbf{a}}_{t}|\mathbf{s})\}_{t=0:T}\) starting with \(\bar{\mathbf{a}}_{0}\sim\pi(\cdot|\mathbf{s})\), and it follows the Ornstein-Uhlenbeck process (also known as the Ornstein-Uhlenbeck SDE),

\[\mathrm{d}\bar{\mathbf{a}}_{t}=-\bar{\mathbf{a}}_{t}\mathrm{d}t+\sqrt{2}\mathrm{d}\mathbf{w}_{t}. \tag{2}\]

Let \(\bar{\mathbf{a}}_{t}\sim\bar{\pi}_{t}(\cdot|\mathbf{s})\) be the evolution distribution along the Ornstein-Uhlenbeck flow (2).
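To make the forward dynamics (2) concrete, a minimal numerical sketch is given below. This is an illustration rather than the authors' code: the horizon, step count, Euler-Maruyama discretization, and the toy bimodal "policy" are all assumptions made for demonstration.

```python
# A short numerical illustration of the forward OU noising (Eq. 2).
import numpy as np

def forward_ou_noising(a0, T=5.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of  d a_t = -a_t dt + sqrt(2) dW_t,  started at a0 ~ pi(.|s)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    a = a0.copy()
    for _ in range(n_steps):
        # drift -a_t, diffusion coefficient sqrt(2)
        a = a + (-a) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(a.shape)
    return a

# Toy bimodal policy over a 1D action space: two narrow modes at -2 and +2.
a0 = np.concatenate([np.random.normal(-2.0, 0.1, (500, 1)),
                     np.random.normal(+2.0, 0.1, (500, 1))])
aT = forward_ou_noising(a0)
print(aT.mean(), aT.std())   # approximately 0 and 1
```

The terminal samples are approximately standard Gaussian, which is exactly the role of the forward process: it transports any (possibly multimodal) input policy towards the Gaussian prior.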
According to Proposition B.1 (see Appendix B.2), we know the conditional distribution of \(\bar{\mathbf{a}}_{t}|\bar{\mathbf{a}}_{0}\) is Gaussian, \[\bar{\mathbf{a}}_{t}|\bar{\mathbf{a}}_{0}\sim\mathcal{N}\left( \mathrm{e}^{-t}\bar{\mathbf{a}}_{0},\left(1-\mathrm{e}^{-2t}\right)\mathbf{I }\right). \tag{3}\] That implies the forward process (2) transforms policy \(\pi(\cdot|\mathbf{s})\) to the Gaussian noise \(\mathcal{N}(\mathbf{0},\mathbf{I})\). **Reverse Process.** For any given state \(\mathbf{s}\), if we reverse the stochastic process \(\{(\bar{\mathbf{a}}_{t}|\mathbf{s})\}_{t=0:T}\), then we obtain a process that transforms noise into the policy \(\pi(\cdot|\mathbf{s})\). Concretely, we model the policy as the process \(\{(\hat{\bar{a}}_{t}|\mathbf{s})\}_{t=0:T}\) according to the next Ornstein-Uhlenbeck process (running forward in time), \[\mathrm{d}\hat{\bar{a}}_{t}=\left(\hat{\bar{\mathbf{a}}}_{t}+2 \boldsymbol{\nabla}\log p_{T-t}(\hat{\bar{\mathbf{a}}}_{t})\right)\mathrm{d}t+ \sqrt{2}\mathrm{d}\mathbf{w}_{t}, \tag{4}\] where \(p_{t}(\cdot)\) is the probability density function of \(\bar{\pi}_{t}(\cdot|\mathbf{s})\). Furthermore, according to (Anderson, 1982), with an initial action \(\hat{\bar{\mathbf{a}}}_{0}\sim\bar{\pi}_{T}(\cdot|\mathbf{s})\), the reverse process \(\{\hat{\bar{a}}_{t}\}_{t=0:T}\) shares the same distribution as the time-reversed version of the forward process \(\{\bar{\mathbf{a}}_{T-t}\}_{t=0:T}\). That also implies for all \(t=0,1,\cdots,T\), \[\hat{\pi}_{t}(\cdot|\mathbf{s})=\bar{\pi}_{T-t}(\cdot|\mathbf{s}), \quad\text{if }\hat{\mathrm{a}}_{0}\sim\bar{\pi}_{T}(\cdot|\mathbf{s}). \tag{5}\] **Score Matching.** The score function \(\boldsymbol{\nabla}\log p_{T-t}(\cdot)\) defined in (4) is not explicit, we consider an estimator \(\mathbf{\hat{S}}(\cdot,\mathbf{s},T-t)\) to approximate the score function at a given state \(\mathbf{s}\). 
We consider the next problem,

\[\mathbf{\hat{S}}(\cdot,\mathbf{s},T-t)=:\arg\min_{\hat{\boldsymbol{s}}(\cdot)\in\mathcal{F}}\mathbb{E}_{\mathbf{a}\sim\bar{\pi}_{t}(\cdot|\mathbf{s})}\left[\left\|\hat{\mathbf{s}}(\mathbf{a},\mathbf{s},t)-\boldsymbol{\nabla}\log p_{T-t}(\mathbf{a})\right\|_{2}^{2}\right], \tag{6}\]

where \(\mathcal{F}\) denotes the class of candidate score estimators.

### Exponential Integrator Discretization for Diffusion Policy

Discretizing the reverse process (4) on the grid \(t_{k}=hk\) (\(k=0,1,\cdots,K\)) with step-size \(h\), using an exponential integrator and the score estimator \(\mathbf{\hat{S}}\) in place of \(\boldsymbol{\nabla}\log p_{T-t}(\cdot)\), yields the iteration (9) that generates actions; the resulting sampling procedure is summarized in Algorithm 1, and the agent plays the final iterate \(\hat{\mathbf{a}}_{t_{K}}\) as its action.

### Convergence Analysis of Diffusion Policy

In this section, we present the convergence analysis of the diffusion policy. We need the following notations and assumptions before we proceed with the analysis.
Let \(\rho(\mathbf{x})\) and \(\mu(\mathbf{x})\) be two smooth probability density functions on the space \(\mathbb{R}^{p}\), the Kullback-Leibler (KL) divergence and relative Fisher information (FI) from \(\mu(\mathbf{x})\) to \(\rho(\mathbf{x})\) are defined as follows, \[\mathrm{KL}(\rho\|\mu)=\int_{\mathbb{R}^{p}}\rho(\mathbf{x})\log\frac{\rho( \mathbf{x})}{\mu(\mathbf{x})}\mathrm{d}\mathbf{x},\ \mathrm{FI}(\rho\|\mu)=\int_{\mathbb{R}^{p}}\rho( \mathbf{x})\left\|\mathbf{\nabla}\log\left(\frac{\rho(\mathbf{x})}{\mu(\mathbf{x}) }\right)\right\|_{2}^{2}\mathrm{d}\mathbf{x}.\] **Assumption 4.1** (Lipschitz Score Estimator and Policy).: _The score estimator is \(L_{s}\)-Lipschitz over action space \(\mathcal{A}\), and the policy \(\pi(\cdot|\mathbf{s})\) is \(L_{p}\)-Lipschitz over action space \(\mathcal{A}\), i.e., for any \(\mathbf{a}\), \(\mathbf{a}^{{}^{\prime}}\in\mathcal{A}\), the following holds,_ \[\|\hat{\mathbf{S}}(\mathbf{a},\mathbf{s},t)-\hat{\mathbf{S}}(\mathbf{a}^{{}^{ \prime}},\mathbf{s},t)\|\leq L_{s}\|\mathbf{a}-\mathbf{a}^{{}^{\prime}}\|,\ \|\mathbf{\nabla}\log\pi(\mathbf{a}|\mathbf{s})-\mathbf{\nabla}\log\pi(\mathbf{a}^{{}^{ \prime}}|\mathbf{s})\|\leq L_{p}\|\mathbf{a}-\mathbf{a}^{{}^{\prime}}\|.\] **Assumption 4.2** (Policy with \(\nu\)-LSI Setting).: _The policy \(\pi(\cdot|\mathbf{s})\) satisfies \(\nu\)-Log-Sobolev inequality (LSI) that defined as follows, there exists constant \(\nu>0\), for any probability distribution \(\mu(\mathbf{x})\) such that_ \[\mathrm{KL}(\mu\|\pi)\leq\frac{1}{2\nu}\mathrm{FI}(\mu\|\pi).\] Assumption 4.1 is a standard setting for Langevin-based algorithms (e.g., (Wibisono and Yang, 2022; Vempala and Wibisono, 2019)), and we extend it with RL notations. Assumption 4.2 presents the policy distribution class that we are concerned, which contains many complex distributions that are not restricted to be log-concave, e.g. any slightly smoothed bound distribution admits the condition (see (Ma et al., 2019, Proposition 1)). **Theorem 4.3** (Finite-time Analysis of Diffusion Policy).: _For a given state \(\mathbf{s}\), let \(\{\bar{\pi}_{t}(\cdot|\mathbf{s})\}_{t=0:T}\) and \(\{\bar{\pi}_{t}(\cdot|\mathbf{s})\}_{t=0:T}\) be the distributions along the flow (2) and (4) correspondingly, where \(\{\bar{\pi}_{t}(\cdot|\mathbf{s})\}_{t=0:T}\) starts at \(\bar{\pi}_{0}(\cdot|\mathbf{s})=\pi(\cdot|\mathbf{s})\) and \(\{\bar{\pi}_{t}(\cdot|\mathbf{s})\}_{t=0:T}\) starts at \(\bar{\pi}_{0}(\cdot|\mathbf{s})=\bar{\pi}_{T}(\cdot|\mathbf{s})\). Let \(\hat{\pi}_{k}(\cdot|\mathbf{s})\) be the distribution of the iteration (9) at the \(k\)-th time \(t_{k}=hk\), i.e., \(\hat{\mathbf{a}}_{t_{k}}\sim\hat{\pi}_{k}(\cdot|\mathbf{s})\) denotes the diffusion policy (see Algorithm 1) at the time \(t_{k}=hk\). Let \(\{\widehat{\pi}_{k}(\cdot|\mathbf{s})\}_{k=0:K}\) be starting at \(\hat{\pi}_{0}(\cdot|\mathbf{s})=\mathcal{N}(\mathbf{0},\mathbf{I})\), under Assumption 4.1 and 4.2, let the reverse length \(K\geq T\cdot\max\left\{\tau_{0}^{-1},\mathsf{T}_{0}^{-1},12L_{s},\nu\right\}\), where constants \(\tau_{0}\) and \(\mathsf{T}_{0}\) will be special later. 
Then the KL-divergence between the diffusion policy \(\hat{\pi}_{K}(\cdot|\mathbf{s})\) and the input policy \(\pi(\cdot|\mathbf{s})\) is upper-bounded as follows,_

\[\mathrm{KL}\big(\hat{\pi}_{K}(\cdot|\mathbf{s})\|\pi(\cdot|\mathbf{s})\big)\leq\underbrace{\mathrm{e}^{-\frac{9}{4}\nu hK}\mathrm{KL}\big(\mathcal{N}(\mathbf{0},\mathbf{I})\|\pi(\cdot|\mathbf{s})\big)}_{\text{convergence of forward process (2)}}+\text{(discretization error)}+\text{(score matching error)}.\]

The bound collects three sources of error. The first term represents the convergence of the forward process (2): it decays exponentially in the reverse length \(K\) and reflects how quickly the forward process drives the input policy towards the Gaussian prior \(\mathcal{N}(\mathbf{0},\mathbf{I})\). The second error is sourced from the discretization (9) of the reverse process, which depends on the step-size \(h\), the dimension \(p\), the Lipschitz constant \(L_{s}\), and on other parameters. The third error is sourced from score matching (7), which represents how close the score estimator \(\mathbf{\hat{S}}\) is to the score function \(\boldsymbol{\nabla}\log p_{T-t}(\cdot)\) defined in (4). That implies that, for the practical implementation, the error from score matching can be made sufficiently small if we find a good score estimator \(\mathbf{\hat{S}}\). Furthermore, for any \(\epsilon>0\), if we find a score estimator that makes the score matching error satisfy \(\epsilon_{\text{score}}<\frac{1}{20}\epsilon\), choose the step-size \(h=\mathcal{O}\left(\frac{\epsilon\sqrt{\epsilon}}{pL_{s}}\right)\), and choose the reverse length \(K=\frac{9}{4\nu h}\log\frac{3\mathrm{KL}(\mathcal{N}(\mathbf{0},\mathbf{I})\|\pi(\cdot|\mathbf{s}))}{\epsilon}\), then Theorem 4.3 implies that the output of the diffusion policy \(\hat{\pi}_{K}(\cdot|\mathbf{s})\) is sufficiently close to the input policy \(\pi(\cdot|\mathbf{s})\), in the sense that \(\mathrm{KL}(\hat{\pi}_{K}(\cdot|\mathbf{s})\|\pi(\cdot|\mathbf{s}))\leq\epsilon\).

## 5 DIPO: Implementation of Diffusion Policy for Model-Free Online RL

In this section, we present the details of DIPO, which is an implementation of **DI**ffusion **PO**licy for model-free reinforcement learning. According to Theorem 4.3, the diffusion policy only fits the current policy \(\pi\) that generates the training data (denoted as \(\mathcal{D}\)), but it does not improve the policy \(\pi\). Unlike traditional policy-based RL algorithms, we cannot improve the policy according to the policy gradient theorem, since the diffusion policy is not a parametric function but learns a policy via a stochastic process. Thus, we need a new way to implement policy improvement, which is nontrivial. We have presented the framework of DIPO in Figure 3 and shown the key steps of DIPO in Algorithm 2. For the detailed implementation, please refer to Algorithm 3 (see Appendix C).

### Training Loss of DIPO

It is intractable to directly apply the formulation (7) to estimate the score function, since \(\boldsymbol{\nabla}\log p_{t}(\cdot)=\boldsymbol{\nabla}\log\bar{\pi}_{t}(\cdot|\mathbf{s})\) is unknown, which in turn is because the initial distribution \(\bar{\mathbf{a}}_{0}\sim\pi(\cdot|\mathbf{s})\) is unknown. According to denoising score matching (Vincent, 2011; Hyvarinen, 2005), a practical way is to solve the next optimization problem (10).
For any given \(\mathbf{s}\in\mathcal{S}\), \[\min_{\boldsymbol{\phi}}\mathcal{L}(\boldsymbol{\phi})=\min_{\bar{\mathbf{s}}_ {\boldsymbol{\phi}}\in\mathcal{F}}\int_{0}^{T}\omega(t)\mathbb{E}_{\bar{ \mathbf{a}}_{0}\sim\pi(\cdot|\mathbf{s})}\mathbb{E}_{\bar{\mathbf{a}}_{t}| \bar{\mathbf{a}}_{0}}\left[\left\|\hat{\mathbf{s}}_{\boldsymbol{\phi}}(\bar{ \mathbf{a}}_{t},\mathbf{s},t)-\boldsymbol{\nabla}\log\varphi_{t}(\bar{\mathbf{ a}}_{t}|\bar{\mathbf{a}}_{0})\right\|_{2}^{2}\right]\mathrm{d}t, \tag{10}\] where \(\omega(t):[0,T]\rightarrow\mathbb{R}_{+}\) is a positive weighting function; \(\varphi_{t}(\bar{\mathbf{a}}_{t}|\bar{\mathbf{a}}_{0})=\mathcal{N}\left( \mathrm{e}^{-t}\bar{\mathbf{a}}_{0},\left(1-\mathrm{e}^{-2t}\right)\mathbf{I}\right)\) denotes the transition kernel of the forward process (3); \(\mathbb{E}_{\bar{\mathbf{a}}_{t}|\bar{\mathbf{a}}_{0}}[\cdot]\) denotes the expectation with respect to \(\varphi_{t}(\bar{\mathbf{a}}_{t}|\bar{\mathbf{a}}_{0})\); and \(\boldsymbol{\phi}\) is the parameter needed to be learned. Then, according to Theorem C.1 (see Appendix C.2), we rewrite the objective (10) as follows, \[\mathcal{L}(\boldsymbol{\phi})=\mathbb{E}_{k\sim\mathcal{U}(\{1,2,\cdots,K\}), \mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),(\mathbf{s},\mathbf{a})\sim \mathcal{D}}\left[\left\|\mathbf{z}-\boldsymbol{\epsilon}_{\boldsymbol{\phi}} \left(\sqrt{\bar{\alpha}_{k}}\mathbf{a}+\sqrt{1-\bar{\alpha}_{k}}\mathbf{z}, \mathbf{s},k\right)\|_{2}^{2}\right], \tag{11}\] where \(\mathcal{U}(\cdot)\) denotes uniform distribution, \[\boldsymbol{\epsilon}_{\boldsymbol{\phi}}\left(\cdot,\cdot,k\right)=-\sqrt{1- \bar{\alpha}_{k}}\hat{\mathbf{s}}_{\boldsymbol{\phi}}\left(\cdot,\cdot,T-t_{ k}\right),\] and \(\bar{\alpha}_{k}\) will be special. The objective (11) provides a way to learn \(\boldsymbol{\phi}\) from samples; see line 14-16 in Algorithm 2. ### Playing Action of DIPO Replacing the score estimator \(\hat{\mathbf{S}}\) (defined in Algorithm 1) according to \(\hat{\mathbf{\epsilon}}_{\mathbf{\phi}}\), after some algebras (see Appendix C.3), we rewrite diffusion policy (i.e., Algorithm 1) as follows, \[\hat{\mathbf{a}}_{k+1}=\frac{1}{\sqrt{\alpha_{k}}}\left(\hat{\mathbf{a}}_{k}- \frac{1-\alpha_{k}}{\sqrt{1-\tilde{\alpha}_{k}}}\mathbf{\epsilon}_{\mathbf{\phi}}(\hat{ \mathbf{a}}_{k},\mathbf{s},k)\right)+\sqrt{\frac{1-\alpha_{k}}{\alpha_{k}}} \mathbf{z}_{k}, \tag{12}\] where \(k=0,1,\cdots,K-1\) runs forward in time, the noise \(\mathbf{z}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The agent plays the last (output) action \(\hat{\mathbf{a}}_{K}\). 
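For concreteness, a minimal PyTorch-style sketch of the noise-prediction loss (11) and the reverse iteration (12) is given below. The network architecture, the noise schedule \(\{\alpha_{k}\}\), and all hyper-parameters are illustrative assumptions rather than the authors' released implementation.

```python
# A minimal sketch of Eqs. (11)-(12); architecture and schedule are assumptions.
import torch
import torch.nn as nn

class NoiseModel(nn.Module):
    """eps_phi(a_noisy, s, k): predicts the injected Gaussian noise z (cf. Eq. 11)."""
    def __init__(self, state_dim, action_dim, K, hidden=256):
        super().__init__()
        self.k_embed = nn.Embedding(K, 16)
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 16, hidden), nn.Mish(),
            nn.Linear(hidden, hidden), nn.Mish(),
            nn.Linear(hidden, action_dim))

    def forward(self, a, s, k):
        return self.net(torch.cat([a, s, self.k_embed(k)], dim=-1))

def dipo_loss(eps_model, s, a, alpha_bars):
    """Noise-prediction loss of Eq. (11) on a batch of (s, a) pairs from D."""
    k = torch.randint(0, len(alpha_bars), (a.shape[0],))   # 0-based reverse-time index
    z = torch.randn_like(a)                                 # z ~ N(0, I)
    ab = alpha_bars[k].unsqueeze(-1)
    a_noisy = ab.sqrt() * a + (1.0 - ab).sqrt() * z         # forward perturbation of a
    return ((z - eps_model(a_noisy, s, k)) ** 2).sum(-1).mean()

@torch.no_grad()
def play_action(eps_model, s, action_dim, alphas, alpha_bars):
    """Reverse iteration of Eq. (12): start from Gaussian noise, output the last iterate.
    Note: k follows the paper's forward-running reverse-time index; the schedule
    convention (order of the alphas) is an assumption of this sketch."""
    a = torch.randn(s.shape[0], action_dim)
    K = len(alphas)
    for k in range(K):
        kk = torch.full((s.shape[0],), k, dtype=torch.long)
        eps = eps_model(a, s, kk)
        a = (a - (1.0 - alphas[k]) / (1.0 - alpha_bars[k]).sqrt() * eps) / alphas[k].sqrt()
        if k < K - 1:   # no extra noise on the final step (a common practice; an assumption here)
            a = a + ((1.0 - alphas[k]) / alphas[k]).sqrt() * torch.randn_like(a)
    return a

# Example noise schedule (an assumption; any DDPM-style schedule works):
K = 100
betas = torch.linspace(1e-4, 2e-2, K)
alphas, alpha_bars = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
```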
``` 1:initialize\(\mathbf{\phi}\), critic network \(Q_{\mathbf{\psi}}\); \(\{\alpha_{i}\}_{i=0}^{K}\); \(\bar{\alpha}_{k}=\prod_{i=1}^{k}\alpha_{i}\); step-size \(\eta\); 2:repeat 3: dataset \(\mathcal{D}\leftarrow\emptyset\); initialize \(\mathbf{s}_{0}\sim d_{0}(\cdot)\); 4:#update experience 5:for\(t=0,1,\cdots,T\)do 6: play \(\mathbf{a}_{t}\) follows (12); \(\mathbf{s}_{t+1}\sim\mathbb{P}(\cdot|\mathbf{s}_{t},\mathbf{a}_{t})\); \(\mathcal{D}\leftarrow\mathcal{D}\cup\{\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s }_{t+1},r(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})\}\); 7:#update value function 8:repeat\(N\) times 9: sample \((\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},r(\mathbf{s}_{t+1}|\mathbf{s }_{t},\mathbf{a}_{t}))\sim\mathcal{D}\) i.i.d; take gradient descent on \(\ell_{\mathrm{Q}}(\mathbf{\psi})\) (14); 10:#action gradient 11:for\(t=0,1,\cdots,T\)do 12: replace each action \(\mathbf{a}_{t}\in\mathcal{D}\) follows \(\mathbf{a}_{t}\leftarrow\mathbf{a}_{t}+\eta\nabla_{\mathbf{a}}Q_{\mathbf{\psi}}( \mathbf{s}_{t},\mathbf{a})|_{\mathbf{a}=\mathbf{a}_{t}}\); 13:#update policy 14:repeat\(N\) times 15: sample \((\mathbf{s},\mathbf{a})\) from \(\mathcal{D}\) i.i.d, sample index \(k\sim\mathcal{U}(\{1,\cdots,K\})\), \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\); 16: take gradient decent on the loss \(\ell_{\mathrm{d}}(\mathbf{\phi})=\|\mathbf{z}-\mathbf{\epsilon}_{\mathbf{\phi}}(\sqrt{ \bar{\alpha}_{k}}\mathbf{a}+\sqrt{1-\bar{\alpha}_{k}}\mathbf{z},\mathbf{s},k) \|_{2}^{2}\); 17:until the policy performs well in the environment. ``` **Algorithm 2** (DIPO) Model-Free Reinforcement Learning with **D**Ifusion **PO**licy ### Policy Improvement of DIPO According to (11), we know that only the state-action pairs \((\mathbf{s},\mathbf{a})\in\mathcal{D}\) are used to learn a policy. That inspires us that if we design a method that transforms a given pair \((\mathbf{s},\mathbf{a})\in\mathcal{D}\) to be a "better" pair, then we use the "better" pair to learn a new diffusion policy \(\pi^{{}^{\prime}}\), then \(\pi^{{}^{\prime}}\succeq\pi\). About "better" state-action pair should maintain a higher reward performance than the originally given pair \((\mathbf{s},\mathbf{a})\in\mathcal{D}\). We break our key idea into two steps: **1)** first, we regard the reward performance as a function with respect to actions, \(J_{\pi}(\mathbf{a})=\mathbb{E}_{\mathbf{s}\sim d_{0}(\cdot)}[Q_{\pi}(\mathbf{s },\mathbf{a})]\), which quantifies how the action \(\mathbf{a}\) affects the performance; **2)** then, we update all the actions \(\mathbf{a}\in\mathcal{D}\) through the direction \(\mathbf{\nabla}_{\mathbf{a}}J_{\pi}(\mathbf{a})\) by gradient ascent method: \[\mathbf{a}\leftarrow\mathbf{a}+\eta\mathbf{\nabla}_{\mathbf{a}}J_{\pi}(\mathbf{a}) =\mathbf{a}+\eta\mathbb{E}_{\mathbf{s}\sim d_{0}(\cdot)}[\mathbf{\nabla}_{ \mathbf{a}}Q_{\pi}(\mathbf{s},\mathbf{a})], \tag{13}\] where \(\eta>0\) is step-size, and we call \(\nabla_{\mathbf{a}}J_{\pi}(\mathbf{a})\) as **action gradient**. To implement (13) from samples, we need a neural network \(Q_{\mathbf{\psi}}\) to estimate \(Q_{\pi}\). 
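Assuming for the moment that such a critic \(Q_{\mathbf{\psi}}\) is available (its training objective is given next), the action-gradient step can be sketched as follows; the function and variable names, the step-size, and the optional clipping are illustrative assumptions.

```python
# A sketch of the action-gradient update of Eq. (15); names are assumptions.
import torch

def action_gradient_update(q_net, states, actions, eta=0.03, action_bound=None):
    """Replace each buffered action by  a <- a + eta * grad_a Q_psi(s, a)."""
    actions = actions.clone().requires_grad_(True)
    q_sum = q_net(states, actions).sum()            # sum() yields per-sample gradients
    (grad_a,) = torch.autograd.grad(q_sum, actions)
    with torch.no_grad():
        new_actions = actions + eta * grad_a
        if action_bound is not None:                # keep actions inside a bounded action space
            new_actions = new_actions.clamp(-action_bound, action_bound)
    return new_actions.detach()
```

The updated actions are then written back into the replay buffer \(\mathcal{D}\) before the diffusion policy is refit, which is how DIPO turns action-space gradient ascent into policy improvement.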
Recall \(\{\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},r(\mathbf{s}_{t+1}|\mathbf{s }_{t},\mathbf{a}_{t})\}_{t\geq 0}\sim\pi\), we train the parameter \(\mathbf{\psi}\) by minimizing the following Bellman residual error, \[\ell_{\mathrm{Q}}(\mathbf{\psi})=\big{(}r(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a }_{t})+\gamma Q_{\mathbf{\psi}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})-Q_{\mathbf{\psi}}( \mathbf{s}_{t},\mathbf{a}_{t})\big{)}^{2}. \tag{14}\] Finally, we consider each pair \((\mathbf{s}_{t},\mathbf{a}_{t})\in\mathcal{D}\), and replace the action \(\mathbf{a}_{t}\in\mathcal{D}\) as follows, \[\mathbf{a}_{t}\leftarrow\mathbf{a}_{t}+\eta\boldsymbol{\nabla}_{\mathbf{a}}Q_{ \boldsymbol{\psi}}(\mathbf{s}_{t},\mathbf{a})|_{\mathbf{a}=\mathbf{a}_{t}}. \tag{15}\] ## 6 Related Work Due to the diffusion model being a fast-growing field, this section only presents the work that relates to reinforcement learning, a recent work (Yang et al., 2022) provides a comprehensive survey on the diffusion model. In this section, first, we review recent advances in diffusion models with reinforcement learning. Then, we review the generative models for reinforcement learning. ### Diffusion Models for Reinforcement Learning The diffusion model with RL first appears in (Janner et al., 2022), where it proposes the diffuser that plans by iteratively refining trajectories, which is an essential offline RL method. Later Ajay et al. (2023) model a policy as a return conditional diffusion model, Chen et al. (2023), Wang et al. (2023), Chi et al. (2023) consider to generate actions via diffusion model. SE(3)-diffusion fields (Urain et al., 2023) consider learning data-driven SE(3) cost functions as diffusion models. Pearce et al. (2023) model the imitating human behavior with diffusion models. Reuss et al. (2023) propose score-based diffusion policies for the goal-conditioned imitation learning problems. ReorientDiff (Mishra and Chen, 2023) presents a reorientation planning method that utilizes a diffusion model-based approach. StructDiffusion (Liu et al., 2022) is an object-centric transformer with a diffusion model, based on high-level language goals, which constructs structures out of a single RGB-D image. Brehmer et al. (2023) propose an equivariant diffuser for generating interactions (EDGI), which trains a diffusion model on an offline trajectory dataset, where EDGI learns a world model and planning in it as a conditional generative modeling problem follows the diffuser (Janner et al., 2022). DALL-E-Bot (Kapelyukh et al., 2022) explores the web-scale image diffusion models for robotics. AdaptDiffuser (Liang et al., 2023) is an evolutionary planning algorithm with diffusion, which is adapted to unseen tasks. The above methods are all to solve offline RL problems, to the best of our knowledge, the proposed DIPO is the first diffusion approach to solve online model-free RL problems. The action gradient plays a critical way to implement DIPO, which never appears in existing RL literature. In fact, the proposed DIPO shown in Figure 3 is a general training framework for RL, where we can replace the diffusion policy with any function fitter (e.g., MLP or VAE). ### Generative Models for Policy Learning In this section, we mainly review the generative models, including VAE (Kingma and Welling, 2013), GAN (Goodfellow et al., 2020), Flow (Rezende and Mohamed, 2015), and GFlowNet (Bengio et al., 2021, 2021) for policy learning. 
Generative models are mainly used in cloning diverse behaviors (Pomerleau, 1988), imitation learning (Osa et al., 2018), goal-conditioned imitation learning (Argall et al., 2009), or offline RL (Levine et al., 2020), a recent work (Yang et al., 2023) provides a foundation presentation for the generative models for policy learning. **VAE for Policy Learning.** Lynch et al. (2020), Ajay et al. (2021) have directly applied auto-encoding variational Bayes (VAE) (Kingma and Welling, 2013) and VQ-VAE (Van Den Oord et al., 2017) model behavioral priors. Mandlekar et al. (2020) design the low-level policy that is conditioned on latent from the CVAE. Pertsch et al. (2021) joint the representation of skill embedding and skill prior via a deep latent variable model. Mees et al. (2022), Rosete-Beas et al. (2023) consider seq2seq CVAE (Lynch et al., 2020, Wang et al., 2022) to model of conditioning the action decoder on the latent plan allows the policy to use the entirety of its capacity for learning unimodal behavior. **GAN for Imitation Learning.** GAIL (Ho and Ermon, 2016) considers the Generative Adversarial Networks (GANs) (Goodfellow et al., 2020) to imitation learning. These methods consist of a generator and a discriminator, where the generator policy learns to imitate the experts' behaviors, and the discriminator distinguishes between real and fake trajectories, which models the imitation learning as a distribution matching problem between the expert policy's state-action distribution and the agent's policy (Fu et al., 2018, Wang et al., 2021). For several advanced results and applications, please refer to (Chen et al., 2023, Deka et al., 2023, Rafailov et al., Taranovic et al., 2023). **Flow and GFlowNet Model for Policy Learning.**Singh et al. (2020) consider normalizing flows (Rezende and Mohamed, 2015) for the multi-task RL tasks. Li et al. (2023) propose diverse policy optimization, which consider the GFlowNet (Bengio et al., 2021, 2021) for the structured action spaces. Li et al. (2023) propose CFlowNets that combines GFlowNet with continuous control. Stochastic GFlowNet (Pan et al., 2023) learns a model of the environment to capture the stochasticity of state transitions. Malkin et al. (2022) consider training a GFlowNet with trajectory balance. **Other Methods.** Decision Transformer (DT) (Chen et al., 2021) model the offline RL tasks as a conditional sequence problem, which does not learn a policy follows the traditional methods (e.g., Sutton (1988), Sutton and Barto (1998)). Those methods with DT belong to the task-agnostic behavior learning methods, which is an active direction in policy learning (e,g., (Cui et al., 2023, Brohan et al., 2022, Zheng et al., 2022, Konan et al., 2023, Kim et al., 2023)). Energy-based models (LeCun et al., 2006) are also modeled as conditional policies (Florence et al., 2022) or applied to inverse RL (Liu et al., 2021). Autoregressive model (Vaswani et al., 2017, Brown et al., 2020) represents the policy as the distribution of action, where it considers the distribution of the whole trajectory (Reed et al., 2022, Shafiullah et al., 2022). ## 7 Experiments In this section, we aim to cover the following three issues: How does DIPO compare to the widely used RL algorithms (SAC, PPO, and TD3) on the standard continuous control benchmark? How to show and illustrate the empirical results? How does the diffusion model compare to VAE (Kingma and Welling, 2013) and multilayer perceptron (MLP) for learning distribution? 
How to choose the reverse length \(K\) of DIPO for the reverse inference?

### Comparative Evaluation and Illustration

We provide an evaluation on MuJoCo tasks (Todorov et al., 2012). Figure 6 shows the reward curves for SAC, PPO, TD3, and DIPO on MuJoCo tasks. To demonstrate the robustness of the proposed DIPO, we train DIPO with the same hyperparameters for all 5 tasks; the hyperparameters are provided in Table 3, see Appendix H.1. For each algorithm, we plot the average return of 5 independent trials as the solid curve and the standard deviation across the same 5 seeds as the transparent shaded region. We evaluate all the methods with \(10^{6}\) iterations. Results show that the proposed DIPO achieves the best score across all 5 tasks, and DIPO learns much faster than SAC, PPO, and TD3 on the Ant-v3 and Walker2d-v3 tasks. Although the asymptotic reward performance of DIPO is similar to the baseline algorithms on the other 3 tasks, the proposed DIPO achieves better performance at the initial iterations; we illustrate some insights for these empirical results on HalfCheetah-v3 in Figure 8, and for more discussion, see Appendix H.

Figure 6: Average performances on MuJoCo Gym environments with \(\pm\) std shaded, where the horizontal axis denotes the iterations (\(\times 10^{6}\)); the plots are smoothed with a window of 10.

### State-Visiting Visualization

From Figure 6, we also know that DIPO achieves the best initial reward performance among all 5 tasks; a more intuitive illustration is shown in Figures 7 and 8, where we only consider Ant-v3 and HalfCheetah-v3 (for more discussions and observations, see Appendix H.3). We show the state-visiting region to compare both the exploration and the final reward performance, where we use the same t-SNE [22] to transfer the high-dimensional states visited by all the methods to a 2D visualization. Results in Figure 7 show that DIPO explores a wider range of state-visiting, covering TD3, SAC, and PPO. Furthermore, from Figure 7, we also know that DIPO achieves a denser state-visiting at the final period, which is a reasonable result since, after sufficient training, the agent identifies and avoids the "bad" states and plays actions that transfer to "good" states. On the contrary, PPO shows an aimless exploration in the Ant-v3 task, which partially explains why PPO is not so good in the Ant-v3 task.

Figure 7: State-visiting visualization by each algorithm on the Ant-v3 task, where states get dimension reduction by t-SNE. The points with different colors represent the states visited by the policy of the corresponding style. The distance between points represents the difference between states.

Figure 8: State-visiting visualization for comparison between DIPO and SAC on HalfCheetah-v3.

From Figure 8 we know that, at the initial time, DIPO covers more regions than SAC in HalfCheetah-v3, which results in DIPO obtaining a better reward performance than SAC. This result coincides with the results of Figure 5, which demonstrates that DIPO is efficient for exploration, which leads DIPO to better reward performance. We also know that SAC starts with a narrow state visit that is similar to its final state visit, and SAC performs with the same reward performance as DIPO at the final stage, which implies that SAC runs around the "good" region from the beginning, although SAC shows a relatively worse initial reward performance than DIPO.
Thus, the result of Figure 8 partially explains why DIPO performs better than SAC at the initial iterations but performs with same performance with SAC at the final for the HalfCheetah-v3 task. ### Ablation Study In this section, we consider the ablation study to compare the diffusion model with VAE and MLP for policy learning, and show a trade-off on the reverse length \(K\) for reverse inference. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Reverse length & Ant-v3 & HalfCheetah-v3 & Hopper-v3 & Humanoid-v3 & Walker2d-v3 \\ \hline \(K=100\) & \(\mathbf{5622.30\pm 487.09}\) & \(\mathbf{10472.31\pm 654.96}\) & \(3123.14\pm 636.23\) & \(\mathbf{4878.41\pm 822.03}\) & \(\mathbf{4409.18\pm 469.06}\) \\ \hline \(K=50\) & \(4877.41\pm 1010.35\) & \(9198.20\pm 1738.25\) & \(\mathbf{3214.83\pm 491.15}\) & \(4513.39\pm 1075.94\) & \(4199.34\pm 1062.31\) \\ \hline \(K=20\) & \(5288.77\pm 970.35\) & \(9343.69\pm 986.82\) & \(2511.63\pm 837.03\) & \(4294.79\pm 1583.48\) & \(\mathbf{4467.20\pm 368.13}\) \\ \hline \end{tabular} \end{table} Table 1: Average return over final 6E5 iterations with different reverse lengths \(K=20,50,100\), and maximum value is bolded for each task. Figure 10: Learning curves with different reverse lengths \(K=20,50,100\). Figure 9: Reward Performance Comparison to VAE and MLP with DIPO, SAC, PPO and TD3. #### 7.3.1 Comparison to VAE and MLP Both VAE and MLP are widely used to learn distribution in machine learning, a fundamental question is: why must we consider the diffusion model to learn a policy distribution? what the reward performance is if we use VAE and MLP to model a policy distribution? We show the answer in Figure 9, where the VAE (or MLP) is the result we replace the diffusion policy of DIPO (see Figure 3) with VAE (or MLP), i.e., we consider VAE (or MLP)+action gradient for the tasks. Results show that the diffusion model is more powerful than VAE and MLP for learning a distribution. This implies the diffusion model is an expressive and flexible family to model a distribution, which is also consistent with the field of the generative model. #### 7.3.2 Comparison with Different Reverse Lengths Reverse length \(K\) is an important parameter for the diffusion model, which not only affects the reward performance but also affects the training time, we show the results in Figure 10 and Table 1. The results show that the reverse time \(K=100\) returns a better reward performance than other cases (except Hopper-v3 task). Longer reverse length consumes more reverse time for inference, we hope to use less time for reverse time for action inference. However, a short reverse length \(K=20\) decays the reward performance among (except Walker2d-v3 task), which implies a trade-off between reward performance and reverse length \(K\). In practice, we set \(K=100\) throughout this paper. ## 8 Conclusion We have formally built a theoretical foundation of diffusion policy, which shows a policy representation via the diffusion probability model and which is a new way to represent a policy via a stochastic process. Then, we have shown a convergence analysis for diffusion policy, which provides a theory to understand diffusion policy. Furthermore, we have proposed an implementation for model-free online RL with a diffusion policy, named DIPO. Finally, extensive empirical results show the effectiveness of DIPO among the Mujoco tasks.
2310.01790
Complex magnetic interactions and critical behavior analysis in quaternary CoFeV$_{0.8}$Mn$_{0.2}$Si Heusler alloy
We investigate the magnetic behavior and critical exponents of quaternary CoFeV$_{0.8}$Mn$_{0.2}$Si Heusler alloy to understand the interactions across the Curie temperature ($T_{\rm C}$). The Rietveld refinement of the x-ray diffraction pattern with the space group F$\bar{4}$3m confirms a single-phase cubic Y-type crystal structure. The magnetic susceptibility $\chi (T)$ data show a ferromagnetic nature with a second-order phase transition from paramagnetic to ferromagnetic at $446\pm1$~K. The saturation magnetization at 5~K is determined to be 2.2~$\mu_B$/f.u., which found to be close to the Slater--Pauling rule and indicates its half-metallic nature. The values of asymptotic critical exponents ($\beta$, $\gamma$, and $\delta$) and the $T_{\rm C}$ are extracted through detailed analytical analysis including the Modified Arrott plot, the Kouvel-Fisher (K--F) method, and the Widom scaling relation. Interestingly, the estimated values of $\beta$ = 0.369 and $\gamma$ = 1.445 closely approximate the theoretical values of the 3D Heisenberg model and second-order thermodynamic phase transition across the $T_{\rm C}$. The obtained exponents lead to the collapse of renormalized isotherms, represented by the relationship between the magnetization (m) and the applied magnetic field (h), into two distinct branches above and below the $T_{\rm C}$, which validates the reliability of the analysis. Furthermore, these exponents suggest that the spin interaction follows a decay pattern of $J(r) \sim r^{-4.99}$, indicating a short-range magnetic ordering, akin to the itinerant-electron 3D Heisenberg model.
Guru Dutt Gupt, P. D. Babu, R. S. Dhaka
2023-10-03T04:39:45Z
http://arxiv.org/abs/2310.01790v1
Complex magnetic interactions and critical behavior analysis in quaternary CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si Heusler alloy ###### Abstract We investigate the magnetic behavior and critical exponents of quaternary CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si Heusler alloy to understand the interactions across the Curie temperature (\(T_{\rm C}\)). The Rietveld refinement of the x-ray diffraction pattern with the space group F43m confirms a single-phase cubic Y-type crystal structure. The magnetic susceptibility \(\chi(T)\) data show a ferromagnetic nature with a second-order phase transition from paramagnetic to ferromagnetic at \(446\pm 1\) K. The saturation magnetization at 5 K is determined to be 2.2 \(\mu_{B}\)/f.u., which found to be close to the Slater-Pauling rule and indicates its half-metallic nature. The values of asymptotic critical exponents (\(\beta\), \(\gamma\), and \(\delta\)) and the \(T_{\rm C}\) are extracted through detailed analytical analysis including the Modified Arrott plot, the Kouvel-Fisher (K-F) method, and the Widom scaling relation. Interestingly, the estimated values of \(\beta=0.369\) and \(\gamma=1.445\) closely approximate the theoretical values of the 3D Heisenberg model and second-order thermodynamic phase transition across the \(T_{\rm C}\). The obtained exponents lead to the collapse of renormalized isotherms, represented by the relationship between the magnetization (m) and the applied magnetic field (h), into two distinct branches above and below the \(T_{\rm C}\), which validates the reliability of the analysis. Furthermore, these exponents suggest that the spin interaction follows a decay pattern of \(J(r)\sim r^{-4.99}\), indicating a short-range magnetic ordering, akin to the itinerant-electron 3D Heisenberg model. ## I Introduction In 2008, X. L. Wang [1] suggested a new class of materials reported as spin-gapless semiconductors (SGS) having no band gap in the band structure of one spin channel, but at the same time a finite band gap in another spin channel similar to a semiconductor, at the Fermi level (E\({}_{\rm F}\)) [1]. This has attracted huge attention of scientific community as it is predicted to show an unusual electronic band structure and serves as an intermediary state between the half-metal and gapless semiconductor states in large number of materials and in particular Heusler alloys [2; 3; 4; 5]. For example, the first experimental evidence of SGS was confirmed in Mn\({}_{2}\)CoAl [3], and then many more are predicted to show interesting properties such as Fe\({}_{2}\)CoSi [6], Cr\({}_{2}\)CoAl [7; 8], including quaternary Heusler alloys CoFeMnSi [9; 10; 11; 12]. More interestingly, many of these Heusler alloys have attracted a lot of attention in recent years for a wide range of applications in spintronics and heat engines because of their unique properties such as half metallicity, spin gapless, compensated ferrimagnetism, and magneto-caloric properties [13; 7; 14]. In this direction, the quaternary Heusler alloys (XX\({}^{\prime}\)YZ), where X, X\({}^{\prime}\) and Y are transition metals, and Z is group element, can be synthesized in LiMgPdSn-type structure or also known as Y-type structure associated with the space group F43m [2; 9; 15; 16]. In this crystal structure the elements in four interpenetrating _f.c.c._ sub-lattices occupy all the atomic positions and the preferred occupation of the sites depend on the respective electro-negativity and atomic sizes of the constituent elements [10; 17]. 
The calculated site preference for the quaternary Heusler alloys found three non-degenerate possible atomic configurations (type-I, type-II, and type-III), which reveals the phase stability [18; 9; 10; 4; 12; 19] and energetically most favorable crystal structure [9; 10]. At the same time, a new possible disorder named L2\({}_{1}\)B was reported to affect the SGS and the half-metallicity in quaternary Heusler alloys [15]. The theoretical calculations suggest equi-atomic quaternary Heusler alloy such as CoFeVSi having a Y-type crystal structure shows a transition from SGS to half-metallic nature with the substitution of Mn atoms at V site [10; 11; 15; 17; 20; 21]. Recently, the spin band structure and density of states of CoFeVSi and CoFeMnSi Heusler alloys were examined considering the Y-type crystal structure [11; 12; 22]. A detailed analysis of magnetic behavior revealed a positive linear magneto-resistance in CoFeVSi, which induced changes in the minority spin band that resulted in a spin gapless structure [10; 11]. In a thin film of CoFeV\({}_{1-x}\)Mn\({}_{x}\)Si, a ferromagnetic behavior was reported consisting a Curie temperature (\(T_{\rm C}\)) varying between approximately 271 K (CoFeVSi) to around 620 K (CoFeMnSi) [10; 12]. However, an experimental value of \(T_{\rm C}=657\) K was reported for the CoFeV\({}_{0.5}\)Mn\({}_{0.5}\)Si Heusler alloy [17], which exceeded the value observed for the CoFeMnSi sample [12]. This makes the CoFeV\({}_{1-x}\)Mn\({}_{x}\)Si alloys interesting as the Mn substitution enhances the magnetic moment and Curie temperature [10; 17]. Also, the magnetic Heusler alloys are considered to be either perfect local systems or itinerant ferromagnetic systems, and their exchange couplings can be adequately characterized by a Heisenberg Hamiltonian [23; 24; 25]. However, sev eral theoretical studies suggest that these Heusler alloys exhibit intricate exchange interactions of the localized magnetic moments [26; 27; 28]. The presence of long-range magnetism in these alloys is believed to play a crucial role in two types of couplings among neighboring spins: the direct exchange in the short-range coupling occurs between nearest-neighbor spins, and the long-range coupling between the spins mediated by the interaction of Ruderman-Kittel-Kasuya-Yosida (RKKY) type [28]. More recently, the intrinsic critical behavior in Cobalt based ferromagnetic Heusler alloys is considered promising method to understand the magnetic interactions near the \(T_{\rm C}\)[13; 14; 24; 26; 29]. The hypothesis of universality class states that certain signature of continuous phase transitions, such as critical exponents and scaling functions, are determined by global properties rather than microscopic details of the systems, such as space dimensionality, range of interaction, and order parameter symmetry [30; 31]. The analysis of data should be performed in the asymptotic critical region, defined as \(|\epsilon|\leq 10^{-2}\), to obtain true asymptotic values of critical exponents and draw conclusions about singularity at the transition temperature [32; 33]. Therefore, the critical behavior analysis in isotropic systems can be described using different universality classes near the \(T_{\rm C}\) and the Neel temperature in ferromagnets and antiferromagnets, respectively [13; 34; 33]. 
It has been well established that the values of critical exponents extracted from the detailed analysis of magnetic isotherms closely resemble the standard models [33; 35; 36; 37; 38] such as mean field, 3D Heisenberg, 3D Ising, etc. On the other hand, experimental determination of the critical exponents are crucial to correlate with the universality classes and possibility of any deviation from established classical standard models [39; 40]. Also, the exchange interaction distance \(J(r)\) was calculated to investigate magnetic ordering and spin interaction in full Heusler alloys [13; 14]. However, to the best of our knowledge these studies are still not available in detail for the quaternary Heusler alloys. Therefore, in this paper we report a detailed investigation of critical phenomena using magnetization measurements for the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si quaternary Heusler alloy across the paramagnetic to ferromagnetic transition temperature (\(T_{\rm C}\) = 446\(\pm\)1 K). The Rietveld refinement of x-ray diffraction pattern confirms the single phase cubic Y-type structure with 28% disorder between V and Si present in the sample. The magnetic susceptibility and critical behavior analysis reveal a second-order paramagnetic to ferromagnetic phase transition at around 446 K. More interestingly, the extracted values of critical exponents (\(\beta\) = 0.369 and \(\gamma\) = 1.445) suggest for the 3D Heisenberg type spin interactions in the sample. The Kouvel-Fisher approach and the Widom scaling relation are employed to validate the reliability of this investigation. Furthermore, the critical exponents indicate that the spin interaction decays as \(J(r){\sim}\)\(r^{(-4.99)}\), which suggests for the short-range 3D Heisenberg type magnetic ordering in the sample across the \(T_{\rm C}\). ## II Experimental A polycrystalline CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si sample was synthesized using an arc melting system (CENTORR Vacuum Industries, USA) where high-purity constituent elements, namely Co, Fe, V, Mn, and Si, obtained from Alfa Aesar and/or Sigma Aldrich with a high purity 99.99%, were melted in a water-cooled copper hearth. The melting process is done in the presence of a protective environment of dry argon gas and an additional 5 wt.% of Mn was added to compensate for any evaporation losses due to its low vapor pressure. The weight loss during melting was observed below 0.9%. The ingot sample was wrapped in Mo foil to prevent the reaction with the quartz tube at high temperatures and sealed in a quartz tube under a vacuum of approximately \(10^{-3}\) mbar. Subsequently, the sample underwent annealing at 1073 K for a duration of 14 days to enhance its homogeneity. The phase purity and crystal structure of the sample were analyzed using x-ray diffraction (XRD) measurements at room temperature using a PANalytical conventional x-ray diffractometer with a Cu K\(\alpha\) (\(\lambda\) = 1.5406 A) source of radiation. The Rietveld refinement of XRD pattern was performed using the method implemented in the FULLPROF package. The sample composition was verified using energy-dispersive X-ray technique. The temperature and magnetic field-dependent magnetization data were recorded using a physical property measurement system (PPMS) from Quantum Design, USA. 
We use the low temperature system for the data in the temperature range of 2-300 K and magnetic field range of \(\pm\)3 Tesla, and the high temperature set-up for the data in the temperature range of 300-700 K and magnetic field range of 0-9 Tesla. ## III Results and Discussion The Rietveld refined x-ray diffraction (XRD) pattern of the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si sample, recorded in the 2\(\theta\) angle range 20\({}^{\rm o}\) to 90\({}^{\rm o}\), is presented in Figs. 1, which shows the single phase cubic Y-type structure with F43m (no. 216) space group [12; 41]. The possible crystal structures were defined as Co, Fe, V/Mn and Si atoms randomly occupying Wyckoff positions 4d (3/4,3/4,3/4), 4c (1/4,1/4,1/4), 4b (1/2,1/2,1/2), and 4a (0,0,0) for all three configurations sites [9; 42]. Note that when both the (111) and (200) superlattice peaks appear in the diffraction pattern, the Heusler alloy is regarded as a fully ordered \(L2_{1}\) structure. However, these alloys often consist of disordered structures such as B2 and A2-types, where in the B2-type structure, (111) superlattice peak is absent, while both (111) and (200) superlattice peaks disappear in the A2-type structure [2; 43]. We have performed the Rietveld refinement for the superlattice reflections (111) and (200) with three different configurations by using the corresponding atomic positions in the crystallographic axis. These refinements suggest the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si sample is stable in the type-I configuration, and the re liable parameter \(\chi^{2}\) value of the fitting is the lowest for type-I rather than other configurations such as type-II and type-III, as presented in Figs. 1(b-d). The theoretical calculations suggest the disorder V-Fe and Co-Fe atoms, and the structure is so-called the L2\({}_{1}\)B-type structure [15]; however, it is difficult to find the disorder between Co-Fe and V-Fe with conventional XRD pattern because these elements have similar x-ray scattering cross-section [11; 15]. In order to find the best fit, we modeled the refinement considering the anti-site disorder between V and Si due to the significant difference in the scattering cross section [2]. In Fig. 1(a), the XRD pattern of the sample belongs to the most stable type-I, where the Wyckoff position of the atom is assigned as Co at 4d, Fe at 4c, V/Mn at 4b, and Si at 4a [9]. In our sample, the estimated anti-site disorder is \(\approx\)28% between V-Si where the atoms can exchange their position \(\approx\)28% of the total occupancy in the stable type-I. The reliable parameter \(\chi^{2}\) improved from 2.23 to 2.03 after considering the anti-site disorder, resulting in the best fit is presented in inset of Fig. 1(a). The obtained lattice parameter 5.621 A is good agrees with the refs. [11; 16]. Further, we have investigated the composition of the sample and found to be \(\approx\)27:24:21:4:24, which is close to the stoichiometry ratio. The dc-susceptibility (\(\chi\) versus temperature) data are measured at applied magnetic field of 500 Oe in both the modes zero-field-warming (ZFW) and field-cooled-warming (FCW) in the temperature range of 2-300 K and ZFW and field-cooled-cooling (FCC) in 300-700 K, as shown on the left axis of Fig. 2(a). As we use two different instruments for low and high temperature range, there is a slight mismatch in the magnetization values at 300 K; therefore, the low temperature data are shifted upwards by 2 emu/mol-Oe. 
Here, we find that there is no distinct bifurcation between the ZFW and FCC curves at low temperatures, which suggests ordered magnetism, i.e., there is no magnetic frustration in this sample. In order to find the paramagnetic (PM) to ferromagnetic (FM) transition temperature (\(T_{\rm C}\)), we plot the derivative of magnetization (dM/dT) as a function of temperature in the inset of Fig. 2(a), which clearly confirms the \(T_{\rm C}\) value of 446 K. Furthermore, we plot the inverse of susceptibility and use the Curie-Weiss (C-W) law \(\chi\) = C/(T - \(\Theta_{\rm P}\)) to establish a linear relation (solid blue line) in the high-temperature range (610-700 K) to estimate the C-W temperature (\(\Theta_{\rm P}\)), as shown in Fig. 2(a) on the right axis. The extracted value of \(\mu_{eff}\) is 3.2 \(\mu_{\rm B}\)/f.u. and \(\Theta_{\rm P}\) is determined to be about 530 K, which is greater than the \(T_{\rm C}\) value due to the persistence of ferromagnetic interactions above the transition temperature [34].

Figure 1: (a) The Rietveld refined x-ray diffraction pattern of the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si sample, the difference (solid blue line) between the observed (red open circle) and simulated (solid black line) patterns, and the Bragg positions (vertical green bars). The refinement of the three different configurations of the Y-type structure without anti-site disorder is shown in (b) for type–I, (c) for type–II, and (d) for type–III. The inset of (a) shows the fitting with the disorder between V and Si.

Figure 2: (a) The temperature dependent magnetic susceptibility (\(\chi\)) recorded at 500 Oe magnetic field along with the derivative of \(M\) in the inset. On the right axis, the \(\chi^{-1}\) is plotted and fitted (solid blue line) using the Curie-Weiss law between 610–700 K and extrapolated to zero. (b) The magnetic isotherms (\(M-H\)) measured at 5 K and at 350 K, where the 350 K data from the two different systems used match well.

Fig. 2(b) presents the magnetic isotherms (\(M-H\)) at 5 K and 350 K in the magnetic field range of \(\pm\)3 Tesla. To check the consistency, we have recorded the \(M-H\) data at 350 K using two different instruments, the low temperature module (350 K\({}_{1}\), green color) and the high temperature set-up (350 K\({}_{2}\), red color), which are found to overlap with each other. The absence of hysteresis between the increasing and decreasing legs of the magnetic field indicates the soft ferromagnetic nature of the sample, although it has been reported that the coercive field increases with an increase in the concentration of Mn at the V sites at 300 K [11]. The saturation magnetization turns out to be 2.2 \(\mu_{\rm B}\)/f.u. and 1.4 \(\mu_{\rm B}\)/f.u. at 5 K and 350 K, respectively. An experimental saturation magnetization of 1.5 \(\mu_{\rm B}\)/f.u. at 300 K was reported for a CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si thin film [10; 11], and thus in our sample the saturation value is approximately consistent with both the Y-type and L2\({}_{1}\)B-type structures, which have been theoretically calculated in Ref. [15]. The half-metallic ferromagnetic Heusler alloys follow the Slater-Pauling (S-P) rule, given by M\({}_{\rm t}\) = (Z\({}_{\rm t}\)-24) \(\mu_{\rm B}\)/f.u., where M\({}_{\rm t}\) represents the overall magnetic moment and Z\({}_{\rm t}\) denotes the total count of valence electrons within the unit cell [2].
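As a rough illustration of the Curie-Weiss fit and the Slater-Pauling count described above, the following sketch uses synthetic placeholder data (not the measured curves) to extract \(\Theta_{\rm P}\) and \(\mu_{eff}\) from the high-temperature susceptibility and to evaluate M\({}_{\rm t}\) = (Z\({}_{\rm t}\)-24) \(\mu_{\rm B}\)/f.u. for CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si.

```python
import numpy as np
from scipy.optimize import curve_fit

# Curie-Weiss law: chi = C / (T - Theta_P), fitted over the 610-700 K range.
def curie_weiss(T, C, theta_p):
    return C / (T - theta_p)

# Synthetic placeholder data standing in for the measured chi(T) above T_C.
T = np.linspace(610, 700, 50)                              # K
chi = curie_weiss(T, 1.3, 530) + np.random.normal(0, 1e-4, T.size)   # emu/mol-Oe

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, 500.0))
mu_eff = np.sqrt(8.0 * C_fit)    # mu_eff [mu_B] ~ sqrt(8*C) for C in emu K / (mol Oe)
print(f"Theta_P = {theta_fit:.0f} K, mu_eff = {mu_eff:.2f} mu_B/f.u.")

# Slater-Pauling estimate M_t = (Z_t - 24) mu_B/f.u. for CoFeV0.8Mn0.2Si:
Z_t = 9 + 8 + 0.8 * 5 + 0.2 * 7 + 4    # valence electrons of Co, Fe, V, Mn, Si
print(f"Z_t = {Z_t:.1f}  ->  M_t = {Z_t - 24:.1f} mu_B/f.u.")
```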
Here, the saturation magnetization experimentally observed at 5 K (solid blue line) concurs well with the value of \(M_{S}\) (2.4 \(\mu_{\rm B}\)/f.u.) from the S-P rule, shown with the solid magenta line in Fig. 2(b) [44]. Moreover, the isothermal virgin (\(M-H\)) curves around the ferromagnetic transition temperature (\(T_{\rm C}\)) are recorded in the range of 330-550 K up to a 9 T external applied magnetic field in the high temperature set-up, as shown in Fig. 3(a). The \(M-H\) curve at a high temperature (550 K) shows non-linear behavior owing to the formation of ferromagnetic clusters at temperatures significantly higher than \(T_{\rm C}\); a similar behavior was reported up to 2\(T_{\rm C}\) in Ref. [34]. Further, we present the Arrott plot between \(M^{2}\) and \(H/M\) in Fig. 3(b), which is discussed later with a detailed analysis. First, in order to gain insight into whether the nature of the magnetism is localized or itinerant [45], we employ the Rhodes-Wohlfarth ratio (RWR) [46; 47; 48], which is defined as the ratio of the paramagnetic moment (P\({}_{\rm C}\)) to the saturation magnetization (P\({}_{\rm S}\)) at low temperature (5 K), considered as the ground state since 5 K is much less than \(T_{\rm C}\) [49]. The value of P\({}_{\rm C}\) is calculated using the formula \(\mu_{eff}=\sqrt{\rm P_{C}(P_{C}+2)}\), where \(\mu_{eff}\) is derived from the C-W fitting of the magnetization data and used to determine P\({}_{\rm C}\). A RWR value close to 1 suggests the presence of localized moments, while a value greater than 1 indicates a more traditional itinerant system [34; 47]. In our sample, the RWR is found to be 1.1, which is relatively low compared to other conventional itinerant ferromagnetic materials described in the literature [47; 50; 51], such as FeMnVAl with a RWR of 1.51 [48]. Nevertheless, in our case the RWR is slightly above unity, which is consistent with an itinerant magnetic system. Furthermore, the self-consistent renormalization (SCR) theory provides a comprehensive framework for studying itinerant electron systems [51; 52]. This theory incorporates the conservation of both thermal and zero-point spin fluctuations, as reported in refs. [51; 52]. Notably, the magnetization versus magnetic field (\(M-H\)) curves in the vicinity of the Curie temperature (T\({}_{\rm C}\)) are primarily influenced by the thermal spin fluctuations [47]. To analyze this behavior, we plot \(M_{S}{}^{2}\) versus \(T^{2}\) below \(T_{\rm C}\) in Fig. 4(a), and \(M^{4}\) versus \(H/M\) at the critical isotherm in Fig. 4(b), both of which are expected to follow linear behavior. However, in our case the latter shows a small deviation from linearity, as the linearity is visible only in the large field region, as indicated in Fig. 4(b), which suggests that itinerant magnetism is not the dominant character in the sample [52; 53].

Figure 3: (a) The isothermal magnetization (\(M-H\)) curves recorded in the temperature range of 330–550 K, with a 1 K step in the 440–452 K range across the transition temperature (\(T_{\rm C}\)). (b) The Arrott plot (\(M^{2}\) versus \(H/M\)) in the temperature range of 425–460 K, where the blue curve corresponds to \(T_{\rm C}\) = 446 K.

To quantify this effect, the SCR theory suggests that the \(M-H\) curve at \(T_{\rm C}\) follows Eq. (1), as given below [51]: \[M^{4}=\frac{1}{4.671}\left(\frac{T_{\rm C}^{2}}{T_{A}^{3}}\right)\left(\frac{H}{M}\right), \tag{1}\] where the quantities \(M\) and \(H\) are expressed in units of \(\mu_{\rm B}\)/f.u. and Oe, respectively.
The parameter \(T_{A}\) represents the dispersion of the spin fluctuation spectrum in wave-vector space, measured in Kelvin (K). A linear \(M^{4}\) versus \(H/M\) relationship at \(T_{\rm C}\) is commonly observed in many itinerant ferromagnetic alloys, as reported in refs. [48; 51]. However, in the present case, the curve deviates from linearity, indicating that the magnetism in this material is not solely governed by itinerant behavior but may also exhibit characteristics of localized Mn-spin moments [45; 48; 51]. The linear fit of the data using Eq. (1) yields a slope of 1.578(1)\(\times\)10\({}^{-5}\) (\(\mu_{\rm B}\)/f.u.)\({}^{5}\)/Oe. Using the obtained value of \(T_{\rm C}\) and the slope, we calculate the value of \(T_{A}\) to be 1392 K. Additionally, the expression for \(T_{\rm C}\) within the SCR theory, as described in ref. [51], is given by the equation \(T_{\rm C}=(60{\rm c})^{-3/4}T_{A}^{3/4}P_{S}^{3/2}T_{0}^{1/4}\), where \(c\) is a constant with a value of 0.3353 [51]. By using this equation and the values of \(T_{A}\) and \(P_{S}\), we can determine the width of the dynamical spin fluctuation spectrum (\(T_{0}\)) to be around 1053.5 K. According to the SCR theory, the degree of localization or itinerancy of the spin moment is determined by the ratio \(T_{\rm C}/T_{0}\). When \(T_{\rm C}/T_{0}\) is much smaller than 1, magnetic materials exhibit a strong itinerant character. On the other hand, if \(T_{\rm C}/T_{0}\) is approximately equal to 1, the materials show local moment magnetism [54]. In our sample, the calculated ratio \(T_{\rm C}/T_{0}\) is found to be 0.42, indicating itinerant ferromagnetic behavior [51; 32; 54]. Now we return to the detailed analysis of the isothermal magnetization to understand the magnetic interactions across \(T_{\rm C}\) by extracting the values of the critical exponents (\(\beta\), \(\gamma\), and \(\delta\)). For this purpose, the magnetic isotherm (\(M-H\)) curves measured in the temperature range of 440-452 K with a 1 K step across \(T_{\rm C}\) are shown in Fig. 3(a), where the asymptotic region is defined as \(|\epsilon|<0.01\) according to the criteria mentioned in refs. [32; 33]. We observe a steep rise in the magnetization at low fields and non-saturating behavior in the high field range up to 9 T, which indicates itinerant ferromagnetic behavior in this sample [55; 50]. For any spin interaction system, the transition from paramagnetic (PM) to ferromagnetic (FM) behavior can be determined by constructing a plot between M\({}^{1/\beta}\) and (H/M)\({}^{1/\gamma}\). According to the Landau mean-field theory, the conventional Arrott plot [56] exhibits a linear relationship representing the long-range ferromagnetic interaction near the critical temperature \(T_{\rm C}\), with critical exponents \(\beta=0.5\) and \(\gamma=1\). The resulting plot should consist of multiple parallel straight lines, with one line passing through the origin [13; 32]. The slope of these lines is related to the system's critical exponent \(\beta\), which measures how the magnetization varies with temperature in the proximity of the transition. To construct an Arrott plot, the quantity \(M^{2}\) is plotted in Fig. 3(b) as a function of \(H/M\) for various temperature values across \(T_{\rm C}\).
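The itinerancy measures quoted above can be reproduced directly from the stated numbers (a minimal numeric sketch; no fit is redone here, and the constant \(c\) and the slope are taken as quoted in the text):

```python
import numpy as np

mu_eff, P_S, T_C = 3.2, 2.2, 446.0   # C-W effective moment, M_S at 5 K, Curie temperature
slope = 1.578e-5                      # slope of M^4 vs H/M at T_C from Eq. (1), (mu_B/f.u.)^5/Oe
c = 0.3353                            # SCR constant [51]

# Rhodes-Wohlfarth ratio: mu_eff^2 = P_C (P_C + 2)  =>  P_C = sqrt(1 + mu_eff^2) - 1
P_C = np.sqrt(1.0 + mu_eff**2) - 1.0
print(f"P_C = {P_C:.2f}, RWR = P_C / P_S = {P_C / P_S:.2f}")                 # ~1.1

# Eq. (1): slope = T_C^2 / (4.671 T_A^3)  =>  T_A
T_A = (T_C**2 / (4.671 * slope)) ** (1.0 / 3.0)
# T_C = (60 c)^(-3/4) T_A^(3/4) P_S^(3/2) T_0^(1/4)  =>  T_0
T_0 = (T_C * (60.0 * c) ** 0.75 / (T_A**0.75 * P_S**1.5)) ** 4
print(f"T_A = {T_A:.0f} K, T_0 = {T_0:.0f} K, T_C/T_0 = {T_C / T_0:.2f}")    # ~1392 K, ~1054 K, ~0.42
```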
These Arrott-plot curves show a substantial curvature, which indicates a deviation from the Landau mean-field theory, and a simple extrapolation approach may give rise to large uncertainty [32; 33]. This non-linear behavior of the Arrott plot suggests that the itinerant ferromagnetism cannot be described within the framework of the conventional Landau mean-field model, and that spin fluctuations and electronic correlations cannot be neglected in the present case [26; 34].

Figure 4: (a) \(M_{\rm S}{}^{2}\) versus \(T^{2}\) below \(T_{\rm C}\); the dotted red line is the linear fit. (b) \(M^{4}\) versus \(H/M\) for the isotherm at \(T_{\rm C}\); the solid black line represents the linear fit in the high magnetic field regime and the dotted line is its extrapolation.

Therefore, we need to perform a detailed critical analysis to understand the magnetic interactions across \(T_{\rm C}\) using Banerjee's criterion [57], where the magnetic equation of state specifies the second-order magnetic phase transition from PM to FM by employing the critical exponents \(\alpha\), \(\beta\), and \(\gamma\), which are all mutually related to the magnetic behavior of the sample. The values of these critical exponents can be utilized to thoroughly investigate the magnetic interactions across the second-order phase transition [57]. To investigate the universal scaling laws associated with the spontaneous magnetization \(M_{S}\) and initial susceptibility \(\chi_{0}^{-1}\) near a second-order phase transition, the divergence of the correlation length is expressed as \(\xi=\xi_{0}|(T-T_{\rm C})/T_{\rm C}|^{-\nu}\) [14; 58]. Here, \(\xi\) represents the correlation length, \(\xi_{0}\) is a constant, \(T_{\rm C}\) denotes the Curie temperature, and the exponent \(\nu\) characterizes the critical behavior of the system. Here, we analyze the magnetization isotherms by calculating the true critical exponents using the modified Arrott plots (MAPs). Therefore, the Arrott-Noakes equation of state [56] in the asymptotic region \(|\epsilon|\leq 0.01\) is given as follows: \[(H/M)^{1/\gamma}=A\epsilon+BM^{1/\beta} \tag{2}\] where A and B are constant terms, \(\epsilon\) is the reduced temperature, defined as \(\epsilon=(T-T_{\rm C})/T_{\rm C}\), and \(\beta\), \(\gamma\) are the critical exponents. We performed a strict iterative process with proper values of \(\beta\), \(\gamma\) and \(T_{\rm C}\) to generate the set of parallel lines and to eliminate non-physical and systematic errors in the exponent values. The intercepts on the \(M^{1/\beta}\) and \((H/M)^{1/\gamma}\) axes are obtained via linear extrapolation of the high field region, yielding the starting values of \(M_{S}(T)\) and \({\chi_{0}}^{-1}(T)\), which are utilized to generate the MAPs by using equation (2). There are two criteria for finding the best model with correct \(\beta\) and \(\gamma\) values: (1) the lines must be straight in the high field region, and (2) the slopes of the lines must be the same, which implies that they must be parallel in the MAPs. In Figs. 5(a-e), we present the constructed MAPs using the critical exponents of the standard theoretical models, namely the 3D Heisenberg (\(\beta=0.365\), \(\gamma=1.386\)), 3D XY (\(\beta=0.345\), \(\gamma=1.316\)), 3D Ising (\(\beta=0.325\), \(\gamma=1.24\)), tricritical mean field (\(\beta=0.25\), \(\gamma=1\)) and 2D Ising (\(\beta=0.125\), \(\gamma=1.75\)) models [30; 36; 35; 39].
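To make the MAP construction concrete, the sketch below builds the \(M^{1/\beta}\) versus \((H/M)^{1/\gamma}\) coordinates for each candidate model and fits the high-field straight line; the isotherm arrays are placeholders standing in for the measured \(M(H,T)\) data, so only the procedure (not the numbers) is meaningful:

```python
import numpy as np

# Candidate universality classes (beta, gamma) used for the MAPs in Figs. 5(a-e).
MODELS = {"3D Heisenberg": (0.365, 1.386), "3D XY": (0.345, 1.316),
          "3D Ising": (0.325, 1.24), "Tricritical MF": (0.25, 1.0),
          "2D Ising": (0.125, 1.75)}

H = np.linspace(5e3, 9e4, 40)                 # field grid in Oe (placeholder)
temps = np.arange(440.0, 453.0)               # isotherm temperatures across T_C (K)
# Placeholder isotherms M(H, T); replace with the measured magnetization data.
isotherms = {T: 2.0 * (H / 9e4) ** 0.2 * (1.0 - 0.01 * (T - 446.0)) for T in temps}

high_field = H >= 0.6 * H.max()               # region used for the linear extrapolation

def map_line(M, beta, gamma):
    """Straight-line fit of M^(1/beta) vs (H/M)^(1/gamma) in the high-field region."""
    x, y = (H / M) ** (1.0 / gamma), M ** (1.0 / beta)
    slope, intercept = np.polyfit(x[high_field], y[high_field], 1)
    return slope, intercept

for name, (beta, gamma) in MODELS.items():
    slopes = [map_line(isotherms[T], beta, gamma)[0] for T in temps]
    print(f"{name:15s} high-field slopes from {min(slopes):.3g} to {max(slopes):.3g}")
```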
We find that the lines in the high field regions are not straight and not parallel to each other; therefore, as the second criterion we calculate the normalized slope (NS) at each temperature near \(T_{\rm C}\) using the expression \(\rm NS=S(T)/S(T_{\rm C})\) [13; 26], where the slope of \(M^{1/\beta}\) versus \((H/M)^{1/\gamma}\) at a temperature \(T\) is denoted by \(\rm S(T)\), and \(\rm S(T_{\rm C})\) is the slope at the Curie temperature. The NS value should remain close to unity for the best-fit universality class, and following this criterion the plots in Fig. 5(f) clearly indicate that the 3D Heisenberg model shows the minimum deviation from unity below and above \(T_{\rm C}\). Thus, it can be concluded at this stage that 3D Heisenberg type interactions better define the critical behavior across \(T_{\rm C}\) in the sample.

Figure 5: The isotherm plots \(M^{1/\beta}\) versus \((H/M)^{1/\gamma}\) in the temperature range of 425–460 K using the (a) 3D Heisenberg, (b) 3D XY, (c) 3D Ising, (d) tricritical mean field, and (e) 2D Ising models. (f) The normalized slope \(\rm NS=S(T)/S(T_{\rm C})\) determined by the linear fitting of the data in (a–e) at each temperature.

Further, to confirm the applicability of this model and to reveal the ordering near \(T_{\rm C}\), we calculate \(\beta\) (from \(M_{S}\)), \(\gamma\) (from \(\chi_{0}^{-1}\)), and \(\delta\) (from the magnetization at the critical isotherm) in the asymptotic region using the following simple power laws of the magnetic equation of state [36; 59; 60]: \[M_{\rm S}(0,T)=M_{0}(-\epsilon)^{\beta},\quad\epsilon<0,\ T<T_{\rm C}, \tag{3}\] \[\chi_{0}^{-1}(0,T)=\frac{h_{0}}{M_{0}}(\epsilon)^{\gamma},\quad\epsilon>0,\ T>T_{\rm C}, \tag{4}\] \[M(H,T_{\rm C})={\rm D}H^{1/\delta},\qquad\epsilon=0,\ T=T_{\rm C} \tag{5}\] where \(\epsilon\) is the reduced temperature, and M\({}_{0}\), \(h_{0}/M_{0}\), and D are the critical amplitudes. Equations (3) and (4) are applicable only when the temperature is very close to \(T_{\rm C}\), i.e., in the limit where \(|\epsilon|\) tends to zero [32]. Note that the \(M_{S}\) and \(\chi_{0}^{-1}\) values can be used to calculate the critical exponents \(\beta\) and \(\gamma\) by fitting the curves using equations (3) and (4) over the temperature range \(|\epsilon|\leq 0.01\). Subsequently, the extracted values of \(\beta\) and \(\gamma\) are employed to reconstruct the MAPs, the straight lines are extrapolated in the high field region, and the new values of \(M_{S}\) and \(\chi_{0}^{-1}\) are obtained from the plots. We start from the optimal theoretical model and repeat this iteration until the new \(\beta\) and \(\gamma\) values converge with a better fit to the data. The estimation of the critical exponents using this approach is independent of the initial parameters, confirming the validity of the analysis and the intrinsic character of the derived critical exponents. After a few iterations, \(M_{S}\) and \(\chi_{0}^{-1}\) are estimated from the linear extrapolation of the high field region down to zero field, which gives positive intercepts (the \(M_{S}\) values) on the \(y\)-axis \((M)^{1/\beta}\) and positive intercepts (the \(\chi_{0}^{-1}\) values) on the \(x\)-axis \((H/M)^{1/\gamma}\). The obtained values of \(M_{S}\) and \(\chi_{0}^{-1}\) are displayed as a function of temperature in Fig. 6(a).
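The iterative refinement described above can be sketched as follows (the \(M_S(T)\) and \(\chi_0^{-1}(T)\) arrays are placeholders for the intercepts read off the MAP; in the real analysis the MAP is rebuilt between passes, which is only indicated by a comment here):

```python
import numpy as np
from scipy.optimize import curve_fit

T_C = 446.0
# Placeholder intercept values below/above T_C (stand-ins for the MAP extrapolations).
T_lo = np.array([440.0, 441.0, 442.0, 443.0, 444.0, 445.0])
M_S = 0.9 * (-(T_lo - T_C) / T_C) ** 0.365        # spontaneous magnetization, Eq. (3) form
T_hi = np.array([447.0, 448.0, 449.0, 450.0, 451.0, 452.0])
chi0_inv = 1.2 * ((T_hi - T_C) / T_C) ** 1.44     # inverse initial susceptibility, Eq. (4) form

beta, gamma = 0.365, 1.386                        # start from the best candidate model
for _ in range(5):
    # Eq. (3): M_S(0,T) = M0 (-eps)^beta for T < T_C
    (M0, beta), _ = curve_fit(lambda T, M0, b: M0 * (-(T - T_C) / T_C) ** b,
                              T_lo, M_S, p0=(1.0, beta))
    # Eq. (4): chi0^-1(0,T) = (h0/M0) eps^gamma for T > T_C
    (h0, gamma), _ = curve_fit(lambda T, h0, g: h0 * ((T - T_C) / T_C) ** g,
                               T_hi, chi0_inv, p0=(1.0, gamma))
    # In the full procedure the MAP is rebuilt here with the new (beta, gamma), the
    # high-field lines are re-extrapolated, and M_S / chi0_inv are refreshed.
print(f"refined exponents: beta = {beta:.3f}, gamma = {gamma:.3f}")
```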
In practical scenarios, extrapolation becomes necessary for numerous systems due to the deviation of the isotherms from straight-line behavior at very low fields, making it imperative to rely on the behavior in the high-field region [34]. By fitting the values in Fig. 6(a), we find stable values of \(\beta\) (0.364) and \(\gamma\) (1.440) with \(T_{\rm C}\) (\(\sim\)446 K), as shown by the black dotted lines, where the vertical arrows indicate the deviation at temperatures far away from \(T_{\rm C}\). In practice, for most cases the critical exponents are extracted using simple power-law fits over a very wide temperature range across \(T_{\rm C}\); however, the effective values may depend on the chosen temperature range. Therefore, here we perform the analysis with correction-to-scaling (CTS) terms to find the critical exponents over a wide temperature range (-0.04\(<\epsilon<\)0.03) across \(T_{\rm C}\), as suggested in ref. [32]. In this case, the following modified magnetic equations of state are used: \[M_{\rm S}(0,T)=M_{0}(-\epsilon)^{\beta}\left[1+a_{M_{1}}^{-}|\epsilon|^{\Delta_{1}}+a_{M_{2}}^{-}|\epsilon|^{\Delta_{2}}\right]\ \ \epsilon<0 \tag{6}\] \[\chi_{0}^{-1}(0,T)=\left(h_{0}/M_{0}\right)\epsilon^{\gamma}\left[1+a_{\chi_{1}}^{+}|\epsilon|^{\Delta_{1}}+a_{\chi_{2}}^{+}|\epsilon|^{\Delta_{2}}\right]^{-1}\ \ \epsilon>0 \tag{7}\] where \((a_{M_{1}}^{-},\,a_{M_{2}}^{-})\) and \((a_{\chi_{1}}^{+},\,a_{\chi_{2}}^{+})\) are the amplitudes, \(\Delta_{1}\) and \(\Delta_{2}\) are the leading CTS critical exponents, while the remaining terms have the same meaning as aforementioned. Here, only the leading CTS term \(\Delta_{2}\) is considered during the fitting because the \(\Delta_{1}\) (\(\ll\Delta_{2}\)) term can be neglected in the wide temperature range away from \(T_{\rm C}\). Using these equations [32], the \(M_{S}(0,T)\) and \(\chi_{0}^{-1}(0,T)\) data in Fig. 6(a) are fitted (as shown by the solid black lines) in the temperature range of 425-460 K. The obtained values of the critical exponents using the CTS terms are found to be very similar to the values extracted from the simple power-law fits in the asymptotic region of \(T_{\rm C}\), as summarized in Table 1.

Figure 6: (a) The spontaneous magnetization \(M_{S}\) (red horizontal arrow) and inverse initial susceptibility \(\chi_{0}^{-1}(T)\) (blue horizontal arrow), fitted using equations (3) and (4) (dashed lines) and using equations (6) and (7) (solid lines), respectively. (b) The Kouvel-Fisher plots \(M_{S}(dM_{S}/dT)^{-1}\) (red horizontal arrow) and \(\chi_{0}^{-1}(d\chi_{0}^{-1}/dT)^{-1}\) (blue horizontal arrow) as a function of temperature below and above \(T_{\rm C}\), respectively. The solid black lines are the straight-line fits using equations (8) and (9). The black vertical arrows in (a, b) indicate the deviation in the far temperature range. (c) The isothermal magnetization at \(T_{\rm C}=446\) K; the inset is the same plot on a \(\log(M)\)-\(\log(H)\) scale, where the high field region is fitted with a straight line (solid red line) using equation (5).
Furthermore, the Kouvel-Fisher (KF) method [59] is used to validate the accuracy of the critical exponent values extracted from the MAP, where equations (3) and (4) are rewritten in the Kouvel-Fisher form, as below: \[\frac{M_{\rm S}(T)}{dM_{\rm S}(T)/dT}=\frac{T-T_{\rm C}}{\beta}, \tag{8}\] \[\frac{\chi_{0}^{-1}(T)}{d\chi_{0}^{-1}(T)/dT}=\frac{T-T_{\rm C}}{\gamma}, \tag{9}\] The value of \(T_{\rm C}\) is determined by the intercepts of \(M_{S}(dM_{S}/dT)^{-1}\) and \(\chi_{0}^{-1}(d\chi_{0}^{-1}/dT)^{-1}\) with the temperature axis, as shown in Fig. 6(b); both should be straight lines with slopes of \(\frac{1}{\beta}\) and \(\frac{1}{\gamma}\), respectively, as determined by the above equations (8) and (9). The linear fits to the data in Fig. 6(b) give the values of \(\beta\), \(\gamma\) and \(\rm T_{C}\), which are found to be 0.369(3), 1.445(4) and 446 K, respectively. It is important to note here that the values of the critical exponents are found to be similar in both cases (magnetic equation of state and Kouvel-Fisher method), which validates the correctness of the analysis methods. Moreover, Fig. 6(c) displays the \(M-H\) curve at \(T_{\rm C}\), while the inset shows the same curve on a log-log scale. This log(\(M\)) versus log(\(H\)) curve should be linear in the high field region and follow equation (5), where the linear fit (solid red line) in the high-field region gives the slope \(1/\delta\). In addition, the reliability of the critical exponent \(\delta\) at the critical isotherm (\(M-H\) at 446 K) can be tested by applying the Widom scaling relation [61], as written below: \[\delta=1+\frac{\gamma}{\beta} \tag{10}\] Using equation (10), the \(\delta\) value can be determined by considering the \(\beta\) and \(\gamma\) values estimated from Figs. 6(a) and 6(b), which yields \(\delta=4.956\) and 4.961, respectively.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline Sample & Reference & Method(s) & \(T_{C}\) & \(\alpha\) & \(\beta\) & \(\gamma\) & \(\delta\) & \(a_{M_{2}}^{-}\) & \(a_{\chi_{2}}^{+}\) \\ \hline CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si & This work & MAP & 446.25(5) & –0.168(3) & 0.364(3) & 1.440(6) & 4.96 & & \\ & This work & CTS & 446.25(5) & & 0.368(4) & & & 0.53(6) & \\ & This work & CTS & 446.27(6) & & & 1.451(4) & & & –0.91(4) \\ & This work & KF & 446.23(7) & –0.183(2) & 0.369(3) & 1.445(4) & 4.92 & & \\ & This work & CI & & & & & 4.89 & & \\ Mean field & [33] and [30] & Theoretical & & 0 & 0.5 & 1.0 & 3.0 & & \\ 3D Heisenberg & [36], [30] and [35] & Theoretical & & –0.116 & 0.365 & 1.386 & 4.80 & & \\ 3D XY & [36], [30] and [35] & Theoretical & & –0.006 & 0.345 & 1.316 & 4.82 & & \\ 3D Ising & [36], [30] and [35] & Theoretical & & 0.009 & 0.325 & 1.24 & 4.81 & & \\ Tricritical & [39] & Theoretical & & 0.5 & 0.25 & 1.0 & 5.0 & & \\ \hline \hline \end{tabular} \end{table} Table 1: The extracted values of the critical exponents (\(\alpha\), \(\beta\), \(\gamma\), and \(\delta\)) from the modified Arrott plot (MAP), Kouvel-Fisher (KF) technique, correction-to-scaling (CTS) and critical isotherm (CI) analyses for the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si Heusler alloy, and a comparison with the theoretically anticipated values for different universality classes.

Figure 7: The modified Arrott plot (MAP) with the \(\beta=0.369(3)\) and \(\gamma=1.445(4)\) values obtained from the K–F method, shown at a few temperatures for clarity. The linear fittings are shown with solid black lines in the high field region and the red line at \(T_{\rm C}\) is extrapolated up to zero field.
These values are quite similar to those found from the critical isotherm, see Fig. 6(c), within the experimental error. Overall, we find that the extracted values of the critical exponents (\(\beta\), \(\gamma\), \(\delta\)) as well as \(T_{\rm C}\) are consistent and in good agreement with the Widom scaling relation. The critical exponents obtained from the K-F method are used to rebuild the MAPs, which manifest straight and parallel lines in the \(\geq\)1 T field region, as shown in Fig. 7. However, the non-linearity in the very low field region (not shown) indicates the formation of a multi-domain spin structure with different orientations [13, 62]. Finally, in order to relate \(M(H,\epsilon)\), \(H\), and \(T_{\rm C}\), the magnetic equation of state in the vicinity of \(T_{\rm C}\) can be expressed, according to scaling theory [63, 64], as: \[M(H,\epsilon)=|\epsilon|^{\beta}f_{\pm}\left(\frac{H}{|\epsilon|^{\beta+\gamma}}\right) \tag{11}\] where the regular scaling functions, denoted as \(f_{-}\) (for \(T<T_{\rm C}\)) and \(f_{+}\) (for \(T>T_{\rm C}\)), play an important role in the scaling equation. In terms of the renormalized magnetization, denoted as \(m\approx M(H,\epsilon)|\epsilon|^{-\beta}\), and the renormalized field, denoted as \(h\approx H|\epsilon|^{-(\gamma+\beta)}\), the above scaling equation can be rewritten as \(m=f_{\pm}(h)\). The isotherms below \(T_{\rm C}\) correspond to the function f\({}_{-}\), while those above \(T_{\rm C}\) correspond to the function f\({}_{+}\). The plot of \(m\) versus \(h\) collapses onto the two distinct branches of the scaling function when appropriate values of \(\beta\) and \(\gamma\), as described by equations (8) and (9), are chosen. This is a crucial test to check whether the critical exponents are stable across \(T_{\rm C}\). In Table 1, we present the values of the critical exponents (\(\beta\), \(\gamma\) and \(\delta\)) estimated from the various approaches, as well as the theoretically anticipated values for various models. Now we can determine whether these exponents satisfy the scaling equation of state (equation (11)). Therefore, we use the critical exponents from the K-F plot and plot \(m\) versus \(h\) in Fig. 8(a); the same plot is zoomed into the low field region on a log-log scale, as shown in the inset. It is noted that a few data points are removed from the low field region for the sake of clarity of the two distinct branches above and below \(T_{\rm C}\). These isotherms clearly exhibit a collapse behavior forming two distinct branches: one just below and one just above \(T_{\rm C}\). The accuracy and reliability of the exponents and \(T_{\rm C}\) values are further confirmed by a plot between \(m^{2}\) and \(h/m\), where the \(M-H\) curves separate into two branches and follow the scaling function, as shown in Fig. 8(b). Here, the crucial point is to investigate the type of magnetic interactions in the proximity of \(T_{\rm C}\). For a long-range extended-type itinerant spin system, the exchange interaction decays spatially as \(J(r){\sim}r^{-(d+\sigma)}\), where \(d\) and \(\sigma\) stand for the lattice dimensionality and the range of interaction, respectively [32, 33, 34, 26, 14]. It has been well established that the value of \(\sigma\) is crucial to understand the type of spin interactions; for example, \(\sigma{<}1.5\) corresponds to long-range ordering and \(\sigma{\geq}2\) to short-range ordering [65, 34].
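Before turning to the renormalization-group estimate of \(\sigma\) below, the Kouvel-Fisher linearization (Eqs. (8)-(9)) and the Widom check (Eq. (10)) used above can be sketched numerically; the arrays are placeholders for the \(M_S(T)\) and \(\chi_0^{-1}(T)\) data of Fig. 6(a):

```python
import numpy as np

T_C_in = 446.0
T_lo = np.linspace(440.0, 445.0, 11)
M_S = 0.9 * (-(T_lo - T_C_in) / T_C_in) ** 0.369            # placeholder M_S(T) below T_C
T_hi = np.linspace(447.0, 452.0, 11)
chi0_inv = 1.2 * ((T_hi - T_C_in) / T_C_in) ** 1.445        # placeholder chi0^-1(T) above T_C

# Eq. (8): M_S (dM_S/dT)^-1 = (T - T_C)/beta is linear in T with slope 1/beta.
y_lo = M_S / np.gradient(M_S, T_lo)
slope_lo, icpt_lo = np.polyfit(T_lo, y_lo, 1)
beta_KF, T_C_beta = 1.0 / slope_lo, -icpt_lo / slope_lo

# Eq. (9): chi0^-1 (d chi0^-1/dT)^-1 = (T - T_C)/gamma above T_C.
y_hi = chi0_inv / np.gradient(chi0_inv, T_hi)
slope_hi, icpt_hi = np.polyfit(T_hi, y_hi, 1)
gamma_KF, T_C_gamma = 1.0 / slope_hi, -icpt_hi / slope_hi

print(f"beta = {beta_KF:.3f} (T_C = {T_C_beta:.1f} K), gamma = {gamma_KF:.3f} (T_C = {T_C_gamma:.1f} K)")
print(f"Widom delta = 1 + gamma/beta = {1.0 + gamma_KF / beta_KF:.2f}")      # Eq. (10), ~4.9
```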
Here, in order to compute the value of \(\sigma\), the renormalization group technique can be used, where the expression for \(\gamma\) is given as below [66, 38]: \[\begin{split}\gamma=& 1+\frac{4}{d}\left(\frac{n+2}{n+8}\right)\Delta\sigma+\left[\frac{8(n+2)(n-4)}{d^{2}(n+8)^{2}}\right]\\ &\times\left[1+\frac{2G\left(\frac{d}{2}\right)(7n+20)}{(n-4)(n+8)}\right]\Delta\sigma^{2},\end{split} \tag{12}\] Here, \(G(d/2)=3-\frac{1}{4}(\frac{d}{2})^{2}\) and \(\Delta\sigma=\sigma-d/2\), with the spin dimensionality \(n=3\) and the lattice dimensionality \(d=3\). The estimated value of \(\gamma\) = 1.445 (see Fig. 7) can be used to determine the value of \(\sigma\) using the above equation (12), which is found to be \(\sim\)1.987. Further, we use the value of \(\sigma\) to compute the other exponents using the following formulas: \(\alpha=2-\nu d\), where the correlation-length exponent is \(\nu=\gamma/\sigma\), \(\beta=(2-\alpha-\gamma)/2\), and \(\delta=1+(\gamma/\beta)\) [30; 33]. Here, the determined values of \(\beta\) = 0.368 and \(\delta\) = 4.927 are found to be consistent with those obtained in the above analysis using the K-F method. In the case of isotropic three-dimensional ferromagnetism, the exchange interaction in the 3D Heisenberg model (with \(\beta\) = 0.365, \(\gamma\) = 1.386, and \(\delta\) = 4.8) decays faster than the power law \(J(r)\sim r^{-5}\), which corresponds to \(\sigma=2\). In the present case, the value of \(\sigma\) is comparable to 2 and \(\beta\) = 0.369, which suggests the presence of short-range spin interactions close to the 3D Heisenberg model [65; 26].

Figure 8: (a) The renormalized magnetization (m) as a function of the renormalized field (h); the inset is the log-log plot of the m(h) data. (b) The m\({}^{2}\) versus h/m plot below and above \(T_{\rm C}\), using the \(\beta\) and \(\gamma\) values from the K–F method. The data at different temperatures are represented by various symbols/colors.

## IV Conclusions

In summary, we find that the CoFeV\({}_{0.8}\)Mn\({}_{0.2}\)Si quaternary sample is stable and crystallizes in a single-phase cubic Y-type structure, with \(\approx\)28% anti-site disorder identified between the V and Si atoms. We observe a second-order paramagnetic to ferromagnetic thermodynamic phase transition at around 446 K. Interestingly, the saturation magnetization of \(\approx\)2.2 \(\mu_{\rm B}\)/f.u. measured at 5 K is found to be in good agreement with the Slater-Pauling rule, which is a prerequisite for a half-metallic nature. The itinerant ferromagnetic character of the moments is inferred from the self-consistent renormalization theory and the Rhodes-Wohlfarth ratio of the sample. The values of the critical exponents (\(\beta\), \(\gamma\), and \(\delta\)) are extracted from the modified Arrott plot, the Kouvel-Fisher method, and the critical isotherm, which consistently and accurately describe the scaling behavior and suggest the presence of 3D Heisenberg type interactions in the sample. Furthermore, the spin interaction decays as \(J(r)\)\(\sim\)\(r^{-4.99}\), which indicates short-range itinerant magnetic ordering.

## V Acknowledgments

GDG thanks the MHRD, India, for the fellowship and IIT Delhi for various experimental facilities: XRD and PPMS in the Department of Physics, the glass blowing section, and EDX at the central research facility (CRF). The work is financially supported by the BRNS through a DAE Young Scientist Research Award to RSD with the project sanction No. 34/20/12/2015/BRNS.
RSD also acknowledges SERB-DST for the financial support through a core research grant (project reference no. CRG/2020/003436).
2302.01581
Learning to Decouple Complex Systems
A complex system with cluttered observations may be a coupled mixture of multiple simple sub-systems corresponding to latent entities. Such sub-systems may hold distinct dynamics in the continuous-time domain; therein, complicated interactions between sub-systems also evolve over time. This setting is fairly common in the real world but has been less considered. In this paper, we propose a sequential learning approach under this setting by decoupling a complex system for handling irregularly sampled and cluttered sequential observations. Such decoupling brings about not only subsystems describing the dynamics of each latent entity but also a meta-system capturing the interaction between entities over time. Specifically, we argue that the meta-system evolving within a simplex is governed by projected differential equations (ProjDEs). We further analyze and provide neural-friendly projection operators in the context of Bregman divergence. Experimental results on synthetic and real-world datasets show the advantages of our approach when facing complex and cluttered sequential data compared to the state-of-the-art.
Zihan Zhou, Tianshu Yu
2023-02-03T07:24:58Z
http://arxiv.org/abs/2302.01581v1
# Learning to Decouple Complex Systems ###### Abstract A complex system with cluttered observations may be a coupled mixture of multiple simple subsystems corresponding to _latent entities_. Such sub-systems may hold distinct dynamics in the continuous-time domain; therein, complicated interactions between sub-systems also evolve over time. This setting is fairly common in the real world but has been less considered. In this paper, we propose a sequential learning approach under this setting by decoupling a complex system for handling irregularly sampled and cluttered sequential observations. Such decoupling brings about not only subsystems describing the dynamics of each latent entity but also a meta-system capturing the interaction between entities over time. Specifically, we argue that the meta-system evolving within a simplex is governed by _projected differential equations (ProjDEs)_. We further analyze and provide neural-friendly projection operators in the context of Bregman divergence. Experimental results on synthetic and real-world datasets show the advantages of our approach when facing complex and cluttered sequential data compared to the state-of-the-art. Machine Learning, ICML ## 1 Introduction Discovering hidden rules from sequential observations has been an essential topic in machine learning, with a large variety of applications such as physics simulation (Sanchez-Gonzalez et al., 2020), autonomous driving (Diehl et al., 2019), ECG analysis (Golany et al., 2021) and event analysis (Chen et al., 2021), to name a few. A standard scheme is to consider sequential data at each timestamp to be holistic and homogeneous under some ideal assumptions (i.e., only the temporal behavior of one entity is involved in a sequence), under which data/observation is treated as a collection of slices at a different time from a unified system. A series of sequential learning models fall into this category, including variants of recurrent neural networks (RNNs) (Cho et al., 2014; Hochreiter and Schmidhuber, 1997), neural differential equations (DEs) (Chen et al., 2018; Kidger et al., 2020; Rusch and Mishra, 2021; Zhu et al., 2021) and spatial/temporal attention-based approaches (Vaswani et al., 2017; Fan et al., 2019; Song et al., 2017). These variants fit well into the scenarios agreeing with the aforementioned assumptions and are proved effective in learning or modeling for relatively simple applications with clean data sources. In the real world, a system may not only describe a single and holistic entity but also consist of several _distinguishable_ interacting but simple subsystems, where each subsystem corresponds to a physical entity. For example, we can think of the movement of a solar system as the mixture of distinguishable subsystems of the sun and surrounding planets, while interactions between these celestial bodies over time are governed by the laws of gravity. Back centuries ago, physicists and astronomers made enormous efforts to discover the rule of celestial movements from the records of every single body and eventually delivered the neat yet elegant differential equations (DEs) depicting principles of moving bodies and interactions therein. Likewise, nowadays, researchers also developed a series of machine learning models for sequential data with distinguishable partitions (Qin et al., 2017). 
Two widely adopted strategies for learning the interactions between subsystems are graph neural networks (Iakovlev et al., 2021; Ha and Jeong, 2021; Kipf et al., 2018; Yildiz et al., 2022; Xhonneux et al., 2020) and attention mechanism (Vaswani et al., 2017; Lu et al., 2020; Goyal et al., 2021), while the interactions are typically encoded with "messages" between nodes and pair-wise "attention scores", respectively. It is worth noting an even more difficult scenario: * _The data/observation is so cluttered that cannot be readily distinguished into separate parts_. This can be either due to the way of data collection (e.g., videos consisting of multiple objects) or because there are no explicit physical entities originally (e.g., weather time series). To tackle this, a fair assumption can be introduced that complex observations can be decoupled into several relatively independent modules in the feature space, where each module corresponds to a _latent entity_. Latent entities may not have exact physical meanings, but learning procedures can greatly benefit from such decoupling, as this assumption can be viewed as strong regularization to the system. This assumption has been successfully incorporated in several models for learning from _regularly_ sampled sequential data by emphasizing "independence" to some extent between channels or groups in the feature space (Li et al., 2018; Yu et al., 2020; Goyal et al., 2021; Madan et al., 2021). Another successful counterpart in parallel benefiting from this assumption is transformer (Vaswani et al., 2017) which stacks multiple layers of self-attention and point-wise feedforward networks. In transformers, each attention head can be viewed as a relatively independent module, and interaction happens throughout the head re-weighting procedure following the attention scores. Lu et al. (2020) presented an interpretation from a dynamic point of view by regarding a basic layer in the transformer as one step of integration governed by differential equations derived from interacting particles. Vuckovic et al. (2020) extended this interpretation with more solid mathematical support by viewing the forward pass of the transformer as applying successive Markov kernels in a particle-based dynamic system. We note, however, despite the ubiquity of this setting, there is barely any previous investigation focusing on learning for _irregularly sampled_ and _cluttered_ sequential data. The aforementioned works either fail to handle the irregularity (Goyal et al., 2021; Li et al., 2018) or neglect the independence/modularity assumption in the latent space (Chen et al., 2018; Kidger et al., 2020). In this paper, inspired by recent advances of neural controlled dynamics (Kidger et al., 2020) and novel interpretation of attention mechanism (Vuckovic et al., 2020), we take a step to propose an effective approach addressing this problem under the dynamic setting. To this end, our approach explicitly learned to decouple a complex system into several latent sub-systems and utilizes an additional meta-system capturing the evolution of interactions over time. Specifically, taking into account the meta-system capturing interactions evolving in a constrained set (e.g., simplex), we further characterized such interactions using projected differential equations (ProjDEs) with neural-friendly projection operators. 
We argued our **contributions** as follows: * We provide a novel modeling strategy for sequential data from a system decoupling perspective; * We propose a novel and natural interpretation of evolving interactions as a ProjDE-based meta-system, with insights into projection operators in the sense of Bregman divergence; * Our approach is parameter-insensitive and more compatible with other modules and data, thus being flexible to be integrated into various tasks. Extensive experiments were conducted on either regularly or irregularly sampled sequential data, including both synthetic and real-world settings. It was observed that our approach achieved prominent performance compared to the state-of-the-art on a wide spectrum of tasks. ## 2 Related Work Sequential learning.Traditionally, learning with sequential data can be performed using variants of recurrent neural networks (RNNs) (Hochreiter and Schmidhuber, 1997; Cho et al., 2014; Li et al., 2018) under the Markov setting. While such RNNs are generally designed for regular sampling frequency, a more natural line of counterparts lies in the continuous time domain allowing irregularly sampled time series as input. As such, a variety of RNN-based methods are developed by introducing exponential decay on observations (Che et al., 2018; Mei and Eisner, 2017), incorporating an underlying Gaussian process (Li and Marlin, 2016; Futoma et al., 2017), or integrating some latent evolution under ODEs (Rubanova et al., 2019; De Brouwer et al., 2019). A seminal work interpreting forward passing in neural networks as an integration of ODEs was proposed in Chen et al. (2018), followed by a series of relevant works (Liu et al., 2019; Li et al., 2020; Dupont et al., 2019). As integration over ODEs allows for arbitrary step length, it is natural modeling of irregular time series and proved powerful in many machine learning tasks (e.g., bioinformatics (Golany et al., 2021), physics (Nardini et al., 2021) and computer vision (Park et al., 2021)). (Kidger et al., 2020) studied a more effective way of injecting observations into the system via a mathematical tool called Controlled differential Equation, achieving state-of-the-art performance on several benchmarks. Some variants of neural ODEs have also been extended to discrete structure (Chamberlain et al., 2021; Xhonneux et al., 2020) and non-Euclidean setting (Chamberlain et al., 2021). Learning with independence.Independence or modular property serves as strong regularization or prior in some learning tasks under static setting (Wang et al., 2020; Liu et al., 2020). In the sequential case, some early attempts over RNNs emphasized implicit "independence" in the feature space between dimensions or channels (Li et al., 2018; Yu et al., 2020). As independence assumption commonly holds in vision tasks (with distinguishable objects), Pang et al. (2020); Li et al. (2020) proposed video understanding schemes by decoupling the spatiotemporal patterns. For a more generic case where the observations are collected without any prior, Goyal et al. (2021) devised a sequential learning scheme called recurrent independence mechanism (RIM), and its generalization ability was extensively stud ied in Madan et al. (2021). Lu et al. (2020) investigated self-attention mechanism (Vaswani et al., 2017) and interpreted it as a nearly independent multi-particle system with interactions therein. Vuckovic et al. (2020) further provided more solid mathematical analysis with the tool of Markov kernel. 
The study of such a mechanism in the dynamical setting was barely observed. **Learning dynamics under constraints.** It is practically significant as a series of real-world systems evolve within some manifolds, such as fluid (Vinuesa & Brunton, 2022), coarse-grained dynamics (Kaltenbach & Koutsourelakis, 2020), and molecule modeling (Chmiela et al., 2020). While some previous research incorporates constraints from a physical perspective (Kaltenbach & Koutsourelakis, 2020; Linot & Graham, 2020), an emerging line is empowered by machine learning to integrate or even discover the constraints (Kolter & Manek, 2019; Lou et al., 2020; Goldt et al., 2020). To ensure a system evolves in constraints, efficient projections or pseudo-projections are required, about which Bregman divergence provides rich insights (Martins & Astudillo, 2016; Krichene et al., 2015; Lim & Wright, 2016). Despite these results, to our best knowledge, there is barely any related investigation about neural-friendly projections. ## 3 Methodology ### Background In this section, we briefly review three aspects related to our approach. Our approach is built upon the basic sub-system derived from _Neural Controlled Dynamics_(Kidger et al., 2020), while the interactions are modeled at an additional meta-system analogous to _Self-attention_(Lu et al., 2020; Vuckovic et al., 2020), and further interpreted and generalized using the tool of _Projected Differential Equations_(Dupuis & Nagurney, 1993). Neural Controlled Dynamics.Continuous-time dynamics can be expressed using differential equations \(\mathbf{z}^{\prime}(t)=d\mathbf{z}/dt=f(\mathbf{z}(t),t)\), where \(\mathbf{z}\in\mathbb{R}^{d}\) and \(t\) are a \(d\)-dimension state and the time, respectively. Function \(f:\mathbb{R}^{d}\times\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}\) governs the evolution of the dynamics. Given the initial state \(\mathbf{z}(t_{0})\), the state at any time \(t_{1}\) can be evaluated with: \[\mathbf{z}(t_{1})=\mathbf{z}(t_{0})+\int_{t_{0}}^{t_{1}}f(\mathbf{z}(s),s) \mathrm{d}s \tag{1}\] In practice, we aim at learning the dynamics from a series of observations or controls \(\{\mathbf{x}(t_{k})\in\mathbb{R}^{b}|k=0,1,...\}\) by parameterizing the dynamics with \(f_{\theta}(\cdot)\) where \(\theta\) is the unknown parameter to be learned. Thus, a generic dynamics incorporating outer signals \(\mathbf{x}\) can be written as: \[\mathbf{z}(t_{1})=\mathbf{z}(t_{0})+\int_{t_{0}}^{t_{1}}f_{\theta}(\mathbf{z} (s),\mathbf{x}(s),s)\mathrm{d}s \tag{2}\] Rather than directly injecting \(\mathbf{x}\) as in Eq. (2), Neural Controlled Differential Equation (CDE) proposed to deal with outer signals with a Riemann-Stieltjes integral (Kidger et al., 2020): \[\mathbf{z}(t_{1})=\mathbf{z}(t_{0})+\int_{t_{0}}^{t_{1}}\mathbf{F}_{\theta}( \mathbf{z}(s))\mathbf{x}^{\prime}(s)\mathrm{d}s \tag{3}\] where \(\mathbf{F}_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times b}\) is a learnable vector field and \(\mathbf{x}^{\prime}(s)=\mathrm{d}\mathbf{x}/\mathrm{d}s\) is the derivative of signal \(\mathbf{x}\) w.r.t. time \(s\), thus "\(\mathbf{F}_{\theta}(\mathbf{z}(s))\mathbf{x}^{\prime}(s)\)" is a matrix-vector multiplication. During implementation, Kidger et al. (2020) argued that a simple cubic spline interpolation on \(\mathbf{x}\) allows dense calculation of \(\mathbf{x}^{\prime}(t)\) at any time \(t\) and exhibits promising performance. In (Kidger et al., 2020), it is also mathematically shown that incorporating observations/controls following Eq. 
(3) is with greater representation ability compared to Eq. (2), hence achieving state-of-the-art performance on several public tasks. Self-attention.It is argued in Lu et al. (2020); Vuckovic et al. (2020) that a basic unit in Transformer (Vaswani et al., 2017) with self-link consisting of one self-attention layer and point-wise feedforward layer amounts to simulating a multi-particle dynamical system. Considering such a layer with \(n\) attention-heads (corresponding to \(n\) particles), given an attention head index \(i\in\{1,2,...,n\}\), the update rule of the \(i\)th unit at depth \(l\) reads: \[\tilde{\mathbf{z}}_{l,i}=\mathbf{z}_{l,i}+\mathrm{MHAtt}_{W_{\mathrm{att}}^{ l}}\left(\mathbf{z}_{l,i},[\mathbf{z}_{l,1},...,\mathbf{z}_{l,n}]\right) \tag{4a}\] \[\mathbf{z}_{l+1,i}=\tilde{\mathbf{z}}_{l,i}+\mathrm{FFN}_{W_{\mathrm{ fin}}^{l}}\left(\tilde{\mathbf{z}}_{l,i}\right) \tag{4b}\] where \(\mathrm{MHAtt}_{W_{\mathrm{att}}^{l}}\) and \(\mathrm{FFN}_{W_{\mathrm{fin}}^{l}}\) are multi-head attention layer and feedforward layer with parameters \(W_{\mathrm{att}}^{l}\) and \(W_{\mathrm{fin}}^{l}\), respectively. Eq. (4) can then be interpreted as an interacting multi-particle system: \[\frac{\mathrm{d}\mathbf{z}_{i}(t)}{\mathrm{d}t}=F(\mathbf{z}_{i}(t),[\mathbf{ z}_{1}(t),...,\mathbf{z}_{n}(t)],t)+G(\mathbf{z}_{i}(t)) \tag{5}\] where function \(F\) corresponding to Eq. (4a) represents the diffusion term and \(G\) corresponding to Eq. (4b) stands for the convection term. Notably, the attention score obtained via \(\mathrm{softmax}\) in Eq. (4a) is regarded as a Markov kernel. Readers are referred to Lu et al. (2020); Vuckovic et al. (2020) for more details. Projected DEs.It is a tool depicting the behavior of dynamics where solutions are constrained within a (convex) set. Concretely, given a closed polyhedral \(\mathcal{K}\subset\mathbb{R}^{n}\) and a mapping \(H:\mathcal{K}\rightarrow\mathbb{R}^{n}\), we can introduce an operator \(\Pi_{\mathcal{K}}:\mathbb{R}^{n}\times\mathcal{K}\rightarrow\mathbb{R}^{n}\) which is defined by means of directional derivatives as: \[\Pi_{\mathcal{K}}(\mathbf{a},H(\mathbf{a}))=\lim_{\alpha\to 0_{+}}\frac{P_{ \mathcal{K}}(\mathbf{a}+\alpha H(\mathbf{a}))-\mathbf{a}}{\alpha} \tag{6}\] where \(P_{\mathcal{K}}(\cdot)\) is a projection onto \(\mathcal{K}\) in terms of Euclidean distance: \[\|P_{\mathcal{K}}(\mathbf{a})-\mathbf{a}\|_{2}=\inf_{\mathbf{y}\in\mathcal{K}}\| \mathbf{y}-\mathbf{a}\|_{2} \tag{7}\] Intuitively, Eq. (6) pictures the dynamics of \(\mathbf{a}\) driven by function \(H\), but constrained within \(\mathcal{K}\). Whenever \(\mathbf{a}\) reaches beyond \(\mathcal{K}\), it would be projected back using Eq. (7). By extending Eq. (6), (Dupuis and Nagurney, 1993; Zhang and Nagurney, 1995) considered the projected differential equations as follows: \[\frac{\mathrm{d}\mathbf{a}(t)}{\mathrm{d}t}=\Pi_{\mathcal{K}}(\mathbf{a},H( \mathbf{a})) \tag{8}\] which allows for discontinuous dynamics on \(\mathbf{a}\). ### Learning to decouple Our method is built upon the assumption that cluttered sequential observations are composed of several relatively independent sub-systems and, therefore, explicitly learns to decouple them as well as to capture the mutual interactions with a meta-system in parallel. Let the cluttered observations/controls be \(\mathbf{c}(t)\in\mathbb{R}^{k}\) at time \(t\) for \(t=1,...,T\), where \(T\) is the time horizon. 
We employ \(n\) distinct mappings with learnable parameters (e.g., MLPs) to obtain the respective control for each sub-system: \(\mathbf{x}_{i}(t)=p_{i}(\mathbf{c}(t))\in\mathbb{R}^{m}\) for \(i=1,...,n\). A generic dynamics of the proposed method can be written as: \[\frac{\mathrm{d}\mathbf{z}_{i}(t)}{\mathrm{d}t} =f_{i}\left(\mathbf{z}_{i}(t),\left[\mathbf{z}_{1}(t),...,\mathbf{z}_{n}(t)\right],\mathbf{x}_{i}(t),\mathbf{a}(t)\right) \tag{9a}\] \[\frac{\mathrm{d}\mathbf{a}(t)}{\mathrm{d}t} =\Pi_{\mathcal{S}}\left(\mathbf{a}(t),g(\mathbf{a}(t),\left[\mathbf{z}_{1}(t),...,\mathbf{z}_{n}(t)\right])\right) \tag{9b}\] where Eq. (9a) and Eq. (9b) refer to the \(i\)th sub-system describing the evolution of a single latent entity and the meta-system depicting the interactions, respectively. \(\mathbf{z}_{i}(t)\in\mathbb{R}^{q}\) is the hidden state for the \(i\)th sub-system, and \(\mathbf{a}\) is a tensor that governs the dynamics of the interactions. Here \(\Pi_{\mathcal{S}}(\cdot)\) is a projection operator, which projects the evolving trajectory onto the set \(\mathcal{S}\). We introduce such an operator as it is assumed that interactions among latent entities should be constrained following some latent manifold structure. \(f_{i}(\cdot)\) and \(g(\cdot)\) are both learnable functions and play the essential roles in capturing the underlying complex dynamics. _Remark 1_.: It is seen that the projection operator \(\Pi_{\mathcal{S}}(\cdot)\) and the set \(\mathcal{S}\) play important roles in Eq. (9b). For \(\Pi_{\mathcal{S}}(\cdot)\), while previous works on ProjDEs only consider the L2-induced projection, we propose a novel interpretation and extension under Bregman divergence. For \(\mathcal{S}\), we consider a probabilistic simplex following the setting in Lu et al. (2020); Vuckovic et al. (2020), though it can be any polyhedral set. According to Eq. (9), we fully decouple a complex system into several components. Although we found some decoupling counterparts in the context of RNNs (Li et al., 2018; Yu et al., 2020) and attention-like mechanisms (Lu et al., 2020; Goyal et al., 2021), their decoupling could not be applied to our problem. We elaborate on the details of implementing Eq. (9) in the following. Learning sub-systems. Sub-systems corresponding to the latent entities seek to model relatively independent dynamics separately. Specifically, we integrate the \(\mathbf{x}_{i}\)'s into Eq. (9a) in a controlled dynamical fashion as in the state-of-the-art method (Kidger et al., 2020): \[\mathrm{d}\mathbf{z}_{i}(t)=\mathbf{F}_{i}\left(\mathbf{z}_{i}(t),\mathbf{a}(t),\left[\mathbf{z}_{1}(t),...,\mathbf{z}_{n}(t)\right]\right)\mathrm{d}\mathbf{x}_{i}(t) \tag{10}\] where \(\mathbf{F}_{i}(\cdot)\in\mathbb{R}^{q\times m}\) is a learnable vector field. Concretely, if we let \(\mathbf{z}(t)=[\mathbf{z}_{1}(t),...,\mathbf{z}_{n}(t)]\) be the tensor collecting all sub-systems, the \(i\)th sub-system in a self-attention fashion reads: \[\mathrm{d}\mathbf{z}_{i}(t)=\mathbf{F}(\left[\mathbf{A}(t)\cdot\mathbf{z}(t)\right]_{i})\mathrm{d}\mathbf{x}_{i}(t) \tag{11}\] where \([\cdot]_{i}\) takes the \(i\)th slice from a tensor. Note that the timestamp \(t\) can be arbitrary, corresponding to irregularly sampled sequential data. To address this, we follow the strategy in Kidger et al. (2020) by performing cubic spline interpolation on \(\mathbf{x}_{i}\) over the observed timestamps, resulting in \(\mathbf{x}_{i}(t)\) at dense time \(t\).
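For illustration, one Euler pass of the controlled sub-system update in Eq. (11) under cubic-spline-interpolated controls can be sketched as below; all modules are stand-ins (random linear maps for the encoders \(p_i\) and the vector field \(\mathbf{F}\), and a frozen interaction matrix \(\mathbf{A}\)), not the trained networks of DNS:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
n, q, m, k = 3, 4, 2, 6            # sub-systems, hidden size, control size, raw observation size

t_obs = np.sort(rng.uniform(0.0, 1.0, 10))          # irregular observation times
c_obs = rng.standard_normal((10, k))                # cluttered observations c(t)

P = 0.1 * rng.standard_normal((n, m, k))            # stand-in encoders: p_i(c) = P[i] @ c
W_F = 0.1 * rng.standard_normal((q * m, q))         # stand-in vector field F(v) = reshape(W_F v)

# Cubic-spline interpolation of each sub-system's control x_i(t) = p_i(c(t)).
x_splines = [CubicSpline(t_obs, c_obs @ P[i].T) for i in range(n)]

z = np.zeros((n, q))                                # hidden states z_i(0)
A = np.full((n, n), 1.0 / n)                        # interaction matrix, frozen in this sketch

t_grid = np.linspace(t_obs[0], t_obs[-1], 100)
for t0, t1 in zip(t_grid[:-1], t_grid[1:]):         # Euler steps of Eq. (11)
    mixed = A @ z                                   # [A(t) . z(t)]_i for every sub-system i
    for i in range(n):
        F_i = (W_F @ mixed[i]).reshape(q, m)        # vector field evaluated at the mixed state
        dx_i = x_splines[i](t1) - x_splines[i](t0)  # control increment over the step
        z[i] = z[i] + F_i @ dx_i
print(z.round(3))
```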
Note that for all sub-systems, different from Eq. (10), we utilize an identical function/network \(\mathbf{F}(\cdot)\) as in Eq. (11), but with different control sequences \(\mathbf{x}_{i}(t)=p_{i}(\mathbf{c}(t))\). Since in our implementation \(p_{i}(\cdot)\) is a lightweight network such as an MLP, this can significantly reduce the parameter size. Learning interactions. In our approach, interactions between latent entities are modeled separately as another meta-system. This is quite different from some related methods (Lu et al., 2020; Vuckovic et al., 2020) where sub-systems and interactions are treated as one holistic step of forward integration. For the meta-system describing the interactions in Eq. (9b), two essential components are involved: the domain \(\mathcal{S}\) and the projection operator \(\Pi\). In the context of ProjDEs, a system is constrained as \(\mathbf{a}(t)\in\mathcal{S}\) for any \(t\). In terms of interactions, a common choice of \(\mathcal{S}\) is the stochastic simplex, which can be interpreted as a transition kernel (Vuckovic et al., 2020). We also follow this setting by defining \(\mathcal{S}\) to be the set of row-wise stochastic matrices, i.e., each row lies on the \((n-1)\)-simplex: \[\mathcal{S}\triangleq\{\mathbf{A}\in\mathbb{R}^{n\times n}|\mathbf{A1}=\mathbf{1},\mathbf{A}_{ij}\geq 0\} \tag{12}\] where \(\mathbf{1}\) is a vector with all \(1\) entries. \(\mathbf{A}=\mathrm{mat}(\mathbf{a})\) is an \(n\times n\) matrix. In the sequel, we will use the notation \(\mathbf{A}\) throughout. Thus the meta-system capturing the interactions can be implemented as follows: \[\frac{d\mathbf{A}(t)}{dt}=\Pi_{\mathcal{S}}\left(\mathbf{A}(t),g(\mathbf{A}(t),\left[\mathbf{z}_{1}(t),...,\mathbf{z}_{n}(t)\right])\right) \tag{13}\] For the projection operator, we consider two versions shown in Eq. (14). In Eq. (14a), we give a row-wise projection onto the \((n-1)\)-simplex with entropic regularization (Amos, 2019), which has the well-known closed-form solution \(\mathrm{softmax}(\cdot)\) appearing in the attention mechanism. In Eq. (14b), we adopt a standard L2-induced projection identical to Eq. (7), which leads to sparse solutions (Wainwright et al., 2008). Intuitively, the projection of a point onto a simplex in terms of the L2 distance tends to lie on a facet or a vertex of the simplex, thus being sparse. \[P_{\mathcal{S}}^{\text{soft}}(\mathbf{A}_{j,:}) =\operatorname*{arg\,min}_{\mathbf{B}\in\mathcal{S}}\mathbf{A}_{j,:}^{\top}\mathbf{B}_{:,j}-\mathbb{H}^{\text{entr}}(\mathbf{B}_{:,j}) \tag{14a}\] \[P_{\mathcal{S}}^{\text{sparse}}(\mathbf{A}_{j,:}) =\operatorname*{arg\,min}_{\mathbf{B}\in\mathcal{S}}\mathbf{A}_{j,:}^{\top}\mathbf{B}_{:,j}-\mathbb{H}^{\text{gini}}(\mathbf{B}_{:,j})\] (14b) \[=\operatorname*{arg\,min}_{\mathbf{B}\in\mathcal{S}}|\mathbf{A}_{j,:}-\mathbf{B}_{:,j}|^{2}\] where \(\mathbb{H}^{\text{entr}}(\cdot)\) and \(\mathbb{H}^{\text{gini}}(\mathbf{y})=\frac{1}{2}\sum_{i}\mathbf{y}_{i}(\mathbf{y}_{i}-1)\) are the standard entropy and the Gini entropy, respectively. \(\mathbf{A}_{j,:}\) and \(\mathbf{B}_{:,j}\) are the \(j\)th row and column of \(\mathbf{A}\) and \(\mathbf{B}\), respectively. While the solution to Eq. (14a) is \(\mathrm{softmax}(\mathbf{A})\), Eq. (14b) also has a closed-form solution, shown in Appendix A.3. Comparing Eq. (14a) to the standard Euclidean projection in Eq. (14b), we note that the entropic regularization \(\mathbb{H}(\cdot)\) in Eq. (14a) allows for a smoother trajectory by projecting any \(\mathbf{A}\) into the interior of the \((n-1)\)-simplex. We visualize the two versions of projections in Eq.
(14) onto \(1\)-simplex from some random points in Fig. 1. One can readily see that Eq. (14b) is an exact projection such that points far from the simplex are projected onto the boundary. However, \(\mathrm{softmax}\) is smoother by projecting all points onto a relative interior of 1-simplex without sudden change. In the context of Bregman divergence, different distances can facilitate efficient convergence under different "L-relative smoothness" (Dragomir et al., 2021), which can potentially accelerate the learning of dynamics. We leave this to our future work. We further discuss some neural-friendly features of Eq. (14a) and (14b) facilitating the neural computation: **(1)** First, the neural computational graph can be simplified using projection Eq. (14a). Though Eq. (13) using projection Eq. (14a) defines a projected dynamical system directly on \(\mathbf{A}\), we switch to update the system using \(\mathbf{L}\) as follows, which is considered to further ease the forward integration. This is achieved by instead modeling the dynamics of the feature before fed into \(\mathrm{softmax}(\cdot)\): \[\mathbf{A}(t) =\text{Softmax}(\mathbf{L}(t)) \tag{15a}\] \[\mathbf{L}(t) =\mathbf{L}(0)+\int_{0}^{t}\frac{\mathrm{d}}{\mathrm{d}s}\frac{ \mathbf{Q}(\mathbf{z}(s))\cdot\mathbf{K}^{\top}(\mathbf{z}(s))}{\sqrt{d_{k}}} \mathrm{d}s,\] (15b) \[\mathbf{L}(t+\Delta t) =\mathbf{L}(t)+\Delta t\cdot\frac{\mathrm{d}}{\mathrm{d}s}\frac{ \mathbf{Q}(\mathbf{z}(s))\cdot\mathbf{K}^{\top}(\mathbf{z}(s))}{\sqrt{d_{k}} }\Big{|}_{s=t} \tag{15c}\] where \(\mathbf{Q}(\cdot)\) and \(\mathbf{K}(\cdot)\) correspond to the query and key in the attention mechanism, respectively. \(\mathbf{L}(0)=\mathbf{Q}(\mathbf{z}(0))\cdot\mathbf{K}^{\top}(\mathbf{z}(0) )/\sqrt{d_{k}}\). We show that updating the dynamic of \(\mathbf{L}\) following Eq. (15) is equivalent to directly updating \(\mathbf{A}\) in Appendix A.2. **(2)** Second, both the solution to projection Eq. (14b) and its gradient w.r.t. \(\mathbf{A}\) are in closed form. See Proposition 1 and Proposition 2 in Appendix A.3 for more details. This, in turn, eases the computational flow in the neural architecture with high efficiency and stability. Though only two versions of projections are discussed under Bregman divergence, we believe they are sufficiently distinguishable for analyzing the behavior of ProjDEs. For generic neural-friendly projections, we leave them to our future work. **Integration.** We employ the standard Euler's discretization for performing the forward integration by updating \(\mathbf{z}\) and \(\mathbf{A}\) simultaneously with a sufficiently small time step. We term our approach a **d**ecoupling-based **n**eural **s**ystem (**DNS**) using projection Eq. (14a) and \(\mathbf{DNS_{G}}\) using projection Eq. (14b), respectively. ## 4 Experiments We evaluate the performance of DNS on multiple synthetic and real-world datasets. More details about the dataset and implementation details can be found in Appendix A.4 and A.6. Throughout all the tables consisting of the results, "-" indicates "not applicable" since RIM cannot handle irregular cases. _Remark 2_.: In all the experiments, the input feature is treated holistically without any distinguishable parts. For example, in the Three Body dataset, the input is a 9-dimensional vector, with every 3 dimensions (coordinates) from a single object. However, this prior is not fed into any models in comparison. Thus, we do not compare to models integrated with strong prior such as (Kipf et al., 2018). 
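As an aside to the comparison in Fig. 1 below, both projection choices in Eq. (14) admit simple implementations; the sparse case here uses the standard sort-and-threshold Euclidean projection onto the simplex as a stand-in for the closed form given in the paper's Appendix A.3 (which we do not reproduce):

```python
import numpy as np

def softmax_proj(a):
    """Entropy-regularized projection of a row onto the simplex (Eq. (14a) -> softmax)."""
    e = np.exp(a - a.max())
    return e / e.sum()

def euclidean_proj_simplex(a):
    """Euclidean projection of a row onto the probability simplex (sparse, cf. Eq. (14b)).
    Standard sort-and-threshold routine, assumed equivalent to the paper's closed form."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(a)) + 1.0) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(a - tau, 0.0)

row = np.array([2.0, 0.5, -1.0])
print(softmax_proj(row))            # smooth: every entry strictly positive
print(euclidean_proj_simplex(row))  # sparse: entries can be exactly zero, e.g. [1., 0., 0.]
```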
Figure 1: Comparison of softmax and L2 projection onto a simplex. We see that the softmax projection trends to project onto the “center” of the simplex while the L2 projection trends to project onto the corner. **Baselines.** We compare DNS with several selected models capturing interactions or modeling irregular time series, including **CT-GRU**(Mozer et al., 2017) using state-decay decay mechanisms, **RIM**(Goyal et al., 2021) updating almost independent modules discretely, and **NeuralCDE**(Kidger et al., 2020) which reports state-of-the-art performance on several benchmarks. **Adapting DNS to the noisy case.** To allow DNS fitting to noisy and uncertain circumstances, we create a variant by slightly modifying it. This variant is obtained by replacing cubic spline interpolation over \(\mathbf{x}_{i}(t)\) with natural smoothing spline (Green & Silverman, 1993), in consideration of incorporating smoother controls and alleviating data noise. This version is termed as **DNS\({}_{\mathbf{S}}\)**. ### Three body The three-body problem is characterized by a chaotic dynamical system for most randomly initial conditions. A small perturbation may cause drastic changes in the movement. Taking into account the problem's complexity, it is particularly suitable for testing our approach. In this experiment, we consider a trajectory predicting problem given the noisy historical motion of three masses, where gravity causes interactions between them. Therefore, models need to (implicitly) learn both Newton's laws of motion for modeling sub-system dynamics and Newton's law of universal gravitation to decouple the latent interaction. This dataset consists of 50k training samples and 5k test samples. For each sample, 8 historical locations for the regular setting and 6 historical locations (randomly sampled from 8) for the irregular setting in the 3-dimensional space of three bodies are given to predict 3 subsequent locations. To equip with the cluttered setting, the correspondence between dimensions and bodies will not be fed into the learning models, hence a 9-dimensional observation at each time stamp. Models' performance is summarized in Table 1. We can conclude that DNS outperformed all the selected counterparts in both regular and irregular settings. Notably, although our method is built on NeuralCDE, with the decoupling, the performance can be significantly improved. See Table 5 in Appendix A.7.2 for more detailed results. **Visualization and analysis.** We visualize dynamics \(\mathbf{A}\) of DNS along the movements of three body system. See Fig. 2 for results. We set the time stamps starting from 5 to 12 to make visualization more informative. It is seen in the beginning (\(t=5,6\) or even earlier), \(\mathbf{A}\) remains stable as the three bodies are apart from each other without intensive interactions. At \(t=7\), \(\mathbf{A}\) demonstrates obvious change when two bodies start to the coil. Another body joins in this party at \(t=8\), yielding another moderate change of \(\mathbf{A}\). When flying apart, one body seems more independent, while another two keep entangled together. These are well reflected via the meta-system \(\mathbf{A}\). To further see how the holistic 9-dimensional input is decoupled into sub-systems \(\mathbf{z}_{i}\), we visualize the sub-system focus in Fig. 3 (also see Appendix A.1.1). Interestingly, latent entities (sub-systems) do not correspond to physical entities (three bodies). 
Instead, the first sub-system puts more focus on the whole input, while the remaining two sub-systems concentrate on the x-axis and the y/z-axes, respectively. Though counterintuitive, this unexpected decoupling exhibits good performance. We will investigate how to decouple physical entities from cluttered observations in our future work. \begin{table} \begin{tabular}{l c c} \hline Model & Regular & Irregular \\ \hline CT-GRU & 1.8272 & 2.4811 \\ NeuralCDE & 3.3297 & 5.0077 \\ RIM & 2.4510 & - \\ \hline DNS & **1.7573** & **2.2164** \\ \hline \end{tabular} \end{table} Table 1: **Trajectory prediction**. MSE loss on the three body dataset (\(\times 10^{-2}\)). Figure 3: The focus of the 3 sub-systems on the 9-dimensional input of Three Body. The strength of focus is reflected by the thickness of the lines. Figure 2: The corresponding three-body trajectory (top), as well as the evolution over time of the interactions (bottom) between three **latent sub-systems** in a Three-Body environment. Time stamps from 5 to 12. ### Spring We examine the capability of DNS to decouple independent sub-systems in complex dynamics governed by simple physical rules. We use a simulated system in which particles are connected by (invisible) springs (Kuramoto, 1975; Kipf et al., 2018). Each pair of particles has an equal probability of having an interaction or not. Our task is to use the observed trajectories to predict whether there is a spring between any pair of particles, which is analogous to the task of link prediction under a dynamical setting. This can be inferred from whether two trajectories change coherently. The spring dataset consists of 50k training examples and 10k test examples. Each sample has a length of 49. We test a variety of combinations of the number of sub-systems and dimensions of the hidden state. Experimental results are in Table 2. To test the models' noise resistance, we add Gaussian noise to the spring dataset and obtain the noisy spring dataset. We set two scenarios, "Train&Test" and "Test", corresponding to injecting noise at both training and test phases and only at the testing phase, respectively. Experimental results are also in Table 2. **Clean Spring.** From the Clean part of Table 2, we see that variants of DNS stably outperform all the selected counterparts by a large margin. In particular, under irregularly sampled data, DNS and DNS\({}_{\text{G}}\) maintain a remarkable performance gap over all other methods and remain as reliable as in the regular setting. We believe this is significant since learning from irregularly sampled data is typically much more difficult than learning from normal data. **Noisy Spring.** According to the Noisy part of Table 2, DNS\({}_{\text{S}}\) is quite reliable in noisy cases. It seems a smoothing procedure on the controls can be helpful under massive uncertainty. Also, we see that adding noise tends to damage the performance of all methods. This also raises one of our future research directions: investigating how to handle different controls. Even without the smoothing spline, DNS still performs well, which indicates that, by decoupling, the model focuses on learning latent interaction patterns, and such patterns are less susceptible to noise. **Visualization and analysis.** We also visualize the meta-system state \(\mathbf{A}\) over time in Fig. 4 for Spring. 
From top to bottom, the first, second and third rows correspond to the trajectory of particles, meta-system state of DNS, and meta-system state of DNS\({}_{\text{G}}\). One interesting thing we note is that the interactions in DNS\({}_{\text{G}}\) almost concentrate on the starting portion of all the time stamps. At \(t=8\) and after, there is no interaction at all. Though not obvious, this also happens to DNS in the sense that \(\mathbf{A}\) tends to be diagonal. We suppose this is because DNS and DNS\({}_{\text{G}}\) only need a portion of data from the start to determine the existence of a link rather than looking into all the redundant time stamps. **Short Spring.** We thus verify this by training and testing both variants with 50% and 25% of data cropped from the starting time stamp and summarize results in Short part of Table 2. It is seen that incomplete data in this task only slightly impact the performance. And this can be surprisingly reflected in the evolution of meta-systems. This also aligns with the intuition that _Link prediction_ needs fewer data than _Trajectory prediction_ as in Three Body. **Ablation study.** Since our method merely incorporates an extra meta-system and a control encoder for modeling the interaction compared to standard NeuralCDE, we conduct experiments under different settings to see how different encoders and hidden state dimensions can contribute to improving NeuralCDE. To ensure fairness, we cast a 2-layer MLP with different output sizes (2 and 16 times of input size) as in DNS to obtain varying sizes of controls. Results \begin{table} \begin{tabular}{l c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Noisy} & \multicolumn{2}{c}{Short} \\ \cline{2-7} & Regular & Irregular & Train\&Test & Test & 50\% & 25\% \\ \hline CT-GRU & 92.89\(\pm\)0.52 & 88.47\(\pm\)0.34 & 92.71\(\pm\)0.55 & 92.80\(\pm\)0.53 & - & - \\ NeuralCDE & 92.47\(\pm\)0.06 & 89.74\(\pm\)0.18 & 90.76\(\pm\)0.08 & 89.61\(\pm\)0.09 & - & - \\ RIM & 89.73\(\pm\)0.07 & - & 89.65\(\pm\)0.14 & 89.64\(\pm\)0.10 & - & - \\ \hline DNS\({}_{\text{G}}\) & 94.31\(\pm\)0.48 & **94.25\(\pm\)0.29** & - & - & 92.58 & 92.31 \\ DNS\({}_{\text{S}}\) & - & - & **93.67\(\pm\)0.57** & **92.99\(\pm\)1.30** & - & - \\ DNS & **94.44\(\pm\)0.69** & 93.60\(\pm\)1.21 & 93.42\(\pm\)1.05 & 89.56\(\pm\)0.42 & 91.11 & 92.13 \\ \hline \hline \end{tabular} \end{table} Table 2: **Link prediction**. Accuracy on Spring (%). Clean, Noisy, and Short correspond to settings with clean, noisy, and short portion data, respectively. Detailed results for Clean and Noisy are separately summarized in Tab. 7 and Tab. 8 in the appendix. \begin{table} \begin{tabular}{l c} \hline \hline Control & Accuracy (\%) \\ \hline No encoding & 91.57 \\ MLP(2\(\times\)input) & 91.51 \\ MLP(16\(\times\)input) & 91.17 \\ DNS (8\(\times\)MLP(2\(\times\)input)) & **95.38** \\ \hline \hline \end{tabular} \end{table} Table 3: **Link prediction**. Ablation study. (%). are summarized in Table 3 (detailed in Tab. 6). We see that with an extra control encoder, there is no obvious performance difference among these settings. However, once the interaction meta-system is imposed, DNS can achieve quite significant performance gain. This, in turn, shows the necessity of the proposed meta-system for explicitly modeling the evolving interactions. ### Human actions The recognition of human actions dataset contains three types of human actions, which are hand clapping, hand waving, and jogging (Schuldt et al., 2004). 
For this dataset, we consider the limbs of the character as subsystems. When the character does one kind of action, subsystems interact in a specific pattern. We test the performance of all the selected models with the learnable backbone Resnet18 (He et al., 2016). We also test the compatibility of all methods with different dynamical ranges: Norm and Unnorm indicate pixel value in \([0,1]\) and \([0,255]\), respectively. Experimental results are summarized in Table 4. DNS consistently outperforms all other methods, and exhibits strong compatibility to drastically changed ranges under Unnorm setting. Thus it is potentially more flexible to be integrated into various tasks with a large dynamical range (e.g., earthquake). To view how the decoupling works for video recognition tasks, we visualize the strength of the learned parameters by mapping the 128-D feature into 6 latent sub-systems in Figure 5 with re-ordered indices for better view. It can be seen that there are some obvious latent structures in the grouping of the parameters 128-D control to the system. Each sub-system mainly focuses on a small portion of the control, based on which we can infer that each sub-system models different components in inputted images. ## 5 Conclusion In this paper, we propose a method for modeling cluttered and irregularly sampled sequential data. Our method is built upon the assumption that complex observation may be derived from relatively simple and independent latent sub-systems, wherein the interactions also evolve over time. We devise a strategy to explicitly decouple such latent sub-systems and a meta-system governing the interaction. Inspired by recent findings of projected differential equations and the tool of Bregman divergence, we present a novel interpretation of our model and pose some potential future directions. Experiments on various tasks demonstrate the prominent performance of our method over previous state-of-the-art methods. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Norm} & \multicolumn{2}{c}{Unnorm} \\ \cline{2-4} & IrReg & Reg & IrReg \\ \hline CT-GRU & 67.30\(\pm\)6.19\({}^{\dagger}\) & 60.33\({}^{\ddagger}\) & 66.67\({}^{\ddagger}\) \\ NeuralCDE & 89.73\(\pm\)3.38\({}^{\dagger}\) & 70.33\({}^{\ddagger}\) & 59.17\({}^{\ddagger}\) \\ RIM & - & 55.50\({}^{\ddagger}\) & - \\ \hline DNS & **91.35\(\pm\)3.48\({}^{\dagger}\)** & **97.00\({}^{\ddagger}\)** & **95.33\({}^{\ddagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 4: **Video classification**. Accuracy of the human actions dataset (\(\%\)). Norm and Unnorm refer to normalized and un-normalized inputs, respectively. Detailed results with superscript \({}^{\dagger}\) and \({}^{\ddagger}\) are in Tab. 10 and Tab. 11, respectively. Figure 4: Visualization of the evolution of the meta-systems of DNS and DNS\({}_{\text{G}}\) on Spring dataset. On each time stamp \(t\), from top to bottom, we show the trajectory of the 5 balls, the meta-system state of DNS, and the meta-system state of DNS\({}_{\text{G}}\), respectively. Figure 5: A figure showing the importance of each feature vector entry for subsystems
2304.05469
CamDiff: Camouflage Image Augmentation via Diffusion Model
The burgeoning field of camouflaged object detection (COD) seeks to identify objects that blend into their surroundings. Despite the impressive performance of recent models, we have identified a limitation in their robustness, where existing methods may misclassify salient objects as camouflaged ones, despite these two characteristics being contradictory. This limitation may stem from lacking multi-pattern training images, leading to less saliency robustness. To address this issue, we introduce CamDiff, a novel approach inspired by AI-Generated Content (AIGC) that overcomes the scarcity of multi-pattern training images. Specifically, we leverage the latent diffusion model to synthesize salient objects in camouflaged scenes, while using the zero-shot image classification ability of the Contrastive Language-Image Pre-training (CLIP) model to prevent synthesis failures and ensure the synthesized object aligns with the input prompt. Consequently, the synthesized image retains its original camouflage label while incorporating salient objects, yielding camouflage samples with richer characteristics. The results of user studies show that the salient objects in the scenes synthesized by our framework attract the user's attention more; thus, such samples pose a greater challenge to the existing COD models. Our approach enables flexible editing and efficient large-scale dataset generation at a low cost. It significantly enhances COD baselines' training and testing phases, emphasizing robustness across diverse domains. Our newly-generated datasets and source code are available at https://github.com/drlxj/CamDiff.
Xue-Jing Luo, Shuo Wang, Zongwei Wu, Christos Sakaridis, Yun Cheng, Deng-Ping Fan, Luc Van Gool
2023-04-11T19:37:47Z
http://arxiv.org/abs/2304.05469v1
# CamDiff: Camouflage Image Augmentation via Diffusion Model ###### Abstract The burgeoning field of camouflaged object detection (COD) seeks to identify objects that blend into their surroundings. Despite the impressive performance of recent models, we have identified a limitation in their robustness, where existing methods may misclassify salient objects as camouflaged ones, despite these two characteristics being contradictory. This limitation may stem from lacking multi-pattern training images, leading to less saliency robustness. To address this issue, we introduce **CamD-iff**, a novel approach inspired by AI-Generated Content (AIGC) that overcomes the scarcity of multi-pattern training images. Specifically, we leverage the latent diffusion model to synthesize salient objects in camouflaged scenes, while using the zero-shot image classification ability of the Contrastive Language-Image Pre-training (CLIP) model to prevent synthesis failures and ensure the synthesized object aligns with the input prompt. Consequently, the synthesized image retains its original camouflage label while incorporating salient objects, yielding camouflage samples with richer characteristics. The results of user studies show that the salient objects in the scenes synthesized by our framework attract the user's attention more; thus, such samples pose a greater challenge to the existing COD models. Our approach enables flexible editing and efficient large-scale dataset generation at a low cost. It significantly enhances COD baselines' training and testing phases, emphasizing robustness across diverse domains. Our newly-generated datasets and source code are available at [https://github.com/drlxj/CamDiff](https://github.com/drlxj/CamDiff). ## 1 Introduction Camouflage is a predatory strategy that has evolved in natural objects through biological adaptation [4]. Visually, organisms alter the appearance of their bodies to match their surroundings, making them difficult to detect at first glance. Motivated by this phenomenon, a recent field of research called camouflage object detection (COD) [8, 12, 14] has gained significant attention from the computer vision community [16, 48, 55]. This area of study has broad applications, including medical image diagnosis and segmentation [35, 46, 7], species discovery [33], and crack inspection [10]. In the literature, several works [9, 7, 32] directly extend several well-developed salient object detection (SOD) for COD tasks. However, it is noteworthy that salient and camouflaged objects are two contrasting object categories. The greater the level of saliency, the lower the degree of camouflage, and vice versa [22]. Hence, different strategies are imperative for detecting these two distinct object types. SOD models are based on both global and local contrasts, whereas COD models aim to avoid regions of high saliency. Unfortunately, our experiments (see Sec. 4) reveal a decline in the accuracy of current COD methods when both salient and camouflaged objects co-exist in an image. As Fig. 1 illustrates, we tested the robustness of several state-of-the-art (SOTA) COD methods on salient objects. Nevertheless, many of these methods misclassified the salient objects as camouflaged ones. These results indicate that the current COD models are not robust enough regarding the scenes containing salient objects. Specifically, the algorithms employed by PFNet [28] and ZoomNet [32] detect only the more salient object (the yellow ball) and neglect the less salient object (the green ball). 
Thus, we speculate that existing COD works may only learn to distinguish the foreground and background rather than the camouflage and saliency patterns. This underscores the necessity of further research in COD to gain insight into the camouflage pattern and make COD methods truly effective. To distinguish the saliency and camouflage patterns, one straightforward idea is to train the network via contrastive learning, which has demonstrated its effectiveness in other vision tasks [5, 54, 19]. As suggested in [3, 44, 20], strong data augmentation can significantly benefit contrastive learning, leading to effective feature representation modeling. However, generating positive and negative pairs as contrastive samples is not feasible in our setup due to the lack of salient objects in conventional camouflage datasets. Furthermore, existing COD datasets mainly contain a single object, making the direct extension of contrastive learning infeasible. Besides, collecting and annotating a new dataset containing camouflaged and salient objects within a single image would be time-consuming and labor-intensive. In this study, we aim to enhance the robustness of future COD models regarding salient objects. To achieve this objective, we propose augmenting contrastive samples in the training data by leveraging the recent diffusion model [38, 13] as a form of data augmentation to generate synthetic images. This approach is inspired by the success of AI-Generated Content (AIGC) [2, 37] and large-scale generative models. While some recent attempts have been made to utilize diffused images for data augmentation, these efforts are only feasible for more common scenarios such as daily indoor scenes [15] or urban landscapes [24], where the domain gap is small. By contrast, we are specifically interested in camouflage scenes, which are rare and challenging for pre-trained diffusion models. These differences make our task of synthesizing multi-pattern images with large domain gaps very challenging, and to the best of our knowledge, it has not been addressed in camouflage settings. In addition, existing works [17] rely on additional frozen-weight deep networks to generate pseudo labels as supervision, limiting their performance and application. These limitations motivate us to design a novel framework that generates realistic salient objects within camouflaged scenes. Our approach differs from concurrent diffusion-augmentation methods [1, 52] regarding (a) the non-negligible domain gap and (b) the preserved camouflage label. To address the target problem, in this work, we propose a diffusion-based adversarial generation framework, **CamDiff**. Specifically, our method consists of a generator and a discriminator. The generator is a frozen-weight Latent Diffusion Model (LDM) [38] that has been trained on a large number of categories, making it possible to synthesize the most salient objects at scale. For the discriminator, we adopt the Contrastive Language-Image Pre-training (CLIP) model [36] for its generality. Our discriminator compares the input prompt and the synthesized object to ensure semantic consistency. To preserve the original camouflage label, we only add the generated salient object to the background, _i.e_., outside of the ground truth (GT) label. Therefore, our CamDiff effectively transforms the problem into an inpainting task, without requiring any additional labeling cost. 
In such a way, we can effectively and easily enable customized editing, hence advancing the development of COD from the data-driven perspective. Our main contributions are summarized as follows: * We introduce CamDiff, which generates salient objects on top of camouflage scenes while preserving the original label. This framework facilitates collating and combining contrastive patterns within realistic images without incurring extra costs related to learning and labeling. * We conduct experiments to test the robustness of the SOTA COD methods on the COD test sets (_i.e_., Diff-COD), which are created from the original COD testing sets using CamDiff. Our results indicate that the current COD methods are not sufficiently robust to saliency. * To improve the resilience of current COD methods against saliency, we generate a novel training set, called the Diff-COD training set, from the original COD training sets using CamDiff. Our experimental results demonstrate that training the existing COD models on this new training set can enhance their robustness to saliency. Overall, our research provides a fresh perspective on the notion of _camouflage_, and our newly introduced camouflage synthesis tool will serve as a foundation for advancing this rapidly growing field. ## 2 Related Works ### Diffusion Models Diffusion models [13, 38] are generative models that generate samples from a distribution by learning to gradually remove noise from data points. Recent research [6] shows that diffusion models outperform Generative Adversarial Networks (GANs) [11] in high-resolution image generation tasks without the drawbacks of mode collapse [31] and unstable training [30], and achieve unprecedented results in conditional image generation [37]. Therefore, they have been applied in many domains such as text-to-image and guided synthesis [29, 34], 3D shape generation [56, 26], molecule prediction [45], video generation [51] and image inpainting [38]. Some researchers have studied the diffusion model for image inpainting. For example, Meng _et al._ [29] have found that diffusion models can not only fill regions of an image but can also accomplish it conditioned on a rough sketch of the image. Another study [39] concludes that diffusion models can smoothly fill regions of an image with realistic content without edge artifacts when trained directly on the inpainting task. Figure 1: Visualization results with current COD models tested on an image with salient objects. As the object is salient, the ground truth (GT) should be all-black for the COD task. Nonetheless, the existing COD methods are less robust to scenes with salient objects, especially PFNet and ZoomNet. ### Camouflage Object Detection Camouflage object detection (COD) detects a concealed object within an image. Several works (_e.g._, SINet [9], UGTR [50], ZoomNet [32]) have focused on the comparison of COD with SOD and concluded that simply extending SOD models to solve the COD task cannot bring the desired results because the target objects have different attributes, _i.e._, concealed or prominent. To detect concealed objects, many methods have been proposed recently. For example, some methods utilize a multi-stage strategy to address the concealment of camouflaged images. SINet [9] is the first multi-stage method to locate and distinguish camouflaged objects. Another multi-stage method is SegMar [18], which localizes objects and zooms in on possible object regions to progressively detect camouflaged objects. 
In addition, multi-scale feature aggregation is the second main strategy that has been used in many methods, such as CubeNet [57], which integrates low-level and high-level features by introducing X connection and attention fusion, as well as ZoomNet [32], which processes the input images at three different scales to fully explore imperceptible clues between the candidate objects and the background surroundings. A detailed review of COD models is out of the scope of this work; we refer readers to recent top-tier works [16, 14]. ### Camouflage Image Generation Although generating camouflage images has received limited attention, a few notable works exist in this area. One of the earliest methods, proposed in 2010, relies on hand-crafted features [4]. Zhang _et al._ [53] have recently proposed a deep learning-based approach for generating camouflaged images. Their method employs iterative optimization and an attention-aware camouflage loss to selectively mask out salient features of foreground objects, while a saliency map ensures these features remain recognizable. However, the slow iterative optimization process limits the practical application of their method. Moreover, the style transfer of the background image to the hidden objects can often result in noticeable appearance discontinuities, leading to visually unnatural synthesized images. To overcome these limitations, Li _et al._ [23] have proposed a Location-Free Camouflage Generation Network. Although this method outperforms the previous approach [53] in terms of visual quality, it may fail to preserve desired foreground features or make objects identifiable using the saliency map in certain cases. In summary, existing methods all follow the same strategy to produce camouflage images: they use two separate images to represent the foreground and the background, and they attempt to directly integrate the foreground image with the background image by finding a place where the foreground object is hard to detect within the synthesized image. ## 3 Proposed CamDiff ### Overall Architecture To evaluate the effectiveness of existing camouflage object detection (COD) methods on negative samples (_i.e._, scenes with salient objects), we suggest creating synthetic salient objects on top of current camouflage datasets. Normally, when a task-specific model is trained with COD datasets, it should effectively detect the camouflaged samples, while being robust and not detecting the synthesized salient ones. Therefore, such an approach allows us to thoroughly investigate whether a learning-based COD method can accurately distinguish between camouflaged and salient objects. To achieve this objective, we propose a new generation network called CamDiff, which is built upon existing COD datasets. Since these datasets already contain camouflaged objects with corresponding camouflage ground truth masks, our aim is to add synthesized salient objects into the background. By doing so, we can maintain the original camouflage labels and leverage them while also introducing salient samples that have contrasting characteristics. Fig. 2 illustrates the overall architecture of our proposed method. We start with a COD dataset, which provides us with a source image and its corresponding ground truth (GT). Using the GT, we identify the bounding box with the minimum coverage area, so as to prevent CamDiff from altering the camouflaged object. 
Next, we divide the source image into nine areas via grid lines, using the bounding box to preserve the area where the camouflaged object is placed. Only eight of the areas are available for input into CamDiff. We randomly select one of these regions and cut it out from the source image, covering a specific proportion (_e.g._, 75% as the default setting in our experiments) of the total area from the center. We then feed the masked image into the generation network, and CamDiff generates a salient object within the masked area. Finally, we place the selected region back into its original location within the source image. In such a manner, we can not only preserve the GT labels for camouflaged objects but also add contradictory synthesized salient samples. To generate the salient object, we propose a generation framework based on the Generative Adversarial Network (GAN) architecture. Specifically, we utilize the widely-acknowledged Latent Diffusion Model (LDM) as the generator and the Contrastive Language-Image Pre-Training (CLIP) model as the discriminator. As shown in Fig. 2, the input to our framework is an image with the previously-masked region, along with a text prompt that describes the target object. This masked region and text prompt are then fed into the generator. Based on the prompt, the LDM block generates the target object on top of the masked region. The filled-up region is then sent to the discriminator to determine if it matches the input prompt. If not, the generator adjusts the seed to generate a new salient object. The objective is for the generation network to only produce validated images, i.e., images for which the discriminator predicts a high probability of matching the input prompt. Our framework transforms the image generation task into an inpainting task, and thus requires a mask to cover the selected region. The mask generation process is explained in Algorithm 1. The mask is designed to cover a certain percentage of the selected region to avoid artifacts when blending the synthesized object with the source image. The ratio of the masked area to the region area is set to a constant, \(RATIO_{MASK}\). The size of the selected region is crucial for the inpainting task, as it can affect the quality of the generated salient object. If the region is too small, the LDM may fill the background instead of the object, while if it is too large, the salient object may be much larger than the concealed object, misleading COD methods. Therefore, we set an upper bound (\(RATIO_{MAX}\)) and a lower bound (\(RATIO_{MIN}\)) for the ratio between the region area and the total area of the source image. The values for these parameters are listed in Tab. 1. ### Latent Diffusion Model (LDM) We use the LDM [38], which is pre-trained on a large-scale dataset, as our generator's base model. The LDM is a two-stage method that consists of an autoencoding model to learn the latent representation of an image and a Denoising Diffusion Probabilistic Model (DDPM) [13]. In the first stage, the autoencoding model is trained to learn a space that is perceptually equivalent to the image space. The encoder \(\mathcal{E}\) encodes the given image \(x\in\mathbb{R}^{H\times W\times 3}\) to the latent representation \(z\in\mathbb{R}^{H\times W\times C}\) so that \(z=\mathcal{E}(x)\), while the decoder \(\mathcal{D}\) reconstructs the estimated image \(\tilde{x}\) from the latent representation, such that \(\tilde{x}=\mathcal{D}(\tilde{z})\) and \(\tilde{x}\approx x\). \begin{table} \begin{tabular}{c|c} \hline Parameter & Value \\ \hline \(RATIO_{MIN}\) & 6.25\% \\ \(RATIO_{MAX}\) & 25\% \\ \(RATIO_{MASK}\) & 75\% \\ \hline \end{tabular} \end{table} Table 1: Hyperparameters setting. 
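Algorithm 1 itself is not reproduced in this extraction, so the following minimal Python sketch illustrates one plausible reading of the region selection and masking procedure described above, using the ratio values from Tab. 1. It is our illustration rather than the authors' released code, and the helper name `select_region_and_mask` is hypothetical:

```python
import random
import numpy as np

RATIO_MIN, RATIO_MAX, RATIO_MASK = 0.0625, 0.25, 0.75  # values from Tab. 1

def select_region_and_mask(h, w, gt_mask, rng=random):
    """Split the image into 9 areas using the GT bounding box as grid lines,
    pick one of the 8 areas that do not contain the camouflaged object, and
    mask RATIO_MASK of that area from its center for LDM inpainting."""
    ys, xs = np.where(gt_mask > 0)
    y0, y1 = ys.min(), ys.max() + 1            # tight GT bounding box
    x0, x1 = xs.min(), xs.max() + 1
    row_edges, col_edges = [0, y0, y1, h], [0, x0, x1, w]
    candidates = []
    for i in range(3):
        for j in range(3):
            if (i, j) == (1, 1):               # central cell holds the object
                continue
            cy0, cy1 = row_edges[i], row_edges[i + 1]
            cx0, cx1 = col_edges[j], col_edges[j + 1]
            ratio = (cy1 - cy0) * (cx1 - cx0) / float(h * w)
            if RATIO_MIN <= ratio <= RATIO_MAX:  # region-size bounds from Tab. 1
                candidates.append((cy0, cy1, cx0, cx1))
    cy0, cy1, cx0, cx1 = rng.choice(candidates)  # assumes at least one valid cell
    ch, cw = cy1 - cy0, cx1 - cx0
    mh, mw = int(ch * RATIO_MASK ** 0.5), int(cw * RATIO_MASK ** 0.5)
    mask = np.zeros((h, w), dtype=np.uint8)
    my0, mx0 = cy0 + (ch - mh) // 2, cx0 + (cw - mw) // 2
    mask[my0:my0 + mh, mx0:mx0 + mw] = 1         # area handed to the inpainting LDM
    return (cy0, cy1, cx0, cx1), mask
```

Here the masked area covers 75% of the selected region by scaling each side by \(\sqrt{0.75}\); the resulting binary mask, together with the text prompt, is what the frozen LDM inpainter and the CLIP discriminator operate on.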
Figure 2: Our CamDiff consists of a generator and a discriminator. The input of CamDiff is a pair of a masked image and a text prompt. Only after the discriminator judges the synthesized object to be consistent with the text input is the synthesized image output and placed back into the source image. The white star in the source image means that region (8) is selected as the masked region. In the second stage, the DDPM is trained to generate the latent representation within the pre-trained latent space based on a random Gaussian noise input \(z_{t}\). The neural backbone \(\epsilon_{\theta}(z_{t},t)\) of the LDM is realized as a time-conditional UNet, and the objective of the DDPM trained on latent space is simplified as: \[L_{DM}:=\mathbb{E}_{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t}\left[\|\epsilon-\epsilon_{\theta}(z_{t},t)\|_{2}^{2}\right]. \tag{1}\] ### Conditioning LDM To control the image synthesis, the conditional LDM implements a conditional denoising autoencoder \(\epsilon_{\theta}(z_{t},y,t)\) through inputs \(y\) such as text, semantic maps, or other image-to-image translation tasks [38]. The proposed CamDiff exploits this ability to control image synthesis through text input. To turn DDPMs into more flexible conditional image generators, their underlying UNet backbone is augmented with the cross-attention mechanism. The embedding sequence \(\tau_{\theta}(y)\in\mathbb{R}^{M\times d_{\tau}}\) from the CLIP ViT-L/14 encoder is fused with latent feature maps via a cross-attention layer implemented as: \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)\cdot V, \tag{2}\] where \(Q=W_{Q}^{(i)}\cdot\varphi_{i}(z_{t})\), \(K=W_{K}^{(i)}\cdot\tau_{\theta}(y)\), \(V=W_{V}^{(i)}\cdot\tau_{\theta}(y)\), and \(\varphi_{i}(z_{t})\) is an intermediate representation of the UNet implementing \(\epsilon_{\theta}\). \(W_{Q}^{(i)}\), \(W_{K}^{(i)}\), and \(W_{V}^{(i)}\) are learnable projection matrices. The objective of the conditional LDM is converted from Eqn. 1 to: \[L_{CDM}:=\mathbb{E}_{\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t}\left[\|\epsilon-\epsilon_{\theta}(z_{t},t,\tau_{\theta}(y))\|_{2}^{2}\right]. \tag{3}\] ### CLIP for Zero-Shot Image Classification To improve the quality of generated objects based on text input, it is necessary to use a discriminator that can assess the consistency of the generated objects with the text prompt. However, since the text prompt can be any arbitrary class, traditional classifiers that only recognize a fixed set of object categories are unsuitable for this task. Therefore, CLIP models offer a better option. The CLIP model comprises an image encoder and a text encoder. The image encoder can employ various computer vision architectures, including five ResNets of varying sizes and three vision transformer architectures. Meanwhile, the text encoder is a decoder-only transformer that uses masked self-attention to ensure that the transformer's representation for each token in a sequence depends solely on the tokens that appear before it. This prevents any token from looking ahead to inform its representation. Both encoders undergo pretraining to align similar text and images in vector space. 
This is achieved by taking image-text pairs and pushing their output vectors closer in vector space while separating the vectors of non-pairs. The CLIP model is trained on a massive dataset of 400 million text-image pairs already publicly available on the internet. ## 4 Experiments ### Experimental Setup **Datasets.** To synthesize multi-pattern images for the COD task, we selected four widely-used COD datasets: CAMO [21], CHAM [42], COD10K [9], and NC4K [27]. It should be noted that the COD10K dataset provides semantic labels as filenames. Therefore, we used the label directly as the text prompt. Some prompts are shown in Fig. 2, which lists the classes. However, the list of classes is not directly available for the other three datasets. Since they contain common animal species such as birds, cats, dogs, _etc._, we randomly chose a text prompt from the COD10K label list. **Baselines.** To evaluate the robustness of existing COD methods to both salient and camouflaged objects, we selected four representative and classical COD methods: SINet [9], PFNet [28], C2FNet [43], and ZoomNet [32], as our baselines. It is worth noting that since our paper submission, several new SOTA models have emerged, including FSPNet [16] and EVP model [25]. However, this paper aims to explore new mechanisms for detecting camouflage patterns, and thus comprehensive testing of all models falls beyond the scope of this article. **Evaluation Metrics.** To assess the quality of the synthesized image, we employed Inception Scores [40]. For COD models, we follow previous works [8, 48] and evaluate the performance using conventional metrics: Mean Absolute Error (\(M\)), max F-measure (\(F_{m}\)), S-measure (\(S_{m}\)), and max E-measure (\(E_{m}\)). **Implementation Details.** Our implementation of CamDiff is realized in the Pytorch framework, with hyperparameters related to mask generation specified in Tab. 1. The whole learning process is executed on a 2080Ti GPU. We followed the conventional train-test split [9, 8, 57, 32], using a training set of 4,040 images from COD10K and CAMO. Among these training samples, we replaced 3,717 images with our synthesized multi-pattern images. The original testing samples comprised 6,473 images from CAMO, CHAM, COD10K, and NC4K. To form our Diff-COD testing set, we replaced 5,395 images with our generated images. Although we cannot entirely replace the camouflage dataset since some images contain specific objects that the diffusion model may not generate well using the pre-trained weights, our success rate remains high. Specifically, over 92% of the training images and 83% of the testing images can be modified with extra salient patterns. This high success rate confirms the effectiveness of our generation framework. Note that we resized the images and masks to \(512\times 512\) to meet the requirements of the LDM. ### Quality of Synthesized Images **Inception Score.** To prove that our CamDiff can generate a prominent object rather than a concealed object, we choose the inception score as the evaluation metric and evaluate it on the SOD datasets [47, 41, 49], COD datasets [21, 42, 9, 27], and our generated dataset with multi-pattern images. Tab. 2 shows that the original SOD datasets have a higher inception score than the original COD dataset, which aligns with our expectations. The rationale behind the Inception Score is that a well-synthesized image should contain easily recognizable objects for an off-the-shelf recognition system. 
The recognition system is more likely to detect prominent objects than camouflaged ones. As a result, images with multiple patterns tend to have a higher Inception Score than those with only camouflaged patterns. By comparing the Inception Score before and after the modification, we can easily evaluate the effectiveness of our framework. Upon replacing images in the COD dataset with multi-pattern images, it is evident that the Inception Score has increased across all COD datasets. This indicates that we have successfully incorporated prominent patterns on top of the original COD datasets. **User Study.** We conducted a user study to further evaluate the synthesized images' quality. Participants were given a subset of our synthesized images along with their corresponding labels (_e.g_., "Butterfly" in Fig. 3) and were asked to circle the object they detected first based on the label. The object chosen by the user was considered the most prominent since it attracted the most human attention. The results of our user study, with over 10 participants, showed that the average rate of users choosing the synthesized object, _i.e_., the salient one, was 98%. This indicates that the synthesized objects are more prominent and easier to detect than the original objects in the images. Overall, the increased Inception Score and positive results from the user study support our claim that CamDiff generates prominent objects rather than concealed ones in the synthesized images. In addition, CamDiff has demonstrated its robust capability to generate diverse objects and variations in posture for a single object type. Fig. 4 provides examples of various classes of synthesized images, each of which can be extended to generate three additional images of the same class. \begin{table} \begin{tabular}{l l l r} \hline \hline & Dataset & Type & Inception Score \(\uparrow\) \\ \hline SOD & DUTS-TE & orig. & 71.63 \\ & ECSSD & orig. & 24.40 \\ & XPIE (Salient) & orig. & 96.79 \\ & XPIE (Not Salient) & orig. & 13.96 \\ \hline COD & CAMO & orig. & 6.61 \\ & CAMO & new & 9.90 \\ & CHAM & orig. & 4.38 \\ & CHAM & new & 5.98 \\ & COD10K & orig. & 7.00 \\ & COD10K & new & 14.85 \\ & NC4K & orig. & 7.00 \\ & NC4K & new & 12.87 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the generated dataset with the original COD and SOD datasets. The type "orig." means the original dataset, while the type "new" means the synthesized dataset based on the corresponding COD dataset. Figure 3: In the user study, the solution involved presenting the synthesized object within a green box, while the original object within the image was enclosed in a red box. The study results indicate that users were more likely to circle the objects in the green box, highlighting the synthesized objects as more prominent and easier to detect than the original objects within the images. ### Quantitative Comparison In this section, we present quantitative experiments evaluating SOTA COD methods on the synthesized samples generated by our CamDiff. Tab. 3 shows the performance of pretrained models on original and generated testing samples; Tab. 4 compares the performance of models trained with original COD images and with our generated training samples; Tab. 5 presents the robustness analysis on SOD datasets. 
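For reference, the Inception Score used in the previous subsection is computed from the class-probability outputs of an off-the-shelf recognition model (e.g., Inception-v3) as the exponential of the average KL divergence between per-image predictions and the marginal class distribution. The snippet below is a minimal sketch of that formula, our illustration only; the paper does not detail its exact implementation:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) array of softmax class probabilities p(y|x), one row per image.
    IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), where p(y) is the marginal over images."""
    p_y = probs.mean(axis=0, keepdims=True)                                   # marginal p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)    # per-image KL
    return float(np.exp(kl.mean()))

# Confident and diverse predictions -> high score; uniform predictions -> low score
confident = np.eye(10)                 # each image assigned a different class with certainty
uniform = np.full((10, 10), 0.1)
print(inception_score(confident))      # ~10.0
print(inception_score(uniform))        # ~1.0
```

This matches the rationale stated above: easily recognizable (salient) objects yield confident, diverse class predictions and hence higher scores than camouflaged ones.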
**Pretrained Weights Setting.** We created a new Diff-COD dataset to evaluate existing COD methods' effectiveness on images containing salient and camouflaged objects. This dataset includes both types of images, and we trained four SOTA COD methods (SINet [9], PFNet [28], C2FNet [43], and ZoomNet [32]) on the Diff-COD training set. We then evaluated their performance on the Diff-COD testing set. It's important to note that the pre-trained LDM (low-level dense module) block can only output images with a resolution of \(512\times 512\). This resolution is suitable for most existing methods trained with a resolution less than \(352\times 352\). However, the current SOTA method, ZoomNet [5], requires a main resolution of \(384\times 384\) and an additional higher resolution with a scale of 1.5 (\(576\times 576\)), which is larger than the capacity of the LDM model. To ensure a fair comparison, we retrained ZoomNet with a main scale \begin{table} \begin{tabular}{l l|c|c|c|c} \hline \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{Freezed} & \multicolumn{1}{c|}{SINet [9]} & \multicolumn{1}{c|}{PFNet [28]} & \multicolumn{1}{c|}{C2FNet [43]} & \multicolumn{1}{c}{ZoomNet [32]} \\ \hline \multicolumn{2}{c|}{} & \(M\downarrow\) &.099 &.085 &.079 &.066 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.762 & 793 &.802 &.832 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.751 &.782 &.796 &.819 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.790 &.845 &.856 &.881 \\ \multicolumn{2}{c|}{} & \(M\downarrow\) &.130 &.122 &.116 &.136 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.581 &.626 &.632 &.557 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.651 &.686 &.700 &.664 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.768 &.792 &.802 &.790 \\ \hline \multicolumn{2}{c|}{} & \(M\downarrow\) &.044 &.033 &.032 &.023 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.845 &.859 &.871 &.883 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.868 &.882 &.888 &.900 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.908 &.927 &.936 &.944 \\ \multicolumn{2}{c|}{} & \(M\downarrow\) &.065 &.065 &.061 &.088 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.700 &.795 &.726 &.596 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.787 &.708 &.798 &.726 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.869 &.865 &.869 &.850 \\ \hline \multicolumn{2}{c|}{} & \(M\downarrow\) &.051 &.040 &.036 &.029 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.708 &.747 &.764 &.799 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.771 &.800 &.813 &.836 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.832 &.880 &.894 &.887 \\ \multicolumn{2}{c|}{} & \(M\downarrow\) &.057 &.054 &.052 &.064 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.620 &.644 &.656 &.585 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.727 &.751 &.757 &.729 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.826 &.832 &.839 &.841 \\ \hline \multicolumn{2}{c|}{} & \(M\downarrow\) &.058 &.053 &.049 &.044 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.804 &.820 &.831 &.845 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.808 &.829 &.838 &.851 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.873 &.891 &.898 &.896 \\ \multicolumn{2}{c|}{} & \(M\downarrow\) &.090 &.084 &.080 &.076 \\ \multicolumn{2}{c|}{} & \(F_{m}\uparrow\) &.640 &.664 &.666 &.631 \\ \multicolumn{2}{c|}{} & \(S_{m}\uparrow\) &.719 &.744 &.746 &.739 \\ \multicolumn{2}{c|}{} & \(E_{m}\uparrow\) &.821 &.830 &.834 &.841 \\ \hline \end{tabular} \end{table} Table 3: Quantitative results of the pre-trained COD models on Diff-COD test dataset and COD dataset. 
\(\uparrow\) (\(\downarrow\)) denotes that the higher (lower) is better. Figure 4: Examples of the synthesized images from CamDiff from various classes. Each image is extended to generate three additional images of the same class, featuring objects with varying appearances. of \(288\times 288\) since \(288\times 1.5=432\) is less than 512 and still a relatively high resolution. To ensure equal evaluation, we trained ZoomNet on the original and our new train sets with the same main resolution of \(288\times 288\). Tab. 3 compares each model's performance with its pre-trained checkpoints on both Diff-COD and original COD datasets. The results indicate that all COD methods perform significantly worse on the Diff-COD dataset. This is because these methods detect the additionally generated salient object and classify them as camouflage ones, indicating a lack of robustness to saliency. As a result, we can conclude that our Diff-COD testing set serves as a more challenging benchmark and can be used as an additional tool for robustness analysis. **Trained on our Generated Datasets.** As previously mentioned, our framework has the capability to generate new training samples with both salient and camouflage objects. By training on our Diff-COD dataset using only camouflage supervision, the networks should learn the distinction between the two contrasting notions and become more resilient to saliency. Tab. 4 displays the results of the pre-trained COD models trained with original COD training sets and the newly-trained COD models on our Diff-COD training sets. It is evident that the models trained on the Diff-COD training set perform significantly better on the Diff-COD testing set compared to their counterparts. To further confirm the effectiveness of our approach in enhancing the robustness of COD models against saliency, we conducted experiments on conventional saliency datasets, including DUTS-TE [47], ECSSD [41], XPIE [49]. As displayed in Tab. 5, when the models were trained using our Diff-COD dataset, their performance on saliency benchmarks declined. This is expected since the poorer performance on the SOD datasets indicates that the newly-trained models have truly learned the camouflage pattern but not the salient pattern. As a result, these models are better equipped to withstand the influence of salient objects. ### Qualitative Comparison Fig. 5 demonstrates the effect of training on multi-pattern images on the performance of COD models. The figure is divided into three cases, each presenting the results for a different camouflaged object (fish, crab, and frog). On the left side of the dashed line in each case, the original image from the COD dataset, a synthesized multi-pattern image, and the ground truth are shown. The right side displays the results of four pre-trained models (SINet, PFNet, C2FNet, and ZoomNet) on the original COD datasets in the first row. The second row of the illustration presents the results of the models tested on the synthesized images using the same checkpoints as in the first row. Most of them detect salient objects, which is undesirable, and the accuracy of detecting camouflaged objects decreases. For instance, SINet loses some parts compared with the mask in the first row, and ZoomNet ignores camouflaged objects. These results indicate that COD methods lack robustness to saliency. 
The third row of the illustration presents the results of the models trained on our Diff-COD dataset and then tested \begin{table} \begin{tabular}{l l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{SINet [9]} & \multicolumn{3}{c|}{PFNet [28]} & \multicolumn{3}{c|}{C2FNet [43]} & \multicolumn{3}{c}{ZoomNet [32]} \\ & & Pre. & Tr. & Pre. & Tr. & Pre. & Tr. & Pre. & Tr. \\ \hline \multirow{4}{*}{Dimension} & \(M\downarrow\) &.130 &.094 &.122 &.087 &.116 &.078 &.136 &.092 \\ & \(F_{m}\uparrow\) &.581 &.769 &.626 &.787 &.632 &.800 &.557 &.758 \\ & \(S_{m}\uparrow\) &.651 &.753 &.686 &.773 &.700 &.789 &.664 &.773 \\ & \(E_{m}\uparrow\) &.768 &.802 &.792 &.828 &.802 &.848 &.790 &.803 \\ \cline{2-10} & \(M\downarrow\) &.065 &.036 &.065 &.033 &.061 &.030 &.088 &.058 \\ & \(F_{m}\uparrow\) &.700 &.864 &.795 &.858 &.726 &.870 &.596 &.764 \\ & \(S_{m}\uparrow\) &.787 &.884 &.708 &.880 &.798 &.888 &.726 &.816 \\ & \(E_{m}\uparrow\) &.869 &.931 &.865 &.933 &.869 &.949 &.850 &.845 \\ \cline{2-10} & \(M\downarrow\) &.057 &.047 &.054 &.041 &.052 &.038 &.064 &.053 \\ & \(F_{m}\uparrow\) &.620 &.708 &.644 &.735 &.656 &.748 &.585 &.691 \\ & \(S_{m}\uparrow\) &.727 &.773 &.751 &.794 &.757 &.801 &.729 &.770 \\ & \(E_{m}\uparrow\) &.826 &.849 &.832 &.874 &.839 &.887 &.841 &.805 \\ \cline{2-10} & \(M\downarrow\) &.090 &.060 &.084 &.052 &.080 &.047 &.076 &.069 \\ & \(F_{m}\uparrow\) &.640 &.807 &.664 &.821 &.666 &.834 &.631 &.789 \\ & \(S_{m}\uparrow\) &.719 &.811 &.744 &.830 &.746 &.840 &.739 &.814 \\ & \(E_{m}\uparrow\) &.821 &.866 &.830 &.894 &.834 &.905 &.841 &.847 \\ \hline \end{tabular} \end{table} Table 4: Quantitative results of the test Diff-COD dataset. “Pre.” means the model is loaded with the pre-trained checkpoint the officially released code provides. “Tr.” means that the model is loaded by the checkpoints trained on our synthesized training set. \begin{table} \begin{tabular}{l l|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{SINet [9]} & \multicolumn{3}{c|}{PFNet [28]} & \multicolumn{3}{c|}{C2FNet [43]} & \multicolumn{3}{c}{ZoomNet [32]} \\ & & Pre. & Tr. & Pre. & Tr. & Pre. & Tr. & Pre. & Tr. \\ \hline \multirow{4}{*}{Dimension} & \(M\downarrow\) &.065 &.082 &.064 &.079 &.065 &.069 &.080 &.083 \\ & \(F_{m}\uparrow\) &.820 &.760 &.808 &.748 &.807 &.780 &.715 &.718 \\ & \(S_{m}\uparrow\) &.806 &.741 &.806 &.751 &.802 &.777 &.772 &.768 \\ & \(E_{m}\uparrow\) &.846 &.757 &.845 &.778 &.832 &.812 &.840 &.842 \\ \cline{2-10} & \(M\downarrow\) &.106 &.135 &.105 &.130 &.116 &.115 &.129 &.134 \\ & \(F_{m}\uparrow\) &.844 &.784 &.822 &.762 &.802 &.790 &.744 &.751 \\ & \(S_{m}\uparrow\) &.766 &.692 &.766 &.703 &.748 &.734 &.722 &.715 \\ & \(E_{m}\uparrow\) &.786 &.688 &.784 &.702 &.750 &.740 &.834 &.841 \\ \cline{2-10} & \(M\downarrow\) &.090 &.119 &.093 &.115 &.099 &.101 &.115 &.123 \\ & \(F_{m}\uparrow\) &.822 &.763 &.804 &.739 &.786 &.762 &.720 &.703 \\ & \(S_{m}\uparrow\) &.770 &.691 &.762 &.697 &.749 &.728 &.723 &.705 \\ & \(E_{m}\uparrow\) &.805 &.697 &.792 &.709 &.768 &.749 &.820 &.815 \\ \hline \end{tabular} \end{table} Table 4: Quantitative results of the test Diff-COD dataset. “Pre.” means the model is loaded with the pre-trained checkpoint the officially released code provides. “Tr.” means that the model is loaded by the checkpoints trained on our synthesized training set. Figure 5: Qualitative Comparison. We conducted a qualitative comparison on three cases: Fish, Crab, and Frog. 
We analyzed the impact of adding salient objects to camouflaged images on pre-trained SINet, PFNet, C2FNet, and ZoomNet, respectively, by comparing the results of the first two rows. Furthermore, we evaluated the training results on the Diff-COD test set by comparing the qualitative outcomes with the pre-trained results. on the synthesized images. Compared to the second row, the robustness to saliency improves significantly. Nevertheless, compared to the first row, ZoomNet loses some parts of the camouflaged object. We believe this may be caused by adding noise in the training set making the fitting more difficult, but we plan to evaluate the cause in future work. Overall, it can be concluded from Fig. 5 that the presence of salient objects harms the performance of COD models in detecting camouflaged objects. However, training the COD models on multi-pattern images increases their robustness to the effects of salient objects. ## 5 Conclusion In summary, our work introduces CamDiff, a framework that generates salient objects while preserving the original label on camouflage scenes, enabling the easier collation and combination of contrastive patterns in realistic images without incurring extra costs related to learning and labeling. Through experiments conducted on Diff-COD test sets, we demonstrate that current COD methods lack robustness to negative examples (_e.g._, scenes with salient objects). To address this limitation, we create a novel Diff-COD training set using CamDiff. Our experimental results demonstrate that training existing COD models on this set improves their resilience to saliency. Overall, our work provides a new perspective on camouflage and contributes to the development of this emerging field. **Future Work.** We aim to extend our framework to consider original images with multiple objects and save room for their generation. Additionally, while we only implemented multipattern images as the data augmentation method in our experiments, we plan to evaluate the results using other data augmentation methods to provide a more comprehensive analysis of the impact of multi-pattern images on the performance and robustness of these models.
2307.03548
Polarization Imaging and Edge Detection with Image-Processing Metasurfaces
Optical metasurfaces have been recently explored as ultrathin analog image differentiators. By tailoring the momentum transfer function, they can perform efficient Fourier-filtering - and thus potentially any linear mathematical operation - on an input image, replacing bulky 4f systems. While this approach has been investigated in different platforms, and several techniques have been explored to achieve the required angular response, little effort has been devoted so far to tailor and control also the polarization response of an image-processing metasurface. Here, we show that edge-detection metasurfaces can be designed with tailored polarization responses while simultaneously preserving an isotropic response. In particular, we demonstrate single-layer silicon metasurfaces yielding efficient Laplacian operation on a 2D image with either large polarization asymmetry, or nearly polarization-independent response. In the former case, we show that a strongly asymmetric polarization response can be used to unlock more sophisticated on-the-fly image processing functionalities, such as dynamically tunable direction-dependent edge detection. In parallel, metasurfaces with dual-polarized response are shown to enable efficient operation for unpolarized or arbitrarily polarized images, ensuring high efficiency. For both devices, we demonstrate edge detection within relatively large numerical apertures, with excellent isotropy and intensity throughput. Our study paves the way for the broad use of optical metasurfaces for sophisticated, massively parallel analog image processing with zero energy requirements.
Michele Cotrufo, Sahitya Singh, Akshaj Arora, Alexander Majewski, Andrea Alù
2023-07-07T12:28:28Z
http://arxiv.org/abs/2307.03548v1
# Polarization Imaging and Edge Detection with Image-Processing Metasurfaces ###### Abstract Optical metasurfaces have been recently explored as ultrathin analog image differentiators. By tailoring the momentum transfer function, they can perform efficient Fourier-filtering - and thus potentially any linear mathematical operation - on an input image, replacing bulky 4f systems. While this approach has been investigated in different platforms, and several techniques have been explored to achieve the required angular response, little effort has been devoted so far to tailor and control also the polarization response of an image-processing metasurface. Here, we show that edge-detection metasurfaces can be designed with tailored polarization responses while simultaneously preserving an isotropic response. In particular, we demonstrate single-layer silicon metasurfaces yielding efficient Laplacian operation on a 2D image with either large polarization asymmetry, or nearly polarization-independent response. In the former case, we show that a strongly asymmetric polarization response can be used to unlock more sophisticated on-the-fly image processing functionalities, such as dynamically tunable direction-dependent edge detection. In parallel, metasurfaces with dual-polarized response are shown to enable efficient operation for unpolarized or arbitrarily polarized images, ensuring high efficiency. For both devices, we demonstrate edge detection within relatively large numerical apertures, with excellent isotropy and intensity throughput. Our study paves the way for the broad use of optical metasurfaces for sophisticated, massively parallel analog image processing with zero energy requirements. ## 1 Introduction Information and image processing is of paramount importance for several technologies and applications. Commonly, image processing is performed digitally, i.e., the image is digitized via a camera or a detector, and then processed with electronics and digital computation. While digital approaches are versatile and easy to implement, they also suffer from several drawbacks, such as high latency times, need of bias, and energy consumption, which are critical factors in several applications. These issues, combined with the exponentially growing demand for data processing [1], have renewed the interest in replacing digital data processing with analog optical computing [2, 3, 4, 5, 6, 7] due to its appealing possibility of manipulating data at the speed of light while avoiding analog-to-digital conversion [8]. Analog image processing is conventionally performed via Fourier filtering techniques in the so called _4f_ configuration, which requires two lenses with focal length f [9, 10]: the first lens performs an analog Fourier transform of the input image, creating a physical plane (located at a distance 2f from the input image) where each point corresponds to a different Fourier component. A spatially varying physical mask is then used to block certain Fourier components (depending on the target operation), while a second lens performs the inverse Fourier transform, rendering the output image at another 2f distance away. This _4f_ approach, while easy to implement, is not suited for integrated devices because it is inherently bulky and prone to alignment issues. Notably, Fourier-based image processing can be implemented in a much more compact platform, by filtering the transverse momentum of an image directly in real space using metasurfaces [3, 6]. 
In particular, by engineering the nonlocal response of transversely invariant metasurfaces[11, 12], it is possible to perform momentum filtering within a very small footprint. Several studies [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] have demonstrated that optical metasurfaces can be used for this purpose, and the application that has garnered most attention is the use of these processing metasurfaces as compact analog differentiators - aimed at calculating the spatial gradients of an input image. The Laplacian operation -- the second-order derivative of an input image \(\mathsf{f}_{\mathsf{out}}(\mathsf{x},y)=\left(\partial_{x}^{2}+\partial_{y}^{2} \right)\mathsf{f}_{\mathsf{in}}(\mathsf{x},y)\) -- is indeed of particular interest due to its capability of enhancing the edges of an arbitrary input image (as sketched in Fig. 1a). In Fourier space, the Laplacian operation corresponds to a high-pass momentum filter, described by \(\mathsf{f}_{\mathsf{out}}\big{(}\mathsf{k}_{\mathsf{w}},\mathsf{k}_{\mathsf{ y}}\big{)}=-\big{(}\mathsf{k}_{\mathsf{x}}^{2}+\mathsf{k}_{\mathsf{y}}^{2} \big{)}\mathsf{f}_{\mathsf{in}}\big{(}\mathsf{k}_{\mathsf{x}},\mathsf{k}_{ \mathsf{y}}\big{)}\). To achieve this response, the metasurface must fully suppress plane waves incident at small angles, while progressively transmitting waves propagating at larger angles. While this approach has been investigated and demonstrated in different platforms, and several methods have been explored to achieve the required angular responses, little effort has been devoted to engineering and controlling the polarization response of the metasurface, while simultaneously maintaining large efficiency, large numerical apertures and an overall isotropic response. Previous efforts [17, 18, 20, 21] have been focused on optimizing the response for some of these metrics, but neglecting the role that polarization can play to enhance the functionality. Here we show that light polarization control can be merged with momentum filtering over the same metasurface platform, and it can be exploited as an additional degree of freedom for image processing, leading to novel computational functionalities and to combining the concept of analog image processing with techniques such as polarization-difference imaging [23, 24] and polarimetry [25]. The presented metasurfaces enable full polarization control, while preserving high-efficiency and isotropic edge detection with tailored numerical aperture. In our work we leverage ultrathin silicon films, patterned periodically to engineer their nonlocal response, demonstrating that this platform can enable both strong polarization asymmetry and nearly polarization-independent responses. In the former case, this asymmetric response can be used to achieve on-demand direction-dependent edge detection tunable through the input polarization. Specifically, rather than enhancing all edges of an arbitrary image our metasurface enhances only the edges oriented along a desired direction, determined by the direction of the input polarization, and thus enabling polarization imaging functionalities dynamically controlled in real-time. Remarkably, this on-demand polarization response can be achieved within a single-layer metasurface design, compatible with CMOS fabrication, and, importantly, without the need of any additional polarizing element. 
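To make the Fourier-filtering picture above concrete, the following minimal Python sketch (an illustrative addition, not code from the original work) applies the Laplacian transfer function \(-(k_x^2+k_y^2)\) to a scalar input image via FFT and returns the edge-enhanced intensity; the test image and the normalization of the momentum grid are assumptions made only for illustration.

```python
import numpy as np

def laplacian_edge_filter(image: np.ndarray) -> np.ndarray:
    """Scalar-field sketch of Laplacian (high-pass) Fourier filtering.

    The input image is treated as a scalar field f_in(x, y); the output is
    |f_out|^2 with f_out(kx, ky) = -(kx^2 + ky^2) f_in(kx, ky) in Fourier space.
    """
    ny, nx = image.shape
    # Spatial-frequency grids (arbitrary units; a real device maps these to sin(theta)).
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)

    f_in = np.fft.fft2(image)           # analog of the first lens / Fourier plane
    f_out = -(KX**2 + KY**2) * f_in     # Laplacian transfer function (high-pass mask)
    edge_field = np.fft.ifft2(f_out)    # analog of the second lens
    return np.abs(edge_field) ** 2      # detected output intensity

if __name__ == "__main__":
    # A square test "image": sharp intensity jumps (edges) dominate the output.
    img = np.zeros((128, 128))
    img[40:90, 30:100] = 1.0
    out = laplacian_edge_filter(img)
    print("edge pixel brighter than interior:", out[40, 60] > out[64, 64])  # True
```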
At the same time, the device showcases a large numerical aperture (NA > 0.3), excellent isotropy, and large edge-detection efficiency, close to the maximum limit for a passive device. Next, we show that the same metasurface platform can be optimized to obtain nearly polarization-independent edge detection, which sacrifices the on-demand edge-detection control but enables larger efficiencies for unpolarized or arbitrarily polarized images. In this case, all edges are uniformly and isotropically enhanced for any combination of input polarizations.

Figure 1: **(a)** Working principle of an edge-detection metasurface, showing the link between the orientation of edges and the polarization of the image transmitted through the metasurface. In this panel we assume that an \(x\)-polarized input image (CUNY logo on the left) impinges on a polarization-independent edge-detection metasurface (gray device). The angular wave decomposition of the input image is composed of \(s\)- and \(p\)-polarized waves. While the output intensity field (green-colored plot labelled 'Output Intensity') processed by the metasurface shows uniform and isotropic edge enhancement, each portion of the output image is in fact carried by waves with different polarization, as shown in the red-white-blue color-coded image on the right of panel a (see text for details). **(b–f)** Direction-dependent edge detection obtained via a polarization-dependent metasurface. An input image (b) impinges on a polarization-selective metasurface characterized by \(t_{ss}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\), for different input polarizations, denoted by the black arrow in each panel (c–f).

## 2 Results

**General Principle and Metasurface Design**

We begin by considering the general case of an input image angularly filtered by a metasurface (Fig. 1a), with the goal of elucidating how the output image depends on the interplay between the polarization of the input image, and the angular and polarization response of the metasurface. Assume an optical image in the plane z = 0 (Fig. 1a, left), described by an intensity profile \(I_{in}(x,y)=\left|\mathbf{E}_{in}(x,y)\right|^{2}\), where \(\mathbf{E}_{in}(x,y)=f_{in}(x,y)\,\mathbf{e}_{in}\) is the electric field with polarization direction \(\mathbf{e}_{in}=\mathcal{E}_{x}\mathbf{e}_{x}+\mathcal{E}_{y}\mathbf{e}_{y}\) and angular frequency \(\omega=2\pi c/\lambda=k_{0}c\). Following Fourier optics analysis [10], the image can be decomposed into a bundle of plane waves travelling in different directions, identified by the polar and azimuthal angles \((\theta,\phi)\), or equivalently by the in-plane wave vector \(\mathbf{k}_{\parallel}=[k_{x},k_{y}]\) with components \(k_{x}=k_{0}\sin\theta\cos\phi\) and \(k_{y}=k_{0}\sin\theta\sin\phi\). 
Each plane wave, propagating in a direction identified by \(\mathbf{k}_{\parallel}\), can be decomposed into \(s\)- and \(p\)-polarized components according to [9] \[\mathbf{E}_{in}\left(\mathbf{k}_{\parallel}\right)=\begin{pmatrix}E_{in}^{s}\left(\mathbf{k}_{\parallel}\right)\\ E_{in}^{p}\left(\mathbf{k}_{\parallel}\right)\end{pmatrix}=\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\begin{pmatrix}\frac{k_{z}}{k_{0}}\left(\widehat{\mathbf{k}}_{\parallel}\times\mathbf{e}_{in}\right)_{z}\\ \widehat{\mathbf{k}}_{\parallel}\cdot\mathbf{e}_{in}\end{pmatrix}, \tag{1}\] where \(\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\) is the Fourier transform of \(f_{in}\left(x,y\right)\), \(k_{z}=\sqrt{k_{0}^{2}-\left|\mathbf{k}_{\parallel}\right|^{2}}\), \(\widehat{\mathbf{k}}_{\parallel}=\mathbf{k}_{\parallel}/\left|\mathbf{k}_{\parallel}\right|\), \(\left(...\right)_{z}\) denotes the z-component of the vector in the brackets, and an overall multiplicative factor has been omitted. After passing through the metasurface, the transmitted fields are \[\mathbf{E}_{out}\left(\mathbf{k}_{\parallel}\right)=\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\begin{pmatrix}t_{ss}\left(\mathbf{k}_{\parallel}\right)&t_{sp}\left(\mathbf{k}_{\parallel}\right)\\ t_{ps}\left(\mathbf{k}_{\parallel}\right)&t_{pp}\left(\mathbf{k}_{\parallel}\right)\end{pmatrix}\begin{pmatrix}\frac{k_{z}}{k_{0}}\left(\widehat{\mathbf{k}}_{\parallel}\times\mathbf{e}_{in}\right)_{z}\\ \widehat{\mathbf{k}}_{\parallel}\cdot\mathbf{e}_{in}\end{pmatrix}, \tag{2}\] where \(t_{\sigma\sigma'}\left(\mathbf{k}_{\parallel}\right)\), with \(\sigma,\sigma'\in\{s,p\}\), are the complex co- and cross-polarized transmission coefficients. The fields in Eq. 2 represent the plane wave decomposition of the filtered output image, and the real-space output image \(I_{out}\left(x,y\right)=\left|\mathbf{E}_{out}\left(x,y\right)\right|^{2}\) is obtained through the inverse Fourier transform (see [26] for additional details). Hence, the specific image processing performed by the metasurface depends on a nontrivial interplay between (i) the Fourier transform of the input image \(\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\), (ii) the transfer functions of the metasurface \(t_{\sigma\sigma'}\left(k_{x},k_{y}\right)\), and (iii) the input polarization \(\mathbf{e}_{in}\). For example, in order to perform isotropic and polarization-independent edge detection, the transfer functions must satisfy \(t_{ss}\left(k_{x},k_{y}\right)=t_{pp}\left(k_{x},k_{y}\right)\propto\left(k_{x}^{2}+k_{y}^{2}\right)\) and \(t_{sp}\left(k_{x},k_{y}\right)=t_{ps}\left(k_{x},k_{y}\right)=0\). A schematic example of this behavior is shown in Fig. 1a. We assume that an \(x\)-polarized input image (CUNY logo, left part of the image) impinges on an isotropic and polarization-independent edge-detection metasurface (center), and we calculated [26] the intensity of the output image (green-coded color plot on the right of the metasurface). The output image intensity \(I_{out}\left(x,y\right)\) displays clear, homogeneous and isotropic edge detection. Importantly, while the output intensity map shows uniform and isotropic edges, different components of the output image are actually carried (from the input image to the output image) by either \(s\)- or \(p\)-polarized waves. This is shown as blue-white-red color-coded lines in the right-most part of Fig. 1a (see [26] for details on how this plot is calculated). 
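As a numerical illustration of Eqs. 1-2, the sketch below (our own minimal example, not code accompanying the paper) decomposes each plane-wave component of a linearly polarized input image into its \(s\)- and \(p\)-amplitudes, applies a 2×2 transfer matrix \(t_{\sigma\sigma'}(\mathbf{k}_{\parallel})\), and returns a simplified output intensity; the paraxial momentum grid, the intensity estimate (sum of the \(s\)- and \(p\)-carried intensities), and the example \(s\)-only Laplacian-like transfer function are assumptions chosen only for demonstration.

```python
import numpy as np

def filter_polarized_image(f_in, e_in, t_matrix, k0=2 * np.pi):
    """Sketch of Eqs. 1-2: s/p decomposition + 2x2 transfer-matrix filtering.

    f_in     : 2D array, scalar envelope of the input image
    e_in     : (ex, ey) input polarization (unit vector in the image plane)
    t_matrix : function (KX, KY) -> (tss, tsp, tps, tpp)
    """
    ny, nx = f_in.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    # Rescale the grid so that the largest in-plane momentum is ~0.3 k0 (paraxial NA ~ 0.3).
    KX, KY = 0.3 * k0 * KX / np.abs(KX).max(), 0.3 * k0 * KY / np.abs(KY).max()
    kpar = np.sqrt(KX**2 + KY**2)
    kz = np.sqrt(np.maximum(k0**2 - kpar**2, 0.0))
    khat_x = np.divide(KX, kpar, out=np.zeros_like(KX), where=kpar > 0)
    khat_y = np.divide(KY, kpar, out=np.zeros_like(KY), where=kpar > 0)

    ex, ey = e_in
    f_k = np.fft.fft2(f_in)
    E_s = f_k * (kz / k0) * (khat_x * ey - khat_y * ex)   # (k_hat x e_in)_z component
    E_p = f_k * (khat_x * ex + khat_y * ey)               # k_hat . e_in component

    tss, tsp, tps, tpp = t_matrix(KX, KY)
    E_s_out = tss * E_s + tsp * E_p
    E_p_out = tps * E_s + tpp * E_p
    # Simplified intensity estimate: add the s- and p-carried output intensities.
    return np.abs(np.fft.ifft2(E_s_out))**2 + np.abs(np.fft.ifft2(E_p_out))**2

# Example: s-only Laplacian-like metasurface (t_ss proportional to kx^2 + ky^2, others zero).
laplacian_s_only = lambda KX, KY: (KX**2 + KY**2, 0.0, 0.0, 0.0)

img = np.zeros((128, 128)); img[40:90, 30:100] = 1.0
out_x = filter_polarized_image(img, (1.0, 0.0), laplacian_s_only)  # x-polarized input
```

Running the last lines with different `e_in` vectors reproduces the qualitative behavior discussed next: only the edges with a component parallel to the input polarization survive the filtering.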
Here, each point of the output image is color-coded based on whether the waves that formed that specific point were mainly \(p\)-polarized (dark red) or \(s\)-polarized (dark blue). Clearly, the horizontal edges are exclusively carried by \(s\)-polarized waves, while the vertical edges are carried by \(p\)-polarized waves. Hence, by tailoring the polarization response of an edge-detecting metasurface, it is possible to selectively enhance only certain edges. Consider, for example, a metasurface with a strong polarization asymmetry, e.g., with all the transfer functions identically zero except for \(t_{ss}\left(k_{x},k_{y}\right)\propto\left(k_{x}^{2}+k_{y}^{2}\right)\). The filtered electric field in Eq. 2 becomes \[\mathbf{E}_{out}\left(\mathbf{k}_{\parallel}\right)=\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)t_{ss}\left(\mathbf{k}_{\parallel}\right)\frac{k_{z}}{k_{0}}\left(\widehat{\mathbf{k}}_{\parallel}\times\mathbf{e}_{in}\right)_{z}\mathbf{e}_{s}. \tag{3}\] Because of the term \(\widehat{\mathbf{k}}_{\parallel}\times\mathbf{e}_{in}\), all the Fourier components \(\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\) corresponding to vectors \(\mathbf{k}_{\parallel}\) parallel to the impinging polarization \(\mathbf{e}_{in}\) will not be transmitted, thus not contributing to the output image. This effect becomes important when the input image \(f_{in}\left(x,y\right)\) contains an edge oriented along a certain direction \(\mathbf{n}\). In the limit of an infinitely long edge parallel to \(\mathbf{n}\) [so that \(\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)\) is nonzero only for \(\mathbf{k}_{\parallel}\perp\mathbf{n}\)], Eq. 3 reduces to \[\mathbf{E}_{out}\left(\mathbf{k}_{\parallel}\right)=\tilde{f}_{in}\left(\mathbf{k}_{\parallel}\right)t_{ss}\left(\mathbf{k}_{\parallel}\right)\,\left[\left(\mathbf{n}\cdot\mathbf{e}_{in}\right)\right]\mathbf{e}_{s}, \tag{4}\] i.e., the edges parallel to the input polarization 
(\(\mathbf{n}\cdot\mathbf{e}_{in}=1\)) contribute maximally to the filtered image, while the edges whose direction is orthogonal to the input polarization (\(\mathbf{n}\cdot\mathbf{e}_{in}=0\)) will not contribute. The opposite scenario occurs if the metasurface works only for \(p\)-polarization, i.e., if only \(t_{pp}(k_{x},k_{y})\neq 0\). In this case \[\mathbf{E}_{out}(\mathbf{k}_{\parallel})=\tilde{f}_{in}(\mathbf{k}_{\parallel})t_{pp}(\mathbf{k}_{\parallel})\,[(\mathbf{n}\times\mathbf{e}_{in})_{z}]\,\mathbf{e}_{p}. \tag{5}\] Now, edges whose direction is parallel to the input polarization (\(\mathbf{n}\times\mathbf{e}_{in}=0\)) will not contribute to the output image. Equations 3-5 show that, by introducing polarization selectivity (i.e., by realizing a metasurface for which only \(t_{ss}(k_{x},k_{y})\) or only \(t_{pp}(k_{x},k_{y})\) provides the required Laplacian filtering, while the other is zero), it is possible to selectively enhance only the edges of an input image oriented along a desired direction, and that this direction is determined by the input polarization. Notice that this polarization selectivity can still be fully isotropic, which is important for imaging applications, enabling the detection of edges with a certain orientation independently of their position. To further numerically validate these results, we consider the case of an input image consisting of an octagon (Fig. 1b), impinging on a polarization-dependent edge-detection metasurface with only \(t_{ss}(k_{x},k_{y})\neq 0\). Using the full expansion in Eqs. 1-2 (see [26] for more details), we numerically calculated [Figs. 1(c-f)] the intensity of the output image for different linear polarizations of the input image (denoted by the black arrow in each panel). The results match the expectations based on Eq. 4: for any input polarization, edges parallel to the input polarization are maximally enhanced, while edges perpendicular to the input polarization are absent. We emphasize that the plots in Figs. 1(c-f) correspond to the _total_ intensity of the output image. In other words, no additional polarization filtering is applied to the processed image, neither in these calculations nor later in our experiments. Instead, such strong polarization-dependent edge detection is embedded in the metasurface response.

### Experimental Results

We now demonstrate that edge-detection metasurfaces with either a strongly asymmetric polarization response [e.g., \(t_{ss}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\) and \(t_{pp}(k_{x},k_{y})=0\)] or a nearly polarization-independent response [i.e., \(t_{ss}(k_{x},k_{y})\approx t_{pp}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\)] can be experimentally realized within a single-layer metasurface platform. We consider a thin silicon-on-glass metasurface operating in the near-infrared (\(\lambda\approx 1500\) nm), but similar principles can be readily adapted to different materials and spectral ranges. The metasurface consists of a periodic triangular lattice of air holes (unit cell shown in Fig. 2a) etched into a thin silicon slab. As we show below, this same platform and unit cell design can be used to engineer vastly different polarization responses by simply varying three design parameters: the lattice constant \(a\), the hole radius \(R\) and the slab thickness \(H\). We begin by designing and demonstrating a metasurface with a strongly polarization-dependent response (Fig. 3). 
Using a parameter-sweep optimization, we identify a design (\(a=924\) nm, \(R=265\) nm, \(H=315\) nm) that, up to an NA\(\approx 0.35\), features the desired Laplacian-like response for \(s\)-polarization [\(t_{ss}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\)], while it reflects any \(p\)-polarized wave [\(t_{pp}(k_{x},k_{y})\approx 0\)]. The metasurface was fabricated using standard lithographic techniques, as discussed in [26]. The normal-incidence transmission spectrum of the fabricated device (Fig. 3a) features a broad transmission dip for wavelengths close to the operational wavelength \(\lambda=1490\) nm (shaded area in Fig. 3a). At this wavelength, the measured normal-incidence transmission is less than 1%, ideally suited to suppress the low-spatial-frequency components of the input image and hence enhance the edges. The metasurface response is polarization-independent at normal incidence, as expected from the C\({}_{6}\) symmetry of our design. However, the response at tilted angles is strongly asymmetric, as shown in Figs. 3(b-d). The measured \(s\)-polarized transmission amplitude (Fig. 3b, blue circles) features the desired quadratic increase versus \(\sin\theta\), up to values of \(\theta\approx 20^{\circ}\), as shown by the solid blue line in Fig. 3b, which is a fit of the data with the function \(t_{fit}=A(\sin\theta)^{2}+B\). Moreover, the transmission amplitude reaches values as high as \(|t_{ss}|=0.9\) (transmission \(\approx 81\%\)), which is close to the upper bound set by the Fresnel reflection coefficient at the glass/air interface. In the same angular range, the \(p\)-polarized transmission amplitude remains very low, with a maximum transmission amplitude \(|t_{pp}|\approx 0.12\) (transmission \(<1.5\%\)) at \(\theta\approx 20^{\circ}\). This strong asymmetry is further demonstrated in Figs. 3(c-d), which show the measured amplitude of the transfer functions \(|t_{ss}(k_{x},k_{y})|\) (Fig. 3c) and \(|t_{pp}(k_{x},k_{y})|\) (Fig. 3d) for all impinging angles within a numerical aperture NA\(\leq 0.35\). Besides the strong polarization asymmetry, Fig. 3c confirms that the metasurface provides an excellent isotropic response up to NA = 0.34 (dashed circles in Figs. 3c and 3d), reproducing almost exactly the desired transfer function, \(t_{ss}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\). Next, we experimentally verify the edge-detecting behavior of this polarization-selective metasurface. In the experimental setup [22, 26], sketched in Fig. 2b, the input image is created by illuminating a target (consisting of the CUNY logo etched in a chromium mask) with a collimated quasi-monochromatic beam. The image created by the mask is collected by a NIR objective (Mitutoyo, 50X, NA = 0.42) placed on the opposite side, and relayed on a near-infrared camera with a tube lens. In this modality, the setup is essentially a standard optical microscope, and the camera records the unfiltered input image, shown in Fig. 3e. In order to perform analog edge detection, the metasurface is placed between the objective and the target (Fig. 2b). The filtered image recorded by the camera when the illumination is unpolarized (Fig. 3f) shows a clear, well-defined, isotropic and high-contrast edge enhancement. The quality of the detected edges can be further appreciated by the horizontal cut in Fig. 3g (corresponding to the dashed horizontal line in Fig. 3f), which shows that the intensity of the edges is almost 10x higher than the intensity of the background. Moreover, Fig. 3f confirms that, even though the metasurface has a strongly asymmetric polarization response, under unpolarized excitation all edges are equally enhanced, independently of their orientation. This is expected from Eqs. 3-5: for unpolarized excitation the angular wave decomposition of each edge contains an equal mixture of \(s\)- and \(p\)-polarized waves, independently of the edge orientation. 
The \(p\)-polarized contribution is uniformly reflected by the metasurface, while the \(s\)-polarized contribution undergoes the desired Laplacian filtering, resulting in isotropic and direction-independent edge detection. The measured response drastically changes when the impinging illumination is, instead, polarized. As shown in Figs. 3(h-k) for different linear polarizations, only the edges whose direction \(\mathbf{n}\) has a nonzero component parallel to the impinging polarization \(\mathbf{e}_{in}\) are enhanced, following the intensity trend \(I\propto|\mathbf{n}\cdot\mathbf{e}_{in}|^{2}\) dictated by Eq. 4. In particular, only horizontal (vertical) edges are enhanced when the illumination is horizontally (vertically) polarized, as shown in Figs. 3h and 3i. As discussed above, this trend follows from the fact that the designed metasurface has a high-efficiency Laplacian-like response for \(s\)-polarization [\(t_{ss}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\)], while it completely suppresses \(p\)-polarization [\(t_{pp}(k_{x},k_{y})=0\)]. If we designed a metasurface with opposite response (i.e., where the roles of \(s\)- and \(p\)-polarization are reversed), the opposite behavior would be observed, i.e., only horizontal edges would be enhanced under vertically-polarized illumination. The strong contrast between the intensity of edges oriented along orthogonal directions can be further verified by repeating the experiment with rectangular shapes as input images, as shown in [26].

Figure 3: **Metasurface for polarization-dependent edge detection.** The unit cell of the metasurface is shown in Fig. 2a, with geometrical parameters \(a=924\) nm, \(R=265\) nm, \(H=315\) nm. **(a)** Measured normal-incidence transmission spectrum. The grey area denotes the operational frequency range. **(b)** Measured \(s\)- (blue) and \(p\)-polarized (red) co-polarized transmission amplitude versus \(\theta\), for fixed \(\phi=0\). Dots are experimental data, and solid lines are fits (see text for details). **(c-d)** Amplitude of the co-polarized \(t_{ss}(k_{x},k_{y})\) (panel c) and \(t_{pp}(k_{x},k_{y})\) (panel d) transfer functions of the metasurface. The dashed circles correspond to NA = 0.34. **(e)** Unfiltered test image, acquired with the setup in Fig. 2b without the metasurface (colorbar = 30 μm). **(f)** Output image when the metasurface is placed in front of the target, and the target is illuminated by unpolarized waves. **(g)** Horizontal cut corresponding to the white dashed line in (f). **(h-k)** Same as (f), but with polarized excitation: (h) horizontal, (i) vertical, (j) diagonal, (k) anti-diagonal polarization.

As recently pointed out [22], an important figure of merit for edge-detection metasurfaces is their intensity efficiency, that is, how the intensity of the output image compares to the intensity of the input image. Following [22], we consider the peak efficiency defined as \(\eta_{\text{peak}}\equiv\max(I_{\text{out}})/\max(I_{\text{in}})\), where \(\max(I_{\text{in/out}})\) are the maximum intensities in the input and output images. In order to readily quantify this efficiency, all the experimental images in Figs. 3(e-k) have been normalized by dividing the counts recorded in each camera pixel by the camera integration time and the power impinging on the sample. This allows us to correctly compare the energy flux impinging on the camera in different scenarios. 
Moreover, for numerical convenience the intensities of all images have been further renormalized such that the maximum intensity of the input image (Fig. 3e) is 1. As a result of this normalization procedure, the peak efficiency \(\eta_{\text{peak}}\) can be readily extracted from the maximum values of the colorbars of the filtered images, leading to typical values of \(\eta_{\text{peak}}\approx 3\%-7\%\). These values are in line with values measured in [22] and, importantly, they are quite close to the maximum efficiency obtainable for any ideal passive edge-detection device [22]. These significantly large efficiencies are a direct consequence of the large transmission (\(|t_{ss}|^{2}\sim 81\%\)) enabled by our metasurface at large angles (see Figs. 3b-c), which is achieved thanks to the optimized design, the simple fabrication process, and the absence of absorbing materials. As demonstrated above, an edge-detection metasurface with a strongly asymmetric polarization response can be used to achieve direction- and polarization-dependent edge detection. For other applications, however, a polarization-independent response [\(t_{ss}(k_{x},k_{y})\approx t_{pp}(k_{x},k_{y})\propto(k_{x}^{2}+k_{y}^{2})\)] may be desirable. Indeed, a polarization-independent response guarantees isotropic and homogeneous edge detection for any impinging polarization, including unpolarized light. Moreover, a polarization-independent response is also beneficial to increase the output intensity (and thus the efficiency \(\eta_{\text{peak}}\)) when the input image is of arbitrary polarization. For instance, under unpolarized excitation the polarization-dependent metasurface in Fig. 3 automatically rejects approximately half of the input intensity (corresponding to \(p\)-polarized waves), thus strongly reducing the overall efficiency. Remarkably, we show here that the same metasurface platform used for the previous design - a triangular lattice of air holes etched in a silicon thin film - can also be used to obtain a polarization-independent response. To achieve this, we performed a second optimization by varying the three design parameters (\(a\), \(R\) and \(H\)).

Figure 4: **Metasurface for polarization-independent edge detection.** The unit cell of the metasurface is shown in Fig. 2a, with geometrical parameters \(a=785\) nm, \(R=153\) nm, \(H=273\) nm. **(a)** Measured normal-incidence transmission spectrum. The grey area denotes the operational frequency. **(b)** Measured \(s\)- (blue) and \(p\)-polarized (red) co-polarized transmission amplitudes versus \(\theta\), for fixed \(\phi=0\). Dots are experimental data, and solid lines are fits (see text for details). **(c-d)** Amplitude of the co-polarized \(t_{ss}(k_{x},k_{y})\) (panel c) and \(t_{pp}(k_{x},k_{y})\) (panel d) transfer functions of the metasurface. **(e)** Unfiltered test image, colorbar = 30 μm. **(f-l)** Experimentally measured edge detection. Panels (g-l) show the filtered image when the metasurface is placed in front of the target, and the target is illuminated by light with different polarization: horizontal (g), vertical (h), anti-diagonal (i), diagonal (j), right circular (k) and left circular (l). Panel (f) shows horizontal cuts from (g-l), corresponding to the segment denoted by the dashed horizontal line in panel g. Plots are displaced vertically in steps of 0.03. 
In the optimization, we looked for designs for which, at a given wavelength, the transmission is zero at normal incidence and, simultaneously, the \(s\)- and \(p\)-polarized transmissions are large and similar at larger angles. The optimized design (\(a=785\) nm, \(R=153\) nm, \(H=273\) nm) was fabricated and tested with the same experimental procedures used for the single-polarization device shown in Fig. 3. The normal-incidence spectrum of this device (Fig. 4a) features a narrower transmission dip at \(\lambda\approx 1560\) nm, which defines the operational wavelength. The angle-dependent transmission measurements (Figs. 4(b-d)) confirm that the \(s\)- and \(p\)-polarized transfer functions feature the required quadratic-like increase with \(\sin\theta\), albeit within a narrower numerical aperture and with lower isotropy compared to the response obtained in Fig. 3c. Despite these potentially detrimental issues, the imaging experiments [Figs. 4e and 4(g-l)], performed in the same conditions and with the same target as in Fig. 3, confirm that this device leads to high-quality, highly isotropic, and polarization-independent edge detection. For any linear (Figs. 4(g-j)) and circular polarization (Figs. 4(k-l)) input, all edges of the input image are equally enhanced, independently of their orientation. This is further confirmed by the plots in Fig. 4f, corresponding to horizontal cuts of Figs. 4(g-l) at the position marked by the dashed white line in Fig. 4g. Moreover, as expected from the discussion above, this dual-polarization device displays an increased efficiency with respect to the single-polarization device shown in Fig. 3. The peak efficiency \(\eta_{\text{peak}}\), extracted from the upper limits of the colorbars in Figs. 4(g-l), is above 10% for all input polarizations.

## 3 Discussion and Conclusions

We have demonstrated full control over the polarization response of edge-detection metasurfaces, serving as a new knob to tailor analog image processing on demand using metasurfaces. In particular, our work shows that, by using conventional materials and design platforms - in our case a single-layer patterned thin silicon film - it is possible to realize edge-detection metasurfaces that exhibit either a strongly asymmetric polarization response, or a nearly polarization-independent response, while maintaining a high degree of isotropy. In the former case, the device imparts the required Laplacian-like response only for s-polarized impinging waves, while it completely suppresses p-polarized waves. We demonstrated that this functionality can be used to achieve controllable direction-dependent edge detection: only the edges parallel to a certain direction, determined by the input polarization, are enhanced. This functionality paves the way to applications in the fields of polarization-difference imaging [23, 24] and polarimetry [25]. We then showed that, by simply tweaking a few geometrical parameters, a nearly polarization-independent device can be obtained. This nontrivial feature allows performing high-contrast edge detection independently of the input polarization and edge orientation, increasing the edge-detection efficiency for unpolarized images. More generally, our results demonstrate that, by leveraging the interplay between the polarization of the input image and the polarization response of the metasurface, it is possible to unlock novel functionalities in the field of analog image processing. 
We expect our work to pave the way towards a novel class of devices where nontrivial computational tasks can be achieved by further engineering the interplay between input polarization and the metasurface angular- and polarization-response.

**Funding** This work was supported by Danbury Technologies, the Air Force Office of Scientific Research MURI program and the Simons Foundation. Device fabrication was performed at the Nanofabrication Facility at the Advanced Science Research Center at The Graduate Center of the City University of New York.

**Competing interests** The authors declare no conflicts of interest.

**Contributions** All authors conceived the idea and the corresponding experiment. A.Ar. and S.S. performed numerical simulations and optimizations. M.C. fabricated the devices and performed the experimental measurements together with S.S. A.Al. supervised the project. All authors analyzed the data and contributed to writing the manuscript.

**Correspondence author** Correspondence to Andrea Alù.

**Data availability** Data underlying the results presented in this paper may be obtained from the authors upon reasonable request.
2307.14034
A superconvergent stencil-adaptive SBP-SAT finite difference scheme
A stencil-adaptive SBP-SAT finite difference scheme is shown to display superconvergent behavior. Applied to the linear advection equation, it has a convergence rate $\mathcal{O}(\Delta x^4)$ in contrast to a conventional scheme, which converges at a rate $\mathcal{O}(\Delta x^3)$.
Viktor Linders, Mark Carpenter, Jan Nordström
2023-07-26T08:38:03Z
http://arxiv.org/abs/2307.14034v1
# A superconvergent stencil-adaptive SBP-SAT finite difference scheme ###### Abstract A stencil-adaptive SBP-SAT finite difference scheme is shown to display superconvergent behavior. Applied to the linear advection equation, it has a convergence rate \(\mathcal{O}(\Delta x^{4})\) in contrast to a conventional scheme, which converges at a rate \(\mathcal{O}(\Delta x^{3})\). keywords: Summation-By-Parts, Adaptivity, Superconvergence Msc: [2020] 65M06, 65M12 + Footnote †: journal: Journal of Computational Physics ## 1 Introduction This note concerns finite difference methods where the stencil coefficients are adapted in time. In [1], a systematic way of constructing adaptive _central_ difference stencils of bandwidth \(2n+1\) was presented and applied to first order hyperbolic problems with periodic boundary conditions. It was shown that such stencils were more accurate than conventional finite difference schemes based on Taylor expansions, irrespective of the grid resolution. Further, the advantages of Dispersion Relation Preserving (DRP) schemes1 were recovered for under-resolved problems. This was achieved without loss of the high convergence rate of conventional stencils, which tends to plague DRP schemes. These adaptive stencils are _superconvergent_ in the sense that they offer \({\cal O}(\Delta x^{2n})\) convergence despite having considerably lower formal order of accuracy. Here, \(\Delta x\) represents a characteristic grid spacing. The observations presented herein are a result of attempting to extend the work in [1] to Summation-By-Parts (SBP) operators. Adjoined with Simultaneous Approximation Terms (SATs), SBP methods offer a systematic approach for stable and accurate discretization of initial boundary value problems [3, 4]. Connections between SBP and DRP have been made in [5]. SBP operators consist of a pair of matrices (\(P\),\(Q\)), such that \(D=P^{-1}Q\) approximates the first derivative. By definition, they satisfy the properties \[P=P^{\top}>0\quad\mbox{and}\quad Q+Q^{\top}=\mbox{diag}(-1,0,\ldots,0,1).\] Operators that utilize a diagonal \(P\) enjoy advantageous stability properties when used for spatial as well as temporal discretization [6, 7], essentially because \(P\) then commutes with other diagonal matrices. However, finite difference SBP operators with diagonal \(P\) suffer from reduced order of accuracy near boundaries [8, 9, 10]. If a central difference stencil of order \(2p\) is used in the interior, the boundary stencil is limited to order \(p\). This typically leads to a convergence rate of order \(p+1\) when SBP-SAT is used to discretize first order energy stable hyperbolic IBVPs [11]. It is possible to increase the boundary accuracy of SBP operators to \(2p-1\) by utilizing so-called block norms, meaning that \(P\) has non-diagonal blocks near the boundaries. In case a stable scheme can be constructed, this increases the convergence rate to \(2p\). However, it is in general not straightforward to obtain energy estimates with such methods. In this note, we find a different use for block-norm SBP operators as 'target stencils' in an optimization problem. While our ambition has been to develop a general purpose adaptive SBP-SAT method, we disclaim right away that this goal is reached within this note. Instead, we present an example of an adaptive SBP method based on the conventional SBP(4,2) operator2 and discuss the various difficulties one faces in extending the method to arbitrary accuracy. 
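As a small illustration of the SBP definition above (a sketch of our own; the operator shown is the standard second-order SBP(2,1) pair rather than the SBP(4,2) operator discussed later in the note), the snippet below builds \(P\) and \(Q\) on a uniform grid and checks the two defining properties as well as exact differentiation of constant and linear functions.

```python
import numpy as np

def sbp_2_1(n, dx):
    """Second-order diagonal-norm SBP operator (SBP(2,1)) on n+1 grid points."""
    P = dx * np.diag([0.5] + [1.0] * (n - 1) + [0.5])
    Q = np.zeros((n + 1, n + 1))
    for i in range(n):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0], Q[n, n] = -0.5, 0.5
    return P, Q

n, dx = 20, 1.0 / 20
P, Q = sbp_2_1(n, dx)
x = np.linspace(0.0, 1.0, n + 1)

# Defining SBP properties: P = P^T > 0 and Q + Q^T = diag(-1, 0, ..., 0, 1).
B = np.zeros((n + 1, n + 1)); B[0, 0], B[n, n] = -1.0, 1.0
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(Q + Q.T, B)

# D = P^{-1} Q differentiates constants and linear functions exactly.
D = np.linalg.solve(P, Q)
assert np.allclose(D @ np.ones_like(x), 0.0, atol=1e-12)
assert np.allclose(D @ x, np.ones_like(x), atol=1e-12)
```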
A remarkable observation is that for the test problem considered, the observed convergence rate of the adaptive SBP method is \(\mathcal{O}(\Delta x^{4})\), which is one order higher than conventional SBP theory allows. The adaptive method is thus superconvergent. The remainder of this note is organized as follows: In Section 2 stability requirements on the adaptive scheme are presented. An outline of the adaptive procedure is given in Section 3. Numerical demonstrations of superconvergence are given in Section 4. Comments on generalizations are made in Section 5 followed by conclusions in Section 6. ## 2 Remarks on stability We will shortly describe how to construct the adaptive SBP operator and apply it to the linear advection equation with periodic boundary conditions: \[\begin{split} u_{t}+u_{x}&=0,\qquad\ x\in \Omega,\ \ t\in(0,T],\\ u(x,0)&=u_{0}(x),\ \ \ x\in\Omega.\end{split} \tag{1}\] Before doing so, we briefly discuss the SBP-SAT discretization and the stability requirements on the method. We divide \(\Omega\) into a set of \(K\) non-overlapping blocks/elements, each subdivided into \(N+1\) uniformly spaced grid points, and discretize with an SBP operator \(D_{k}=P_{k}^{-1}Q_{k}\) on the \(k\)th block: \[\mathbf{u}_{t}^{(k)}+D_{k}\mathbf{u}^{(k)}=-\frac{1+\theta}{2}P_{k}^{-1}(u_{0 }^{(k)}-u_{N}^{(k-1)})\mathbf{e}_{0}+\frac{1-\theta}{2}P_{k}^{-1}(u_{N}^{(k)}- u_{0}^{(k+1)})\mathbf{e}_{N}.\] Here, \(u_{i}^{(j)}\) is to be understood as the \(i\)th element of the vector \(\mathbf{u}^{(j)}\), which in turn denotes the numerical solution on the \(j\)th block. The SATs on the right-hand side couple the numerical solution across block interfaces in much the same way as numerical fluxes are used in finite element methods. The scalar \(\theta\) is a yet unspecified parameter. The scheme is conservative and if \(\theta\geq 0\), it satisfies the discrete energy estimate (see [12]) \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{k=1}^{K}\|\mathbf{u}^{(k)}\|_{P_{k}}^{2}=- \theta\sum_{k=1}^{K}(u_{0}^{(k)}-u_{N}^{(k-1)})^{2}\leq 0. \tag{2}\] Consequently, it is also stable. Here, for any vector \(\mathbf{v}\), we have defined \(\|\mathbf{v}\|_{P_{k}}^{2}=\mathbf{v}^{\top}P_{k}\mathbf{v}\). Due to the periodicity of the problem, we use the convention \(u_{N}^{(0)}\equiv u_{N}^{(K)}\). Throughout, we will use \(\theta=1\). Suppose that the above scheme is used up to some time \(t^{*}>0\), after which we change from the SBP operator \((P_{k},Q_{k})\) to some other SBP operator \((P_{k}^{*},Q_{k}^{*})\). Let \(\mathbf{u}^{*}\) denote the numerical solution after the change. Imposing that \(\mathbf{u}^{*}(t^{*})=\mathbf{u}(t^{*})\), this change constitutes an example of a so-called transmission problem [13]. A necessary and sufficient condition for retaining the energy estimate (2) through the operator shift is that \[P_{k}-P_{k}^{*}\geq 0. \tag{3}\] Recalling that \(P_{k}\) and \(P_{k}^{*}\) are diagonal, the following result holds: **Proposition 1**.: _Consider two consistent SBP operators \((P_{k},Q_{k})\) and \((P_{k}^{*},Q_{k}^{*})\). A necessary and sufficient condition for the estimate (2) to be retained after the operator shift is that \(P_{k}=P_{k}^{*}\)._ Proof.: Sufficiency is immediate. For necessity, note that the matrix \(P_{k}-P_{k}^{*}\) is diagonal, hence its eigenvalues are given by the diagonal elements. The diagonal elements of both \(P_{k}\) and \(P_{k}^{*}\) constitute weights in quadrature rules that integrate constants exactly [10]. 
Hence, \[\mathbf{1}^{\top}(P_{k}-P_{k}^{*})\mathbf{1}=0, \tag{4}\] where \(\mathbf{1}=(1,\ldots,1)^{\top}\). Condition (3) dictates that the diagonal elements of \(P_{k}-P_{k}^{*}\) are non-negative whereas (4) states that their sum is zero. This can only simultaneously hold if \(P_{k}=P_{k}^{*}\). In subsequent sections we consider adaptive SBP operators that may be inconsistent. Nonetheless, experiments suggest that choosing \(P_{k}=P_{k}^{*}\) helps to ensure a robust scheme, hence we will do so throughout. We remark that an alternative implementation of transmission problems was suggested in [14]. There, it was shown that stability is retained by imposing \(\mathbf{u}^{*}(t^{*})=2(I+P_{k}^{-1}P_{k}^{*})^{-1}\mathbf{u}(t^{*})\). Since the matrices involved are diagonal and positive definite, the inverse exists and is easy to compute. This approach permits \(P_{k}\neq P_{k}^{*}\) without stability issues, which unlocks additional degrees of freedom in the optimzation problem described in the next section. However, a detailed study of this approach is beyond the scope of this note. ## 3 Stencil optimization Let \(\mathbf{u}\) be a grid function and \(\mathbf{v}\approx\mathbf{u}_{x}\) be an accurate approximation of its derivative. We are looking for an SBP operator \(D=P^{-1}Q\) that minimizes \(D\mathbf{u}-\mathbf{v}\). We concentrate on an operator with the following sparsity pattern: \[P =\Delta x\operatorname{diag}(p_{0},p_{1},p_{2},p_{3},1,\dots),\] \[Q =\begin{pmatrix}-\frac{1}{2}&q_{0,1}&q_{0,2}&q_{0,3}\\ -q_{0,1}&0&q_{1,2}&q_{1,3}\\ -q_{0,2}&-q_{1,2}&0&q_{2,3}&a_{2}\\ -q_{0,3}&-q_{1,3}&-q_{2,3}&0&a_{1}&a_{2}\\ &&-a_{2}&-a_{1}&0&a_{1}&a_{2}\\ &&&&\ddots&\\ \end{pmatrix}.\] As a reference, we use the conventional SBP operator SBP(4,2), with stencil coefficients \[(p_{0},p_{1},p_{2},p_{3}) =(17/48,59/48,43/48,49/48),\] \[(a_{1},a_{2}) =(2/3,-1/12),\] \[(q_{0,1},q_{0,2},q_{1,2},q_{0,3},q_{1,3},q_{2,3}) =(59/96,-1/12,59/96,-1/32,0,59/96).\] As pointed out in the previous section, it is beneficial from a stability standpoint to fix \(P\) for the adaptive SBP operator. Since the target function \(\mathbf{u}\) could be a quadratic polynomial, and since SBP(4,2) is optimal in this case, we will choose the corresponding \(P\) also for the adaptive operator. In practice we therefore seek to minimize \(Q\mathbf{u}-P\mathbf{v}\), where the coefficients of \(Q\) constitute the unknowns. The minimization problem can be rearranged as \(A(\mathbf{u})\underline{w}=\mathbf{b}(\mathbf{u},\mathbf{v})\) where the vector \(\underline{w}\) contains the unknowns in \(Q\). In our case, \(A(\mathbf{u})\in\mathbb{R}^{(N+1)\times 14}\) since \(Q\) contains 14 unique coefficients3. This constitutes a _rank deficient least squares problem_, i.e. there are fewer unknowns than conditions, yet for any \(\mathbf{u}\) some columns in \(A(\mathbf{u})\) are linearly dependent. Footnote 3: Six coefficients per boundary and two in the interior. Conventional SBP operators are derived by imposing a sequence of such minimization problems where \(\mathbf{u}\) and \(\mathbf{v}\) are substituted by polynomials of increasing degree. For example, SBP(4,2) is uniquely determined by imposing \[\text{first }A(\mathbf{1})\underline{w}=\mathbf{b}(\mathbf{1}, \mathbf{0}),\] \[\text{then }A(\mathbf{x})\underline{w}=\mathbf{b}(\mathbf{x}, \mathbf{1}),\] \[\text{then }A(\mathbf{x}^{2})\underline{w}=\mathbf{b}(\mathbf{x}^{2}, 2\mathbf{x}).\] Each of these conditions can be satisfied exactly, i.e. 
SBP(4,2) differentiates quadratic polynomials without error. Suppose that the vector \(\mathbf{u}\) is the numerical solution of a PDE at some point in time and that \(\mathbf{v}\approx\mathbf{u}_{x}\) is an approximation of its derivative. The adaptive SBP operator considered here is uniquely defined by solving the sequence of least squares problems \[\begin{array}{l}\mbox{first }A(\mathbf{u})\underline{w}=\mathbf{b}( \mathbf{u},\mathbf{v}),\\ \mbox{then }A(\mathbf{1})\underline{w}=\mathbf{b}(\mathbf{1},\mathbf{0}),\\ \mbox{then }A(\mathbf{x})\underline{w}=\mathbf{b}(\mathbf{x},\mathbf{1}). \end{array}\] Thus, we first attempt to choose coefficients that accurately differentiate \(\mathbf{u}\), then use the remaining degrees of freedom to impose conventional accuracy conditions. An important difference to SBP(4,2) is that none of these conditions can be satisfied exactly; they are all imposed in a least squares sense. ## 4 Numerical experiments We explore the efficacy of the adaptive SBP operator by solving the periodic advection problem (1) on \(K\) elements with \(N+1\) grid points each. As the exact solution we choose \(u(x,t)=\sin\left(2\pi(x-t)\right)+\cos\left(4\pi(x-t)\right)/2\). To obtain the vector \(\mathbf{v}\) needed to optimize the SBP operator, we use a block norm SBP operator for which \(Q\) has identical sparsity pattern to SBP(4,2), and \(P\) has a \(4\times 4\) block near each boundary. Its boundary accuracy is \(\mathcal{O}(\Delta x^{3})\). To obtain any accuracy at all with the adaptive SBP operators it is necessary to re-optimize sufficiently often. This is a consequence of the adaptive operator being an accurate derivative approximation for \(\mathbf{u}\) only at a particular time. As \(\mathbf{u}\) evolves in time, the quality of any given operator deteriorates. Here we make an empirical choice to re-optimize at time intervals \(\Delta\tau=1/(2K(N+1))\). Time integration is performed using Matlab's ode45 routine (Dormand-Prince) with absolute and relative tolerances set to \(10^{-10}\). It is thus expected that the time integration has negligible impact on the overall error. We set \(K=4\) so that four SBP operators make up the spatial discretization. These are independently optimized, each using a quarter of the information contained in \(\mathbf{u}\). Fig. 1a shows the convergence of the \(L^{2}\)-errors of SBP(4,2) and the adaptive method when \(N\) is varied. Fig. 1b shows the time evolution of the errors when \(N=80\). On all grids the errors are visibly smaller with the adaptive SBP operators. On the finest grids, the error is reduced by more than an order of magnitude. The observed convergence rate is \(\mathcal{O}(\Delta x^{3})\) for SBP(4,2) as expected4. Remarkably, for the adaptive stencil it is \(\mathcal{O}(\Delta x^{4})\). Thus, the adaptive stencil overcomes the order limit for non-adaptive SBP operators. This happens despite the optimized SBP operator formally not having any order at all. It is not even consistent. Footnote 4: Here, \(\Delta x\) is a constant multiple of \(N^{-1}\) shown in Fig. 0(a) Repeating the experiment with \(K=1\) elements and four times as many grid points per element yields very similar results to those in Fig. 1 (not shown). Since a single SBP operator now makes up the discretization, there are fewer stencil coefficients to optimize. On the other hand, that optimization has access to all the data in \(\mathbf{u}\). 
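The following sketch (our own illustration, not the authors' code) shows one way to realize the "first ..., then ..., then ..." hierarchy of least squares problems used above: each subsequent system is solved only over the minimizers of the previous one, by restricting the update to the null space of the already-imposed systems. The routine works with generic matrices \(A_{i}\) and right-hand sides \(\mathbf{b}_{i}\); assembling \(A(\mathbf{u})\) and \(\mathbf{b}(\mathbf{u},\mathbf{v})\) from the sparsity pattern of \(Q\) is problem-specific and omitted here.

```python
import numpy as np

def null_space(A, tol=1e-12):
    """Orthonormal basis of the null space of A (columns of the returned matrix)."""
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol * s[0])) if s.size else 0
    return Vt[rank:].T

def sequential_least_squares(systems):
    """Solve a prioritized sequence of least squares problems.

    systems = [(A1, b1), (A2, b2), ...]; the returned w minimizes ||A1 w - b1||,
    and among those minimizers it minimizes ||A2 w - b2||, and so on.
    """
    n_unknowns = systems[0][0].shape[1]
    w = np.zeros(n_unknowns)
    N = np.eye(n_unknowns)          # current feasible directions (null space so far)
    for A, b in systems:
        if N.shape[1] == 0:         # no degrees of freedom left
            break
        AN = A @ N
        z, *_ = np.linalg.lstsq(AN, b - A @ w, rcond=None)
        w = w + N @ z
        N = N @ null_space(AN)      # keep only directions that do not change A w
    return w

# Tiny usage example with made-up 3-unknown systems (for illustration only):
A1, b1 = np.array([[1.0, 1.0, 0.0]]), np.array([2.0])
A2, b2 = np.array([[0.0, 1.0, 1.0]]), np.array([1.0])
w = sequential_least_squares([(A1, b1), (A2, b2)])
print(w, A1 @ w, A2 @ w)   # A1 w = 2 exactly; A2 w matched within the remaining freedom
```

In the setting of this note, the first pair would encode \(Q\mathbf{u}-P\mathbf{v}\) as a linear function of the 14 free coefficients of \(Q\), and the subsequent pairs the exactness conditions for constants and linear polynomials; the near-singular values mentioned in the next section would show up in the SVD inside `null_space`.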
Figure 1: \(L^{2}\)-errors with \(K=4\).

## 5 Generalizations

We have considered the special case of an adaptive SBP operator whose sparsity pattern matches SBP(4,2). Several extensions are required to turn the approach into a general method:

* Extending the adaptive method to SBP operators using central differences of bandwidth \(2n+1\) and boundary blocks of size \(r\times r\) for \(n>2\) and \(r>4\) unlocks additional free parameters. A longer sequence of least squares problems must be solved, approximately imposing exactness for higher degree polynomials. However, a complication arises where, numerically, the over-determined linear systems have some singular values near machine epsilon. The optimization is highly sensitive to these spurious singular values, which must be identified and deflated. Already with (\(n=3,r=6\)), standard least squares solvers fail for the system \(A(\mathbf{x}^{3})\underline{w}=\mathbf{b}(\mathbf{x}^{3},3\mathbf{x}^{2})\).
* Additional desirable properties such as a small spectral radius can be imposed using appropriate constraints.
* The selection of re-optimization times should be automated, which requires the development of a sensor.
* Solving a sequence of least squares problems comes with a non-negligible expense. We have made no attempt at optimizing the procedure, however this will be necessary for the benefits to outweigh the cost of adaptivity.

## 6 Conclusions

We have experimentally illustrated the improved accuracy of a stencil-adaptive SBP operator with a sparsity pattern matching SBP(4,2). The most notable feature is its superconvergence. While we have not provided a theoretical basis for this behavior, we remark that, to the best of our knowledge, this is the first observation of such rapid convergence for SBP finite difference methods with diagonal \(P\).
2301.08793
On left legal semigroups
In this paper we study semigroups satisfying the identity $aba=ab$.
Attila Nagy
2023-01-20T20:32:03Z
http://arxiv.org/abs/2301.08793v2
###### Abstract ###### Abstract In this paper we study semigroups satisfying the identity \(aba=ab\). **On left legal semigroups1** Footnote 1: Mathematics Subject Classification: 20M10, 20M12 Keywords: Semigroup, free semigroup, variety of semigroups **Attila Nagy2** Footnote 2: email: [email protected] Department of Algebra Budapest University of Technology and Economics Muegyetem rkp. 3, Budapest, 1111, Hungary ## 1 Introduction Many important combinatorial structures such as real and complex hyperplane arrangements, interval greedoids, matroids and oriented matroids have the structure of a left regular band (see, for example, [1, 8, 10, 21]). A band is a semigroup in which every element is an idempotent element, and a left regular band is a band which satisfies the identity \(aba=ab\). Left regular bands are examined by many authors. See, for example, papers [2, 3, 4, 2, 21, 22, 24, 26] and the book [16]. In [19, 20], the authors examined semigroups which are not necessarily bands, but satisfy the identity \(aba=ab\). Using the terminology of [20], such semigroups are called left legal semigroups. The class of all left legal semigroups is a variety. Therefore, it is natural to find free semigroups and subdirectly irreducible semigroups in this variety. In Section 3, we construct free left legal semigroups \(F_{X}\) for arbitrary nonempty sets \(X\). The subdirectly irreducible left legal semigroups are examined in Section 4. We show that a semigroup containing at least two elements is a subdirectly irreducible left legal semigroup satisfying the identiy \(ab=a^{2}b\) if and only if it is either a subdirectly irreducible left regular band or a two-element zero semigroup. We also show that a semigroup containing at least two elements is a subdirectly irreducible left legal semigroup satisfying the identity \(ab=a^{2}\) if and only if it is either a two-element left zero semigroup or a two-element zero semigroup. The concept of the retract ideal extension of semigroups is used effectively in many cases in the study of the structure of semigroups. In Section 4, we give a necessary and sufficient condition for a left legal semigroup to be a retract ideal extension of a left regular band by a zero semigroup. One of the basic concepts of semigroup theory is the semilattice decomposition. In Section 5, this concept is in the centre of investigations. We show that a left legal semigroup is semilattice indecomposable if and only if it satisfies the identity \(a^{2}b=a^{2}\). The left (resp., right, weakly) separative semigroups are examined in many papers. At the end of Section 5, we prove theorems on left (resp., right, weakly) separative left legal semigroups. We show that a left legal semigroup is right separative if and only if it is weakly separative. It is also proved that every left legal left separative semigroup is commutative. In Section 6, the lattice of all left legal semigroup varieties is examined. ## 2 Preliminaries By a _semigroup_ we mean a multiplicative semigroup, that is, a nonempty set together with an associative multiplication. A nonempty subset \(I\) of a semigroup \(S\) is said to be an _ideal_ of \(S\) if \(sa,as\in I\) for every \(a\in I\) and \(s\in S\). If \(I\) is an ideal of a semigroup \(S\), then the relation \(\varrho_{I}\) on \(S\) defined by \((a,b)\in\varrho_{I}\) if and only if \(a=b\) or \(a,b\in I\) is a congruence on \(S\), which is called the _Rees congruence on \(S\) determined by \(I\)_. 
The equivalence classes of \(S\) mod \(\varrho_{I}\) are \(I\) itself and every one-element set \(\{a\}\) with \(a\in S\setminus I\). The factor semigroup \(S/\varrho_{I}\) is called the _Rees factor semigroup of \(S\) modulo \(I\)_. We shall write \(S/I\) instead of \(S/\varrho_{I}\). We may describe \(S/I\) as the result of collapsing \(I\) into a single (zero) element, while the elements of \(S\) outside of \(I\) retain their identity. A semigroup \(S\) containing a zero element \(0\) is called a _zero semigroup_ if \(ab=0\) for every \(a,b\in S\). It is obvious that if \(S\) is a semigroup, then \(S^{2}\) is an ideal of \(S\), and the Rees factor semigroup \(S/S^{2}\) is a zero semigroup. Let \(B\) and \(Q\) be disjoint semigroups, \(Q\) having a zero element. A semigroup \(S\) is called an _ideal extension_ of \(B\) by \(Q\) if \(S\) contains \(B\) as an ideal, and if the Rees factor semigroup \(S/B\) is isomorphic with \(Q\). An ideal \(I\) of a semigroup \(S\) is called a _retract ideal_ if there is a homomorphism of \(S\) onto \(I\) which leaves the elements of \(I\) fixed. Such a homomorphism is called a _retract homomorphism_ of \(S\) onto \(I\). In this case we say that \(S\) is a _retract ideal extension_ of \(I\). We say that a semigroup \(S\) is a _subdirect product_ of semigroups \(S_{i}\) (\(i\in I\)) if \(S\) is isomorphic to a subsemigroup \(T\) of the direct product of semigroups \(S_{i}\) (\(i\in I\)) such that the restriction of the projection homomorphisms to \(T\) are surjective. A semigroup \(S\) is said to be _subdirectly irreducible_ if whenever \(S\) is written as a subdirect product of a family of semigroups \(S_{i}\) (\(i\in I\)), then, for at least one \(j\in I\), the projection homomorphism \(\pi_{j}\) maps \(S\) onto \(S_{j}\) isomorphically. Subdirect decompositions of a semigroup \(S\) are closely connected with congruences on \(S\) (see, for example, [16, I.3.6]). If \(\alpha_{i}\) (\(i\in I\)) are congruences on a semigroup \(S\) and \(\cap_{i\in I}\alpha_{i}=\iota_{S}\), the equality relation on \(S\), then \(S\) is a subdirect product of the factor semigroups \(S/\alpha_{i}\). Conversely, if a semigroup is a subdirect product of semigroups \(S_{i}\) (\(i\in I\)) and \(\alpha_{i}\) is the congruence on \(S\) induced by the projection homomorphism \(\pi_{i}\) (\(i\in I\)), then \(\cap_{i\in I}\alpha_{i}=\iota_{S}\). If \(\mathcal{C}\) is a class of semigroups, then a congruence \(\varrho\) on a semigroup \(S\) is called a \(\mathcal{C}\)-_congruence_ if the factor semigroup \(S/\varrho\) belongs to \(\mathcal{C}\). An element \(e\) of a semigroup is called an _idempotent element_ if \(e^{2}=e\). A semigroup in which every element is an idempotent element is called a _band_. A commutative band is called a _semilattice_. It is clear that the universal relation on an arbitrary semigroup is a semilattice congruence. A semigroup \(S\) is said to be _semilattice indecomposable_ if the universal relation is the only semilattice congruence on \(S\). By Theorem of [23], every semigroup \(S\) has a least semilattice congruence \(\eta_{S}\) whose classes are semilattice indecomposable semigroups. In other words, every semigroup is a semilattice of semilattice indecomposable semigroups. A band satisfying the identity \(aba=ab\) is called a _left regular band_. A semigroup satisfying the identity \(ab=a\) is called a _left zero semigroup_. By the dual of [16, II.3.12. Proposition], a band is left regular if and only if it is a semilattice of left zero semigroups. 
It is clear that every left zero semigroup is left legal and semilattice indecomposable. A semigroup \(S\) is called a left legal semigroup if it satisfies the identity \(aba=ab\). Every left regular band is a left legal semigroup. The converse is also true for semigroups \(S\) containing a right identity element, because \(a^{2}=aea=ae=a\) is satisfied for every element \(a\) of \(S\) and a right identity element \(e\) of \(S\). A semigroup \(S\) is called an _\(\mathcal{L}\)-commutative semigroup_[13] if, for every elements \(a,b\in S\), there is an element \(x\in S^{1}\) such that \(ab=xba\). A semigroup \(S\) is called a _right weakly commutative semigroup_ if, for every \(a,b\in S\), there exists \(x\in S\) and a positive integer \(n\) such that \((ab)^{n}=xa\). It is clear that every \(\mathcal{L}\)-commutative semigroup is right weakly commutative. A semigroup \(S\) is said to be a _right separative semigroup_ if \(ab=b^{2}\) and \(ba=a^{2}\) imply \(a=b\) for every \(a,b\in S\). A _left separative semigroup_ is defined analogously. A semigroup \(S\) is called a _weakly separative semigroup_ if \(a^{2}=ab=b^{2}\) implies \(a=b\) for every \(a,b\in S\). For an arbitrary semigroup \(S\), \[S^{1}=\begin{cases}S,&\text{if $S$ has an identity element};\\ S\cup 1,&\text{otherwise},\end{cases}\] where \(S\cup 1\) is the semigroup which is obtained by the adjunction of an identity element \(1\) to \(S\). For notions and notations not defined but used in this paper, we refer the reader to [5]. ## 3 Free left legal semigroups Let \(\mathcal{V}\) be a non-trivial variety of algebras, and \(X\) be a nonempty set. An algebra \(F_{X}\) is said to be a free algebra in \(\mathcal{V}\) with a free generating system (or basis) \(X\) if \(F_{X}\) is in \(\mathcal{V}\), \(X\) generates \(F_{X}\), and every mapping of \(X\) into any algebra \(A\) from \(\mathcal{V}\) can be extended to a homomorphism of \(F_{X}\) into \(A\). In this section we construct free left legal semigroups \(F_{X}\) for arbitrary nonempty sets \(X\). First we prove two lemmas which will be used throughout the paper. **Lemma 1**.: _If \(S\) is a left legal semigroup then every element of \(S^{2}\) is an idempotent element. Moreover, \(a^{k}=a^{2}\) for every \(a\in S\) and every integer \(k\geq 2\)._ Proof.: Let \(S\) be a left legal semigroup. For every elements \(a\) and \(b\) of \(S\), we have \((ab)^{2}=a(bab)=aba=ab\). Thus every element of \(S^{2}\) is an idempotent element. For every \(a\in S\), \(a^{3}=a^{2}\) by definition, and hence \(a^{k}=a^{2}\) for every integer \(k\geq 2\). **Lemma 2**.: _For arbitrary elements \(a\) and \(b\) of a left legal semigroup, \(ab=ab^{2}\)._ Proof.: Using Lemma 1, we get \(ab=(ab)^{2}=(aba)b=ab^{2}\). Let \(X\) be a nonempty set (whose elements are also said to be letters). Let \(F_{X}\) denote the set of all finite nonempty sequences (or words) of letters of \(X\) in which each letter \(x\in X\) can occur at most once or twice, in the latter case in the form \(xx\) at the beginning of the word. For example, if \(x\) and \(y\) are elements of \(X\) with \(x\neq y\), then \(xx,xxy\in F_{X}\) but \(xyx,yxx\notin F_{X}\). If \(\omega_{1}\) and \(\omega_{2}\) are words of \(F_{X}\), then let \(\exists_{\omega_{1}}(\omega_{2})\) denote the word which can be obtained from \(\omega_{2}\) in the following way: for every \(x\in X\), if \(x\) occurs in both of \(\omega_{1}\) and \(\omega_{2}\), then we delete \(x\) from \(\omega_{2}\). For example, \(\exists_{ux}(xxy)=y\). 
We note that \(\exists_{\omega_{1}}(\omega_{2})\) may be the empty word. For an arbitrary \(\omega\in F_{X}\), let \(\omega^{*}=\omega\), if the letters of \(\omega\) are pairwise distinct, and let \(\omega^{*}\) be the word which can be obtained from \(\omega\) by deleting its second letter if the first two letters of \(\omega\) are equal. For example, if \(x,y,z\in X\) are pairwise distinct, then \((xyz)^{*}=xyz=(xxyz)^{*}\). We note that if \(\exists_{\omega_{1}}(\omega_{2}^{*})\) is not empty, then its letters are pairwise distinct. Thus \((\omega^{*})^{*}=\omega^{*}\) for every \(\omega\in F_{X}\). We define an operation \(\circ\) on \(F_{X}\). For arbitrary words \(\omega_{1}=x_{1}\dots x_{n}\) and \(\omega_{2}=y_{1}\dots y_{m}\) of \(F_{X}\), let \[\omega_{1}\circ\omega_{2}=\begin{cases}xx\exists_{\omega_{1}}(\omega_{2}^{*}), &\text{if $\omega_{1}=x$ and $y_{1}=x$ for some $x\in X$,}\\ \omega_{1}\exists_{\omega_{1}}(\omega_{2}^{*}),&\text{otherwise.}\end{cases} \tag{1}\] **Remark 1**.: _It is clear that, for arbitrary nonempty set \(X\) and arbitrary words \(\omega_{1}\) and \(\omega_{2}\) of \(F_{X}\), the length of \(\omega_{1}\circ\omega_{2}\) is greater than \(1\)._ **Remark 2**.: _The word \(xx\exists_{\omega_{1}}(\omega_{2}^{*})\) in the first row of (1) can also be written in the form \(\omega_{1}x\exists_{\omega_{1}}(\omega_{2}^{*})\). Thus, for every \(\omega_{1},\omega_{2}\in F_{X}\), the word \(\omega_{1}\) is a prefix of the word \(\omega_{1}\circ\omega_{2}\)._ **Example 1**.: If \(X=\{x,y\}\), then \(F_{X}=\{x,xx,y,yy,xy,xxy,yx,yyx\}\). The operation \(\circ\) on \(F_{X}\) is given by Table 1. \begin{table} \begin{tabular}{l|l l l l l l l l} \(\circ\) & \(x\) & \(xx\) & \(y\) & \(yy\) & \(xy\) & \(xxy\) & \(yx\) & \(yyx\) \\ \hline \(x\) & \(xx\) & \(xx\) & \(xy\) & \(xy\) & \(xxy\) & \(xxy\) & \(xy\) & \(xy\) \\ \(xx\) & \(xx\) & \(xx\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) \\ \(y\) & \(yx\) & \(yx\) & \(yy\) & \(yy\) & \(yx\) & \(yx\) & \(yyx\) & \(yyx\) \\ \(yy\) & \(yyx\) & \(yyx\) & \(yy\) & \(yy\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) \\ \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) & \(xy\) \\ \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) & \(xxy\) \\ \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) & \(yx\) \\ \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) & \(yyx\) \\ \end{tabular} \end{table} Table 1: **Theorem 3**.: _For an arbitrary nonempty set \(X\), the operation \(\circ\) is associative on \(F_{X}\)._ Proof.: (By induction on the length of the middle word) Let \(\omega_{1}=x_{1}\ldots x_{n}\) and \(\omega_{2}=y_{1}\ldots y_{m}\) be arbitrary words of \(F_{X}\). Let \(x\in X\) be an arbitrary letter. First consider the case \(x\notin\omega_{1}\). Then \[(\omega_{1}\circ x)\circ\omega_{2}=\omega_{1}x\exists_{\omega_{1}x}(\omega_{2} ^{*})=\omega_{1}\circ(x\circ\omega_{2}).\] Next consider the case when \(x\in\omega_{1}\). If \(\omega_{1}=x\), then \[(\omega_{1}\circ x)\circ\omega_{2}=xx\exists_{\omega_{1}}(\omega_{2}^{*})= \omega_{1}\circ(x\circ\omega_{2}).\] If \(\omega_{1}\neq x\), then \[(\omega_{1}\circ x)\circ\omega_{2}=\omega_{1}\circ\omega_{2}=\omega_{1}\circ( x\circ\omega_{2}).\] Let \(n\) be a positive integer greater than \(1\). 
Assume that \[(\omega_{1}\circ\omega_{2})\circ\omega_{3}=\omega_{1}\circ(\omega_{2}\circ\omega_{3})\] is satisfied for all words \(\omega_{1},\omega_{2},\omega_{3}\in F_{X}\) where the length of \(\omega_{2}\) is less than \(n\). Let \(\omega_{2}\) be a word of \(F_{X}\) whose length is \(n\). Denote the last letter of \(\omega_{2}\) by \(x\), that is, \(\omega_{2}=\omega x=\omega\circ x\) (\(\omega\in F_{X}\), \(x\in X\)). Then, for arbitrary \(\omega_{1},\omega_{3}\in F_{X}\), \[(\omega_{1}\circ\omega_{2})\circ\omega_{3}=(\omega_{1}\circ(\omega\circ x))\circ\omega_{3}=((\omega_{1}\circ\omega)\circ x)\circ\omega_{3}=(\omega_{1}\circ\omega)(x\circ\omega_{3})=\] \[=\omega_{1}\circ(\omega\circ(x\circ\omega_{3}))=\omega_{1}\circ((\omega\circ x)\circ\omega_{3})=\omega_{1}\circ(\omega_{2}\circ\omega_{3}).\] Thus the operation \(\circ\) is associative, and hence \(F_{X}\) is a semigroup with respect to the operation \(\circ\). **Theorem 4**.: _For an arbitrary nonempty set \(X\), the semigroup \(F_{X}\) is left legal._ Proof.: Let \(\omega_{1}\) and \(\omega_{2}\) be arbitrary elements of \(F_{X}\). By Remark 1, the length of \(\omega_{1}\circ\omega_{2}\) is greater than \(1\). By Remark 2, \(\omega_{1}\) is a prefix of \(\omega_{1}\circ\omega_{2}\). Thus \[\omega_{1}\circ\omega_{2}\circ\omega_{1}=\omega_{1}\circ\omega_{2}\exists_{(\omega_{1}\circ\omega_{2})}(\omega_{1}^{*})=\omega_{1}\circ\omega_{2}.\] **Theorem 5**.: _If \(X\) is an arbitrary nonempty set, then \(F_{X}\) is a free left legal semigroup with a free generating system \(X\)._ Proof.: Let \(X\) be an arbitrary nonempty set. By Theorem 4, the semigroup \(F_{X}\) is left legal. Let \(S\) be an arbitrary left legal semigroup and \(f\) be an arbitrary mapping of \(X\) into \(S\). Let \(\Phi\) be the mapping of the semigroup \(F_{X}\) into the semigroup \(S\) defined in the usual way: for arbitrary \(\omega=x_{1}\ldots x_{n}\in F_{X}\), let \[\Phi(x_{1}\ldots x_{n})=f(x_{1})\cdots f(x_{n}).\] We show that \(\Phi\) is a homomorphism. Let \(\omega_{1}=x_{1}\dots x_{n}\) and \(\omega_{2}=y_{1}\dots y_{m}\) be arbitrary elements of \(F_{X}\). First consider the case when \(n=1\) and \(\omega_{1}=x=y_{1}\) for some \(x\in X\). If \(y_{2}\neq x\), then \[\Phi(\omega_{1}\circ\omega_{2})=\Phi(xxy_{2}\cdots y_{m})=f(x)f(x)f(y_{2})\cdots f(y_{m})=\Phi(\omega_{1})\Phi(\omega_{2}).\] If \(y_{2}=x\), then (using also Lemma 1) \[\Phi(\omega_{1}\circ\omega_{2})=\Phi(xxy_{3}\cdots y_{m})=f(x)f(x)f(y_{3})\cdots f(y_{m})=\] \[f(x)f(x)f(x)f(y_{3})\cdots f(y_{m})=\Phi(\omega_{1})\Phi(\omega_{2}).\] Next consider the opposite case: either \(\omega_{1}=x\) for some \(x\in X\) and \(y_{1}\neq x\), or \(n>1\). In this case \(y_{1}\in\omega_{1}\) implies \(n>1\), and hence \[f(x_{1})\cdots f(x_{n})=f(x_{1})\cdots f(x_{n})f(y_{1}), \tag{2}\] because \(S\) satisfies the identities \(ab=aba\) and \(ab=ab^{2}\). By the definition of the operation \(\circ\), \[\omega_{1}\circ\omega_{2}=\omega_{1}\exists_{\omega_{1}}(\omega_{2}^{*}). \tag{3}\] We have two subcases. First assume that \(\exists_{\omega_{1}}(\omega_{2}^{*})\) is the empty word. Then \(y_{j}\in\omega_{1}\) for every \(j\in\{1,\dots,m\}\). Thus (2) holds, and hence \[f(x_{1})\cdots f(x_{n})=f(x_{1})\cdots f(x_{n})f(y_{1})=\] \[=f(x_{1})\cdots f(x_{n})f(y_{1})\cdots f(y_{m}),\] because \(S\) satisfies the identity \(ab=aba\), and every factor \(f(y_{j})\) (\(j=1,\dots,m\)) equals some factor \(f(x_{t})\) (\(t=1,\dots,n\)). Next, assume that \(\exists_{\omega_{1}}(\omega_{2}^{*})\) is not the empty word.
Then there are pairwise distinct letters \(z_{1},\dots,z_{k}\) such that every \(z_{i}\) equals some \(y_{j}\), and \[\exists_{\omega_{1}}(\omega_{2}^{*})=z_{1}z_{2}\cdots z_{k}. \tag{4}\] Using (3) and (4), we have \[\Phi(\omega_{1}\circ\omega_{2})=f(x_{1})\cdots f(x_{n})f(z_{1})\cdots f(z_{k}). \tag{5}\] If \(y_{1}=z_{1}\), then we can complete the product \(f(x_{1})\cdots f(x_{n})f(y_{1})f(z_{2})\cdots f(z_{k})\) with the missing factors \(f(y_{j})\) (\(y_{j}\in\omega_{1}\)) to get the equation \[f(x_{1})\cdots f(x_{n})f(y_{1})f(z_{2})\cdots f(z_{k})=f(x_{1})\cdots f(x_{n}) f(y_{1})f(y_{2})\cdots f(y_{m}). \tag{6}\] Equations (5) and (6) together imply that \(\Phi(\omega_{1}\circ\omega_{2})=\Phi(\omega_{1})\Phi(\omega_{2})\). If \(y_{1}\neq z_{1}\), then \(y_{1}\in\omega_{1}\), and hence (2) holds. Thus \[f(x_{1})\cdots f(x_{n})f(z_{1})\cdots f(z_{k})=\] \[=f(x_{1})\cdots f(x_{n})f(y_{1})f(z_{1})\cdots f(z_{k})=f(x_{1})\cdots f(x_{n} )f(y_{1})f(y_{2})\cdots f(y_{m}),\] because we can complete the product \(f(x_{1})\cdots f(x_{n})f(y_{1})f(z_{1})\cdots f(z_{k})\) with the missing factors. Hence \(\Phi(\omega_{1}\circ\omega_{2})=\Phi(\omega_{1})\Phi(\omega_{2})\). Thus \(\Phi\) is a homomorphism. It is obvious that \(\Phi(x)=f(x)\) for every \(x\in X\). ## 4 Retract ideal extensions of left regular bands by zero semigroups By Lemma 1, if \(S\) is a left legal semigroup, then every element of \(S^{2}\) is an idempotent element. Thus every left legal semigroup is an ideal extension of a left regular band by a zero semigroup. In this section we focus on the case when this extension is retract. **Lemma 6**.: _If \(S\) is a semigroup such that \(S^{2}\) is a band and \(\varphi\) is a retract homomorphism of \(S\) onto \(S^{2}\), then \(\varphi(a)=a^{2}\) for every \(a\in S\)._ Proof.: Let \(S\) be a semigroup such that \(S^{2}\) is a band. Let \(\varphi\) be a retract homomorphism of \(S\) onto \(S^{2}\). Then, for arbitrary \(a\in S\), \[\varphi(a)=(\varphi(a))^{2}=\varphi(a^{2})=a^{2}.\] **Theorem 7**.: _The following four conditions on a semigroup \(S\) are equivalent._ 1. \(S\) _is a left legal semigroup satisfying the identity_ \(ab=a^{2}b\)_._ 2. \(S\) _is a retract ideal extension of a left regular band by a zero semigroup._ 3. \(S\) _is a subdirect product of a left regular band and a zero semigroup._ 4. \(S\) _is a left legal semigroup such that the factor semigroup_ \(S/\theta_{S}^{\mathrm{left}}\) _is a band._ Proof.: It is obvious that \((i)\) and \((iv)\) are equivalent. \((i)\Rightarrow(ii)\): Assume that \(S\) is a left legal semigroup satisfying the identity \(ab=a^{2}b\). By Lemma 1, \(S\) is an ideal extension of the left regular band \(S^{2}\) by the zero semigroup \(S/S^{2}\). Let \(\varphi\) be the mapping of \(S\) into \(S^{2}\) defined by the following way: for an arbitrary \(a\in S\), \(\varphi(a)=a^{2}\). As \(S^{2}\) is a band, \(\varphi\) maps \(S\) onto \(S^{2}\) and leaves the elements of \(S^{2}\) fixed. Let \(a,b\in S\) be arbitrary elements. Since \(S\) is a left legal semigroup and satisfies the identity \(ab=a^{2}b\), we have \[\varphi(a)\varphi(b)=a^{2}b^{2}=a(ab)b=a(aba)b=a(ab)^{2}=\] \[=a(ab)=a^{2}b=ab=(ab)^{2}=\varphi(ab).\] Then \(\varphi\) is a homomorphism. Consequently \(S\) is a retract ideal extension of the left regular band \(S^{2}\) by the zero semigroup \(S/S^{2}\). Thus \((ii)\) is satisfied. \((ii)\Rightarrow(iii)\): Assume that \(S\) is a retract ideal extension of a left regular band \(B\) by a zero semigroup. 
Then \(B\) is an ideal of \(S\) such that the Rees factor semigroup \(S/B\) is a zero semigroup. Since \(B\) is a band, \(B\subseteq S^{2}\). Since the Rees factor semigroup \(S/B\) is a zero semigroup, \(S^{2}\subseteq B\). Thus \(B=S^{2}\). Let \(\varphi\) be a retract homomorphism of \(S\) onto \(S^{2}\). It is easy to see that \(ker_{\varphi}\cap\varrho_{S^{2}}=\iota_{S}\), where \(ker_{\varphi}\) denotes the kernel of \(\varphi\), \(\varrho_{S^{2}}\) denotes the Rees congruence on \(S\) determined by the ideal \(S^{2}\). Obviously, the factor semigroup \(S/ker_{\varphi}\) is isomorphic to the left regular band \(S^{2}\) and the Rees factor semigroup \(S/S^{2}\) is a zero semigroup. Thus \(S\) is a subdirect product of a left regular band and a zero semigroup. \((iii)\Rightarrow(i)\): It is obvious, because every subsemigroup of the direct product of a left regular band and a zero semigroup is a left legal semigroup satisfying the identity \(ab=a^{2}b\). **Example 2**.: On the set \(S=\{x,e,f,g,h\}\) consider the operation defined by Table 2. It is a matter of checking to see that this operation is associative. \(S^{2}=\{e,f,g,h\}\) is a semilattice of the left zero semigroups \(S_{1}=\{e,f\}\) and \(S_{2}=\{g,h\}\), and hence \(S^{2}\) is a left regular band. The mapping \(\varphi:s\mapsto s^{2}\) is a retract homomorphism of \(S\) onto \(S^{2}\). The Rees factor semigroup \(S/S^{2}=\{x,0\}\) is a two-element zero semigroup. By Theorem 7, \(S\) is a left legal semigroup satisfying the identity \(ab=a^{2}b\). \(S\) is a subdirect product of the left regular band \(S^{2}\) and the two-element zero semigroup \(S/S^{2}\). **Theorem 8**.: _A semigroup containing at least two elements is a subdirectly irreducible left legal semigroup satisfying the identity \(ab=a^{2}b\) if and only if it is either a subdirectly irreducible left regular band or a two-element zero semigroup._ Proof.: A left regular band is a left legal semigroup and satisfies the identity \(ab=a^{2}b\). It is easy to see that if \(Z\) is a two-element zero semigroup, then \(Z\) is a subdirectly irreducible left legal semigroup satisfying the identity \(ab=a^{2}b\). To prove the converse assertion, assume that \(S\) is a subdirectly irreducible left legal semigroup which has at least two elements and satisfies the identity \(ab=a^{2}b\). Then, by \((iii)\) of Theorem 7, \(S\) is either a subdirectly irreducible left regular band or a subdirectly irreducible zero semigroup \(Z\). Since a zero semigroup is subdirectly irreducible if and only if it has at most two elements, we have \(\mid Z\mid=2\). For subdirectly irreducible left regular bands, we refer the reader to [26, Theorem 2.16]. **Theorem 9**.: _The following three conditions on a semigroup \(S\) are equivalent._ 1. \(S\) _is a left legal semigroup satisfying the identity_ \(ab=a^{2}\)_._ 2. \(S\) _is a retract ideal extension of a left zero semigroup by a zero semigroup._ \begin{table} \begin{tabular}{c|c c c c c} & \(x\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(x\) & \(e\) & \(e\) & \(e\) & \(g\) & \(h\) \\ \(e\) & \(e\) & \(e\) & \(e\) & \(g\) & \(h\) \\ \(f\) & \(f\) & \(f\) & \(f\) & \(g\) & \(h\) \\ \(g\) & \(g\) & \(g\) & \(g\) & \(g\) & \(g\) \\ \(h\) & \(h\) & \(h\) & \(h\) & \(h\) & \(h\) \\ \end{tabular} \end{table} Table 2: _._ 3. \(S\) _is a subdirect product of a left zero semigroup and a zero semigroup._ Proof.: \((i)\Rightarrow(ii)\): Let \(S\) be a left legal semigroup satisfying the identity \(ab=a^{2}\). 
Since \(S\) is left legal, \(S^{2}\) is a band by Lemma 1. For arbitrary elements \(e,f\in S^{2}\), we have \[ef=e^{2}=e.\] Hence \(S^{2}\) is a left zero semigroup. Let \(\varphi\) be the mapping of \(S\) into \(S^{2}\) defined in the following way: for an arbitrary \(a\in S\), \(\varphi(a)=a^{2}\). As \(S^{2}\) is a band, \(\varphi\) maps \(S\) onto \(S^{2}\) and leaves the elements of \(S^{2}\) fixed. For arbitrary \(a,b\in S\), \[\varphi(a)\varphi(b)=a^{2}b^{2}=(ab)(ba)=ab^{2}a=ab^{2}=(ab)b=(aba)b=(ab)^{2}=\varphi(ab).\] Thus \(\varphi\) is a homomorphism. Consequently \(S\) is a retract ideal extension of the left zero semigroup \(S^{2}\) by the zero semigroup \(S/S^{2}\), and hence \((ii)\) is satisfied. \((ii)\Rightarrow(iii)\): Assume that \(S\) is a retract ideal extension of a left zero semigroup \(B\) by a zero semigroup. Then \(B\) is an ideal of \(S\), and the Rees factor semigroup \(S/B\) is a zero semigroup. Since \(B\) is a band, \(B\subseteq S^{2}\). Since \(S/B\) is a zero semigroup, \(S^{2}\subseteq B\). Thus \(B=S^{2}\). Let \(\varphi\) be a retract homomorphism of \(S\) onto \(S^{2}\). Then \(ker_{\varphi}\cap\varrho_{S^{2}}=\iota_{S}\). Thus \(S\) is a subdirect product of the left zero semigroup \(S^{2}\) and the zero semigroup \(S/S^{2}\). \((iii)\Rightarrow(i)\): It is obvious, because every subsemigroup of the direct product of a left zero semigroup and a zero semigroup is a left legal semigroup satisfying the identity \(ab=a^{2}\). **Example 3**.: On the set \(S=\{x,y,b,c\}\) consider the operation defined by Table 3. \begin{table} \begin{tabular}{c|c c c c} & \(x\) & \(y\) & \(b\) & \(c\) \\ \hline \(x\) & \(b\) & \(b\) & \(b\) & \(b\) \\ \(y\) & \(c\) & \(c\) & \(c\) & \(c\) \\ \(b\) & \(b\) & \(b\) & \(b\) & \(b\) \\ \(c\) & \(c\) & \(c\) & \(c\) & \(c\) \\ \end{tabular} \end{table} Table 3: It is a matter of checking to see that \(S\) is a left legal semigroup satisfying the identity \(ab=a^{2}\). \(S\) is an ideal extension of the left zero semigroup \(S^{2}=\{b,c\}\) by the zero semigroup \(S/S^{2}=\{x,y,0\}\). The mapping \(\varphi\colon s\mapsto s^{2}\) is a retract homomorphism of \(S\) onto \(S^{2}\). **Theorem 10**.: _A semigroup containing at least two elements is a subdirectly irreducible left legal semigroup satisfying the identity \(ab=a^{2}\) if and only if it is either a two-element left zero semigroup or a two-element zero semigroup._ Proof.: It is obvious that every two-element semigroup is subdirectly irreducible. Moreover, every left zero semigroup and every zero semigroup is left legal and satisfies the identity \(ab=a^{2}\). To prove the converse assertion, let \(S\) be a subdirectly irreducible left legal semigroup which has at least two elements and satisfies the identity \(ab=a^{2}\). Then, by \((iii)\) of Theorem 9, \(S\) is either a left zero semigroup or a zero semigroup. Since a left zero semigroup and, similarly, a zero semigroup is subdirectly irreducible if and only if it contains at most two elements, \(S\) is either a two-element left zero semigroup or a two-element zero semigroup. ## 5 The least semilattice congruence on left legal semigroups In the study of the structure of semigroups, the semilattice decomposition plays a basic role. A semigroup \(S\) is said to be an _archimedean semigroup_ if, for arbitrary \(a,b\in S\), there are positive integers \(m\) and \(n\) such that \(a^{m}\in SbS\) and \(b^{n}\in SaS\). Every archimedean semigroup is semilattice indecomposable.
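As a small computational illustration of these notions (a sketch added here, not part of the original text), the finite semigroup of Example 3 can be checked directly from Table 3 to be associative, left legal, to satisfy \(ab=a^{2}\), and to be archimedean, and hence semilattice indecomposable; all identifiers in the snippet are illustrative.

```python
from itertools import product

# Cayley table of Example 3 (Table 3): table[(row, column)] = row * column
S = ['x', 'y', 'b', 'c']
table = {('x', t): 'b' for t in S}
table.update({('y', t): 'c' for t in S})
table.update({('b', t): 'b' for t in S})
table.update({('c', t): 'c' for t in S})

def op(a, b):
    return table[(a, b)]

# associativity, the left legal identity aba = ab, and the identity ab = a^2
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))
assert all(op(op(a, b), a) == op(a, b) for a, b in product(S, repeat=2))
assert all(op(a, b) == op(a, a) for a, b in product(S, repeat=2))

def SwS(w):
    # the set SwS = {s * w * t : s, t in S}
    return {op(op(s, w), t) for s, t in product(S, repeat=2)}

def positive_powers(a):
    # {a^2, a^3, ...}; stabilises quickly in a finite semigroup
    powers, p = set(), a
    for _ in range(len(S)):
        p = op(p, a)
        powers.add(p)
    return powers

# archimedean: for all a, b there are m, n with a^m in SbS and b^n in SaS
assert all(positive_powers(a) & SwS(b) and positive_powers(b) & SwS(a)
           for a, b in product(S, repeat=2))
print("Example 3: associative, left legal, ab = a^2, and archimedean.")
```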
A semigroup \(S\) is called a _Putcha semigroup_ if, for every \(a,b\in S\), the assumption \(a\in S^{1}bS^{1}\) implies \(a^{m}\in S^{1}b^{2}S^{1}\) for some positive integer \(m\). It is proved in [18] that a semigroup is a semilattice of archimedean semigroups if and only if it is a Putcha semigroup. **Theorem 11**.: _Every left legal semigroup is a Putcha semigroup._ Proof.: Let \(S\) be a left legal semigroup. It is clear that \(S\) is \(\mathcal{L}\)-commutative, and hence right weakly commutative. By [14, Theorem 4.1] and [14, Lemma 2.2], every right weakly commutative semigroup is a Putcha semigroup, which proves our assertion. **Remark 3**.: _Theorem 11 and [14, Theorem 2.1] together imply that if \(S\) is a left legal semigroup, then,_ \[\eta_{S}=\{(a,b)\in S\times S:\ a^{m}\in SbS,\ b^{n}\in SaS\text{ for some positive integers }m,n\},\] _where \(\eta_{S}\) denotes the least semilattice congruence on \(S\). This implies that a left legal semigroup is semilattice indecomposable if and only if it is archimedean._ For an arbitrary semigroup \(S\), let \(\tau_{S}\) denote the binary relation on \(S\) defined in the following way: \((a,b)\in\tau_{S}\) for elements \(a,b\in S\), if and only if there is a positive integer \(n\) such that \(a^{n}b=a^{n+1}\) and \(b^{n}a=b^{n+1}\). By [14, Lemma 4.1], \(\tau_{S}\) and its dual \(\sigma_{S}\) are equivalence relations on an arbitrary semigroup \(S\). The equivalence relations \(\tau_{S}\) and \(\sigma_{S}\) have been studied by many authors in special classes of semigroups. In the book [5], Theorem 4.14 asserts that if \(S\) is a commutative semigroup, then \(\tau_{S}=\sigma_{S}\) is the least weakly separative congruence on \(S\). Chrislock showed [6, Theorem 5.4] that if \(S\) is a medial semigroup, then \(\tau_{S}\) is the least left separative congruence on \(S\), and \(\sigma_{S}\) is the least right separative congruence on \(S\). Mukherjee proved [11, Theorem 6] that \(\sigma_{S}\) is a weakly separative congruence on a quasicommutative semigroup \(S\). He also conjectured that \(\sigma_{S}\) is the least weakly separative congruence. Pondelicek proved [17, Theorem 5] that if \(S\) is a duo semigroup, then \(\tau_{S}\cap\sigma_{S}\) is the least weakly separative congruence on \(S\). Since every quasicommutative semigroup is a duo semigroup, this result implies that \(\tau_{S}\cap\sigma_{S}\) is the least weakly separative congruence on a quasicommutative semigroup, giving a negative answer to Mukherjee's conjecture. Generalizing and completing the above results, in [12, Theorem 1] it is proved that \(\tau_{S}\cap\sigma_{S}\) is the least weakly separative congruence on a weakly commutative semigroup \(S\). Using this result, in [15] the weakly separative weakly commutative semigroups are characterized. Since every left legal semigroup is right weakly commutative, [14, Theorem 4.7] implies that \(\tau_{S}\) is a weakly separative congruence on a left legal semigroup. The next theorem gives an additional result about \(\tau_{S}\) if \(S\) is a left legal semigroup. **Theorem 12**.: _If \(S\) is a left legal semigroup, then \(\tau_{S}\) is the least semilattice congruence on \(S\)._ Proof.: We use Remark 3. First we show that \(\eta_{S}\subseteq\tau_{S}\). Assume \((a,b)\in\eta_{S}\) for elements \(a,b\in S\). Then, by Remark 3, \(a^{n}=xby\) and \(b^{n}=uav\) for a positive integer \(n\) and for some elements \(x,y,u,v\in S\). We can suppose \(n\geq 2\).
Then \(a^{n}b=(xby)b=x(byb)=xby=a^{n}=a^{n+1}\), because \(a^{k}=a^{2}\) for every integer \(k\geq 2\). Similarly, \(b^{n}a=b^{n+1}\). Thus \((a,b)\in\tau_{S}\). Hence \(\eta_{S}\subseteq\tau_{S}\). To show \(\tau_{S}\subseteq\eta_{S}\), assume \((a,b)\in\tau_{S}\) for elements \(a,b\in S\). Then \(a^{n}b=a^{n+1}\) and \(b^{n}a=b^{n+1}\) for some positive integer \(n\geq 2\). Thus \(a^{n+2}=a^{n}ba\in SbS\) and \(b^{n+2}=b^{n}ab\in SaS\). Hence \((a,b)\in\eta_{S}\). Thus \(\tau_{S}\subseteq\eta_{S}\). Consequently \(\eta_{S}=\tau_{S}\). **Remark 4**.: _By Lemma 1, if \(a\) is an arbitrary element of a left legal semigroup, then \(a^{k}=a^{2}\) is satisfied for every integer \(k\geq 2\). Thus \((a,b)\in\tau_{S}\) is satisfied for elements \(a\) and \(b\) of a left legal semigroup if and only if \(a^{2}b=a^{2}\) and \(b^{2}a=b^{2}\)._ **Theorem 13**.: _The following two conditions on a left legal semigroup \(S\) are equivalent._ 1. \(S\) _is semilattice indecomposable._ 2. \(S\) _satisfies the identity_ \(a^{2}b=a^{2}\)_._ Proof.: A semigroup \(S\) is semilattice indecomposable if and only if \(\eta_{S}\) is the universal relation on \(S\). By Theorem 12, \(\eta_{S}=\tau_{S}\) on a left legal semigroup \(S\). Thus a left legal semigroup \(S\) is semilattice indecomposable if and only if \(\tau_{S}\) is the universal relation on \(S\). By Remark 4, if \(S\) is a left legal semigroup, then \((a,b)\in\tau_{S}\) if and only if \(a^{2}b=a^{2}\) and \(b^{2}a=b^{2}\). Consequently a left legal semigroup is semilattice indecomposable if and only if it satisfies the identity \(a^{2}b=a^{2}\). **Corollary 14**.: _If a semigroup is a retract ideal extension of a left zero semigroup by a zero semigroup, then it is a semilattice indecomposable left legal semigroup._ Proof.: Let \(S\) be a semigroup which is a retract ideal extension of a left zero semigroup by a zero semigroup. By Theorem 9, \(S\) is a left legal semigroup satisfying the identity \(ab=a^{2}\). Since \(a^{3}=a^{2}\) for every \(a\in S\), the semigroup \(S\) satisfies the identity \(a^{2}b=a^{2}\). Then \(S\) is semilattice indecomposable by Theorem 13. By Theorem 13 or Corollary 14, the semigroup \(S\) defined in Example 3 is a semilattice indecomposable left legal semigroup. At the end of the section we prove two theorems on left (resp., right, weakly) separative left legal semigroups. **Theorem 15**.: _The following two conditions on a left legal semigroup \(S\) are equivalent._ 1. \(S\) _is right separative._ 2. \(S\) _is weakly separative._ Proof.: \((i)\Rightarrow(ii)\): Let \(S\) be a right separative left legal semigroup. Assume \(a^{2}=ab=b^{2}\) for elements \(a,b\in S\). From \(ab=b^{2}\) we get \[bab=b^{3}=b^{2},\] and hence \[ba=bab=b^{2}=a^{2}.\] Then \[ab=b^{2}\quad\text{and}\quad ba=a^{2}.\] Since \(S\) is right separative, we get \(a=b\). Consequently \(S\) is weakly separative. \((ii)\Rightarrow(i)\): Let \(S\) be a weakly separative left legal semigroup. Assume \(ab=b^{2}\) and \(ba=a^{2}\) for elements \(a,b\in S\). Using the first equation, we get \[ba=bab=b^{3}=b^{2},\] and hence \[a^{2}=ba=b^{2}.\] Since \(S\) is weakly separative, we get \(a=b\). Consequently \(S\) is right separative. **Remark 5**.: _It is clear that every left zero semigroup containing at least two elements is left legal and weakly separative, but not left separative. 
Thus the weak separativity of a left legal semigroup \(S\) does not imply the left separativity of \(S\), in general._ **Theorem 16**.: _Every left separative left legal semigroup is commutative._ Proof.: Let \(S\) be a left separative left legal semigroup. Then, for every \(a,b\in S\), \[(ab)(ba)=(aba)(bab)=(ab)^{3}=(ab)^{2}.\] Similarly, \[(ba)(ab)=(ba)^{2}.\] Since \(S\) is left separative, we get \(ab=ba\), and hence \(S\) is commutative. ## 6 On the lattice of all left legal semigroup varieties The class of all varieties of semigroups forms a lattice under the following operations: for varieties \(\mathcal{X}\) and \(\mathcal{Y}\), their join \(\mathcal{X}\vee\mathcal{Y}\) is the variety generated by the set-theoretical union of \(\mathcal{X}\) and \(\mathcal{Y}\) (as classes of semigroups), and their meet \(\mathcal{X}\wedge\mathcal{Y}\) coincides with the set-theoretical intersection of \(\mathcal{X}\) and \(\mathcal{Y}\). A variety \(\mathcal{V}\) of semigroups is said to be a left legal semigroup variety if every semigroup belonging to \(\mathcal{V}\) is left legal. The set of all left legal semigroup varieties forms a sublattice of the lattice of all semigroup varieties. This sublattice is denoted by \(\mathcal{L}(\mathbf{LLS})\). Let \(\mathcal{A}\) (resp., \(\mathcal{B}\)) denote the class of all semigroups that are retract ideal extensions of a left regular band (resp., left zero semigroup) by a zero semigroup. By Theorem 7 and Theorem 9, \(\mathcal{A}\) and \(\mathcal{B}\) are subvarieties of the variety \(\mathcal{LLS}\). Using Lemma 2, \(ab=a^{2}\) implies \(ab=ab^{2}=a^{2}b\) for arbitrary elements \(a\) and \(b\) of a left legal semigroup. Then \(\mathcal{B}\subseteq\mathcal{A}\). We also use the following notations. * \(\mathcal{LRB}\): the variety of all left regular bands (\(a^{2}=a\), \(aba=ab\)); * \(\mathcal{LNB}\): the variety of all left normal bands (\(a^{2}=a\), \(axy=ayx\)); * \(\mathcal{SL}\): the variety of all semilattices (\(a^{2}=a\), \(ab=ba\)); * \(\mathcal{LZ}\): the variety of all left zero semigroups (\(ab=a\)); * \(\mathcal{ZM}\): the variety of all zero semigroups (\(ab=0\)); * \(\mathcal{T}\): the trivial variety (\(a=b\)). It is clear that \(\mathcal{ZM}\subseteq\mathcal{B}\), \(\mathcal{LZ}\subseteq\mathcal{B}\cap\mathcal{LNB}\), and \(\mathcal{SL}\subseteq\mathcal{LNB}\subseteq\mathcal{LRB}\subseteq\mathcal{A}\). Since \(\mathcal{A}\subseteq\mathcal{LLS}\), all of the above varieties are in the lattice \(\mathcal{L}(\mathbf{LLS})\). An element \(a\) of a lattice \((L;\vee,\wedge)\) with a zero \(0\) is said to be an atom of \(L\) if \(0<a\) and \(0\leq b<a\) implies \(0=b\) for arbitrary \(b\in L\). An element \(x\) of a lattice \((L;\vee,\wedge)\) is called a neutral element of \(L\) if, for all \(y,z\in L\), the elements \(x\), \(y\) and \(z\) generate a distributive sublattice of \(L\). **Theorem 17**.: _The varieties \(\mathcal{SL}\), \(\mathcal{LZ}\), \(\mathcal{ZM}\) are atoms, and the varieties \(\mathcal{T}\), \(\mathcal{SL}\), \(\mathcal{ZM}\), \(\mathcal{SL}\vee\mathcal{ZM}\), and \(\mathcal{LLS}\) are neutral elements of the lattice \(\mathcal{L}(\mathbf{LLS})\)._ Proof.: By [9, Theorem 2.6], \(\mathcal{SL}\), \(\mathcal{LZ}\) and \(\mathcal{ZM}\) are atoms of the lattice of all semigroup varieties (see also the Theorem of Section IV of [7]). Thus they are atoms of the lattice \(\mathcal{L}(\mathbf{LLS})\). It is clear that the varieties \(\mathcal{T}\) and \(\mathcal{LLS}\) are neutral elements of the lattice \(\mathcal{L}(\mathbf{LLS})\).
By [25, Proposition 4.1], varieties \(\mathcal{SL}\), \(\mathcal{ZM}\), and \(\mathcal{SL}\vee\mathcal{ZM}\) are neutral elements of the lattice of all semigroup varieties. Thus they are neutral elements of the lattice \(\mathcal{L}(\mathbf{LLS})\). The relative position of the above-mentioned varieties in the lattice \(\mathcal{L}(\mathbf{LLS})\) is illustrated in Figure 1. \(\mathcal{X}\twoheadrightarrow\mathcal{Y}\) denotes that the variety \(\mathcal{X}\) is a subvariety of the variety \(\mathcal{Y}\).
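As a closing computational sketch (added here; not part of the original text), the operation \(\circ\) of Section 3 can be implemented directly from its definition and checked against Example 1: for \(X=\{x,y\}\) the eight words of \(F_{X}\) are closed under \(\circ\), the operation is associative (Theorem 3), it satisfies the left legal identity (Theorem 4), and it reproduces the entries of Table 1. All identifiers below are illustrative.

```python
from itertools import product

def star(w):
    # w*: drop the second letter when the first two letters of w coincide
    return w[0] + w[2:] if len(w) >= 2 and w[0] == w[1] else w

def delete(w1, w2):
    # the deletion operator used in Section 3: remove from w2 every letter
    # that also occurs in w1
    return ''.join(c for c in w2 if c not in w1)

def circ(w1, w2):
    # the operation defined in (1) of Section 3
    if len(w1) == 1 and w2[0] == w1:
        return w1 + w1 + delete(w1, star(w2))
    return w1 + delete(w1, star(w2))

# F_X for X = {x, y}, as listed in Example 1
F = ['x', 'xx', 'y', 'yy', 'xy', 'xxy', 'yx', 'yyx']

# closure and associativity (Theorem 3)
assert all(circ(a, b) in F for a, b in product(F, repeat=2))
assert all(circ(circ(a, b), c) == circ(a, circ(b, c)) for a, b, c in product(F, repeat=3))
# the left legal identity (a o b) o a = a o b (Theorem 4)
assert all(circ(circ(a, b), a) == circ(a, b) for a, b in product(F, repeat=2))
# a few entries of Table 1
assert circ('x', 'yx') == 'xy' and circ('y', 'xy') == 'yx' and circ('yy', 'x') == 'yyx'
print("F_X for X = {x, y} is a left legal semigroup under the operation of Section 3.")
```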
2305.16072
VEDA: Uneven light image enhancement via a vision-based exploratory data analysis model
Uneven light image enhancement is a highly demanded task in many industrial image processing applications. Many existing enhancement methods using physical lighting models or deep-learning techniques often lead to unnatural results. This is mainly because: 1) the assumptions and priors made by the physical lighting model (PLM) based approaches are often violated in most natural scenes, and 2) the training datasets or loss functions used by deep-learning technique based methods cannot handle the various lighting scenarios in the real world well. In this paper, we propose a novel vision-based exploratory data analysis model (VEDA) for uneven light image enhancement. Our method is conceptually simple yet effective. A given image is first decomposed into a contrast image that preserves most of the perceptually important scene details, and a residual image that preserves the lighting variations. After achieving this decomposition at multiple scales using a retinal model that simulates the neuron response to light, the enhanced result at each scale can be obtained by manipulating the two images and recombining them. Then, a weighted averaging strategy based on the residual image is designed to obtain the output image by combining enhanced results at multiple scales. A similar weighting strategy can also be leveraged to reconcile noise suppression and detail preservation. Extensive experiments on different image datasets demonstrate that the proposed method can achieve competitive results in its simplicity and effectiveness compared with state-of-the-art methods. It does not require any explicit assumptions and priors about the scene imaging process, nor iteratively solving any optimization functions or any learning procedures.
Tian Pu, Shuhang Wang, Zhenming Peng, Qingsong Zhu
2023-05-25T14:01:48Z
http://arxiv.org/abs/2305.16072v1
# VEDA: Uneven light image enhancement via a vision-based exploratory data analysis model ###### Abstract Uneven light image enhancement is a highly demanded task in many industrial image processing applications. Many existing enhancement methods using physical lighting models or deep-learning techniques often lead to unnatural results. This is mainly because: 1) the assumptions and priors made by the physical lighting model (PLM) based approaches are often violated in most natural scenes, and 2) the training datasets or loss functions used by deep-learning technique based methods cannot handle the various lighting scenarios in the real world well. In this paper, we propose a novel vision-based exploratory data analysis model (VEDA) for uneven light image enhancement. Our method is conceptually simple yet effective. A given image is first decomposed into a contrast image that preserves most of the perceptually important scene details, and a residual image that preserves the lighting variations. After achieving this decomposition at multiple scales using a retinal model that simulates the neuron response to light, the enhanced result at each scale can be obtained by manipulating the two images and recombining them. Then, a weighted averaging strategy based on the residual image is designed to obtain the output image by combining enhanced results at multiple scales. A similar weighting strategy can also be leveraged to reconcile noise suppression and detail preservation. Extensive experiments on different image datasets demonstrate that the proposed method can achieve competitive results in its simplicity and effectiveness compared with state-of-the-art methods. It does not require any explicit assumptions and priors about the scene imaging process, nor iteratively solving any optimization functions or any learning procedures. contrast, image enhancement, residual image, vision-based exploratory data model. ## I Introduction Many image/video processing applications, such as surveillance, daily photography and aerial imaging, require high-quality images that preserve scene contrasts faithfully [1]. However, images taken in unevenly-lit scenes often fail to meet this requirement due to limited dynamic ranges. To address this problem, uneven light image enhancement methods are highly demanded and extensive researches have been made over the past decades. In the following, we classify and briefly review the related studies from the viewpoint of uneven light image enhancement. ### _Histogram specification based enhancement_ Histogram specification (HS) is one of the most widely used techniques. Early HS based methods attempt to achieve desired output histogram shapes under the assumption that visually-pleasing images have ideal histogram characteristics. However, due to the lack of universal criteria for determining the ideal histogram for natural images, these methods often lead to detail loss and over-stretching of contrasts. Later studies improve the performance by applying restrictions on contrast stretch, such as hue and range preservation [2], gamma correction [3], saliency preservation [4], and contrast limitation [5]. Recently, swarm optimization technique is introduced to the HS, focusing on brightness and feature preservation [6]. However, the enhanced results are prone to unnatural artifacts. 
### _Physical lighting model based enhancement_ Taking the imaging process of natural images into account is a viable approach for image enhancement, and many existing methods vary in their construction of physical lighting models. An especially worth citing category of techniques is enhancement methods based on Retinex theory, which assumes that an image is the element-wise product of the illumination image and the reflectance image. Early Retinex-based methods take the reflectance image as the enhanced result [7], but the results often suffer from unnatural appearance. To overcome this issue, subsequent advances aim to modify the estimated illumination image instead of removing it. Wang et al. [8] propose a naturalness preserved enhancement method by designing a bright-pass filter to recover the illumination image. Liang et al. recover illumination by iteratively solving a diffusion filtering equation. This approach is effective in preserving texture details [9]. Wang and Luo present a multi-layer model to decompose an image into a reflectance layer and a cascaded sequence of illumination [10]. Contrast can be enhanced by adjusting the dynamic range of each illumination layer. The variational approach for Retinex, originally proposed by Kimmel et al. [11], provides a unified framework to formulate the illumination estimation as an optimization problem. Since decomposing an image into the illumination and the reflectance is a mathematically ill-posed problem, and there are no ground truth reflectance and illumination images for natural scenes, various assumptions on the illumination [11], or both the illumination and the reflectance [12], [13], [14] are imposed on the Retinex framework. Within this framework, the subsequent studies differ mainly in modeling such assumptions and priors through different norms and regularization terms, focusing on different aspects of illumination recovery, such as edge-aware smoothing, illumination structure preservation, and local texture extraction [12, 14, 15, 16, 17]. Variation-based methods require solving complex objective functions. Unlike Retinex-based techniques, some studies assume that the observed image is the result of an ideal image being degraded by the air light scattering model, which is typically used in image dehazing methods. Following this idea, several methods employ dehazing techniques and related priors to enhance images by treating the inverted unevenly-lit images as hazy images [18, 19]. Yu and Zhu recently propose a method to enhance images by iteratively estimating the ambient light and the light-scattering attenuation rate [20]. In another recent work, Wang et al. propose an enhancement method by estimating the absorption light and the transmittance [21]. Both methods produce impressive results. The performance of the PLM-based methods depends on the assumptions or priors about real scenes. For example, a common assumption is that scene lighting varies smoothly in space. However, this assumption is easily violated in unevenly-lit scenes [22]. In addition, since image enhancement is closely related to subjective preferences, assumptions or priors based on physical lighting may lead to discrepancies between the enhanced results and the human perception of scenes. ### _Deep-learning based enhancement_ Recent advances in convolutional neural networks(CNNs) have shown that deep-learning techniques can help with image enhancement. Based on paired training images, some highly-cited studies include: Wei et al. 
propose RetinexNet to enhance low-light images [23]. Wang et al. propose a convolution neural network to enhance underexposed images by learning the illumination map from paired low-light/normal images [24]. Xu et al. construct a hierarchical feature mining network to enhance low-light images [25]. Zhang et al. propose KIND and its upgraded network KIND++ [26, 27]. KIND and KIND++ decompose the input image into the reflectance and illumination images and adjust both images to achieve the enhanced result. In contrast to the paired image based methods, EnlightenGAN proposed by Jiang et al. is an unsupervised deep-learning based image enhancement method trained on unpaired datasets [28]. Guo et al. propose a deep curve estimation network named ZeroDCE and its light-weighted version ZeroDCE++ [29, 30]. ZeroDCE and ZeroDCE++ achieve image enhancement by learning elegantly designed mapping curves instead of learning image-to-image mapping, so the training procedure does not require paired data. More recently, Zhao et al. propose a zero-reference RetinexDIP network to achieve image enhancement by learning the reflectance and illumination images [31]. Based on a semantically contrastive learning network, Liang et al. propose an impressive SCL-LLE method for image enhancement [32]. The performance of learning based methods heavily hinges on the training datasets, the carefully fine-tuned loss functions, the image-specific mapping curves, and the expensive hardware resources. Is it possible to enhance unevenly-lit images neither requiring learning procedures nor imposing sophisticated priors or assumptions on scenes? The ease with which the HVS perceives real scenes across a wide range of lighting variations suggests that visual models may help to solve this problem of naturally enhancing the images taken in unevenly-lit conditions. In this paper, we propose an enhancement method using a vision-based exploratory data analysis model (VEDA). The major contributions of our work are as follows: * We propose a new perspective on image decomposition for image enhancement. The key idea is to decompose the image into two images: the output image of a retinal model (RM), which contains the scene contrasts, and the residual image, which contains most of the scene lighting variations. The enhanced image is obtained by manipulating the two images. To the best of our knowledge, little work has been reported on decomposing an image into the contrast and the residual for image enhancement. Our method is strikingly simple. The processing flow consists of merely a few simple computations, without any explicit assumptions or priors about physical lighting or surface reflectance, without iteratively solving any complex optimization functions, and without any learning procedures. * We propose a residual image based weighted averaging strategy to combine enhanced results at multiple scales. A similar residual image based weighting scheme can also be leveraged to reconcile noise suppression and detail preservation. * Experiments on a variety of image datasets demonstrate the performance of our method is competitive with several state-of-the-art methods. The remainder of this paper is organized as follows: Section II illustrates the proposed method. Section III presents the experimental results. Section IV conducts a discussion. The conclusion is drawn in Section V. 
## II Image enhancement by the VEDA ### _Motivation and Overview_ The motivation for our method is twofold: firstly, it is widely accepted that the receptive field (RF) in the retina primarily responds to local contrast changes rather than absolute light levels. This mechanism allows the retina to function analogously to an excellent image processing model capable of perceiving details in natural scenes across the wide range of light levels. Secondly, a well-known concept in exploratory data analysis is to view a signal as the sum of the output of a particular model and the residual between the model output and the input [33]. Accordingly, we regard an input image as the superimposition of two images: one image is the output of a retina model, and the other one is the residual image, simply given by image = RM + residual. The RM preferentially extracts the scene contrasts, while the residual image contains the lighting variations that cannot be fully processed by the RM. Thus, the enhanced image can be achieved by adjusting these two images and recombining them. Fig. 1 shows the flowchart of the proposed method. The first step is to convert input image \(\mathbf{T}\) to the logarithmic domain by \(\mathbf{I}=\log\left(\mathbf{T}\right)\), as the logarithm of the incident light is a rough approximation to the perceived brightness [11]. Second, the contrast image \(\mathbf{R}_{P}\) is extracted from \(\mathbf{I}\) by the RM, and then the residual image \(\mathbf{L}_{R}\) is obtained by subtracting \(\mathbf{R}_{P}\) from \(\mathbf{I}\). Third, since the residual image contains most of lighting variations, \(\mathbf{L}_{R}\) is manipulated through a luminance modification (LM) unit to achieve \(\mathbf{L}_{P}\), referred to as the perceived residual image in this paper. Finally, the enhanced intensity image \(\mathbf{T}_{E}\) is generated by converting back the sum of \(\mathbf{L}_{P}\) and \(\mathbf{R}_{P}\) to the intensity domain. Note that all computations are element-wise and are applied to the V channel in the HSV color space to avoid color shifts. ### _Extracting the contrast image by the retinal model_ A variety of RMs have been proposed to explain various characteristics of the HVS [34, 35, 36, 37, 38, 39, 40, 41]. After testing many of them, we find that the center-surround shunting equation, which simulates the rate of change in neuron activities [42, 43], provides the best performance within the scope of this paper. A simplified version of this time-varying equation takes the form: \[\frac{d}{dt}\mathbf{r}=-m\mathbf{r}+\left(g-\mathbf{r}\right)\mathbf{C}-\left( g+\mathbf{r}\right)\mathbf{S}_{\sigma} \tag{1}\] where \(\mathbf{r}\) denotes the neuron activity of the center-surround RF, \(m\) is the decay rate, \(g\) is the positive gain, \(\mathbf{C}\) is the center input stimulus which generally has one pixel width for practical image processing, namely \(\mathbf{C}=\mathbf{I}\), and \(\mathbf{S}_{\sigma}\) is the surround input stimulus given by \[\mathbf{S}_{\sigma}=\mathbf{I}*w_{\sigma}\] where \(*\) is the convolution operator, \(w_{\sigma}\) is a Gaussian kernel with a standard deviation \(\sigma\). We set the Gaussian kernel width \(W_{S}\) as \(W_{S}=6ceil\left(\sigma\right)+1\) according to the Pauta criterion, where \(ceil\left(\bullet\right)\) is the function that rounds the argument value to the nearest integer greater than or equal to the value. Eq. (1) is an initial value problem of an ordinary differential equation. 
Given \(t=0\), \(\mathbf{r}=0\), its solution is \[\mathbf{r}=g\frac{\mathbf{I}-\mathbf{S}_{\sigma}}{m+\mathbf{I}+\mathbf{S}_{ \sigma}}\left\{1-\exp\left[-\left(m+\mathbf{I}+\mathbf{S}_{\sigma}\right)t \right]\right\} \tag{2}\] Eq. (2) simulates the adaptation mechanism of the RF neurons in response to light stimuli. When the neuron activity reaches to the steady state, i.e., \(t\rightarrow\infty\), the time-decaying term vanishes, and the output of the RM, denoted as \(\mathbf{R}_{P}\), is achieved by \[\mathbf{R}_{P}=\mathbf{r}=g\frac{\mathbf{I}-\mathbf{S}_{\sigma}}{m+\mathbf{I} +\mathbf{S}_{\sigma}} \tag{3}\] Thus, the RM extracts the scene contrasts by the ratio of the difference-of-Gaussian and the biased sum-of-Gaussian. ### _Yielding the residual image_ The residual image \(\mathbf{L}_{R}\) is obtained by subtracting the contrast image \(\mathbf{R}_{P}\) from \(\mathbf{I}\) \[\mathbf{L}_{R}=\mathbf{I}-\mathbf{R}_{P} \tag{4}\] Figs. 2(a), (b), and (c) show an example of decomposing an image into the contrast image and the residual image, respectively. It can be observed that the contrast image retains the visual details, while the residual image mainly retains the scene lighting variations. The dynamic range of scene lighting is usually vast. Hence, we develop a luminance modification (LM) function to compress the dynamic range of the residual image. The LM function should be progressive, suppressing larger luminance values more heavily than smaller ones [44]. In addition, the LM function is desired to be analogous to the visual response to light intensity. After examining a number of functions that satisfy these goals, we find that power law [45] performs the best, which is a linear function in the logarithmic domain: \[\mathbf{L}_{P}=LM\left(\mathbf{L}_{R}\right)=\gamma\mathbf{L}_{R}+k \tag{5}\] where \(\gamma\) and \(k\) are both constants, and \(\mathbf{L}_{P}\) is the perceived residual image. Fig. 2(d) shows the perceived residual image. Compared with (c), the brightness in shaded areas is improved. ### _Achieving the enhanced image_ The single-scale enhanced image \(\mathbf{T}_{E}\) is obtained by recombining the contrast image and the perceived residual image, and converting the result back to the intensity domain, given by \[\mathbf{T}_{E}=\exp\left(\mathbf{R}_{P}+\mathbf{L}_{P}\right) \tag{6}\] Fig. 1: The flowchart of the proposed method. All computations are element-wise. The enhanced result of Fig. 2(a) is shown in (e). Our method succeeds in bringing out details from the shadowed areas while maintaining good contrasts elsewhere. Complex images contain contrasts at multiple scales [46]. Therefore, we develop a multi-scale strategy to produce the final result \(\mathbf{T}_{MSE}\) by taking the weighted average of enhanced images at all scales: \[\mathbf{T}_{MSE}=\sum\nolimits_{n=1}^{N}\varphi_{n}\mathbf{T}_{E,n} \tag{7}\] where \(N\) is the number of scales, \(\mathbf{T}_{E,n}\) is the enhanced image at the \(n\)th scale, and \(\varphi_{n}\) is the weight associated with the \(n\)th scale. Since the HVS is more likely to see details in brightly-lit regions than in weakly-lit regions, larger weights are assigned to bright regions: \[\varphi_{n}=\frac{\mathbf{L}_{R,\;n}}{\sum_{n=1}^{N}\mathbf{L}_{R,\;n}} \tag{8}\] where \(\mathbf{L}_{R,\;n}\) is the residual image at the \(n\)th scale. The difference between \(\mathbf{T}_{E,n}\) and \(\mathbf{T}_{E}\) is essentially governed by the surround images associated with different values of \(\sigma\). 
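To make Eqs. (3)-(8) concrete, the following is a minimal NumPy sketch of the per-scale decomposition and the residual-weighted recombination. It is illustrative only: a Gaussian surround is used for brevity (the full method replaces it with the WGIF, as described below), the V channel is assumed to be given as intensities in \([1,256]\) since the exact range is not stated, the scale sequence (1, 4, 16) matches the choice \(\sigma_{1}=1\), \(N=3\) given below, and all function and variable names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_v(V, sigmas=(1, 4, 16), m=1.0, g=1.0, gamma=0.6, k=np.log(10)):
    """Sketch of Eqs. (3)-(8) applied to the HSV value channel V (assumed in [1, 256])."""
    I = np.log(V.astype(np.float64))               # logarithmic domain
    enhanced, residuals = [], []
    for sigma in sigmas:
        S = gaussian_filter(I, sigma)               # surround image (WGIF in the full method)
        R_P = g * (I - S) / (m + I + S)             # Eq. (3): contrast image
        L_R = I - R_P                               # Eq. (4): residual image
        L_P = gamma * L_R + k                       # Eq. (5): perceived residual (power law)
        enhanced.append(np.exp(R_P + L_P))          # Eq. (6): single-scale enhanced image
        residuals.append(L_R)
    residuals = np.stack(residuals)
    weights = residuals / (residuals.sum(axis=0) + 1e-6)   # Eq. (8): brighter regions weigh more
    return (weights * np.stack(enhanced)).sum(axis=0)      # Eq. (7): weighted multi-scale average

# toy usage: a synthetic, unevenly lit intensity ramp
V = np.tile(np.linspace(1, 256, 256), (64, 1))
out = enhance_v(V)
```

In practice the output would be clipped or rescaled to the display range and written back to the V channel before converting from HSV to RGB.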
We construct the multi-scale surround images by the sequence \(\sigma\in\{\sigma_{1},\sigma_{2},\cdots,\sigma_{n},\cdots\sigma_{N}\}\), where \(\sigma_{n}=4^{n-1}\sigma_{1}\). In this paper, we set \(\sigma_{1}=1\) to detect small details and the maximum number of scales \(N=3\) due to the limited size of the RF. In addition, the values of \(g\), \(m\), \(\gamma\), and \(k\) are invariant across scales, respectively. The influence of the four free parameters is discussed in Section III-B. Figs. 3(b) to (d) show the enhanced results for (a) at different scales. Larger \(\sigma\) values achieve better overall contrast than smaller \(\sigma\) values, while smaller \(\sigma\) values produce better local details. It is worth noting that the difference-of-Gaussian in Eq. (3) tends to yield halo artifacts around sharp edges, as shown in (c) and (d). The weighted multi-scale averaging is capable of a high degree of halo removal, but cannot eliminate them entirely. To better remove halos, we produce the surround image \(\mathbf{S}_{\sigma}\) by the weighted guided image filter(WGIF) [47] instead of the Gaussian filter. The result produced by the WGIF is shown in (f), where the halos in (e) are effectively eliminated. In summary, the entire procedure of the proposed method is outlined in Algorithm 1. ``` 0: Image \(\mathbf{T}\), parameters \(m\), \(g\), \(\gamma\), and \(k\). 0: Enhanced image \(\mathbf{T}_{MSE}\). 0: Begin 1: Convert \(\mathbf{T}\) to the logarithmic domain by \(\mathbf{I}=\log\left(\mathbf{T}\right)\); 2: Construct the surround images \(\mathbf{S}_{\sigma}=\mathbf{S}_{\sigma 1},...,\mathbf{S}_{\sigma n}\) using the weighted guided image filter at different scales; 3:for each \(\mathbf{S}_{\sigma}\)do 4: Extract the contrast image \(\mathbf{R}_{P}\) via Eq. (3); 5: Calculate the residual image \(\mathbf{L}_{R}\) via Eq. (4); 6: Achieve the perceived residual image \(\mathbf{L}_{P}\) via Eq. (5); 7: Obtain the enhanced image \(\mathbf{T}_{E}\) via Eq. (6); 8:endfor 9: Calculate the weight \(\varphi_{n}\) via Eq. (8); 10: Obtain \(\mathbf{T}_{MSE}\) via Eq. (7); ``` **Algorithm 1** Image enhancement based on the VEDA ## III Experimental results In this section, we first present the experimental settings. Then, we study the effect of involved parameters on results. Finally, we make qualitative and quantitative comparisons with several state-of-the-art methods to demonstrate the performance of our method. ### _Experimental settings_ **Computational environment:** All non-deep-learning methods are run in the MATLAB 2019a environment on a PC with 16G RAM and 2.9GHz Intel i7-10700k CPU. All compared deep-learning based methods are deployed on dual NVIDIA TITAN GTX GPUs. Fig. 3: Enhanced results at multiple scales and result with WGIF. (a) Input. (b) \(\sigma=1\). (c) \(\sigma=4\). (d) \(\sigma=16\). (e) Multi-scale. (f) Result using WGIF. Larger \(\sigma\) values provide better overall contrast than smaller ones, but tend to produce halos. The multi-scale form produces better visual quality than any single scale, but halos are still visible, as indicated by the yellow arrows. WGIF effectively removes halos. Please zoom in to see details. Fig. 2: Decomposition example. (a) Input. (b) Contrast image. (c) Residual image. (d) Perceived residual image. (e) Enhanced result. 
**Compared methods:** The compared methods include: DFE [9], LIME [15], MLLP [10], PLME [20], ALSM [21], PnPRetinex(PnpRtx) [16], NRMOE [48], EnlightenGAN(EGAN) [28], SCL-LLE [32], RetinexDIP(RtxDIP) [31], KIND++ [27], and ZeroDCE [29]. For a fair comparison, all results are produced by publicly available codes with parameters set as exactly as given in their papers. **Datasets:** We compare all methods on two test sets. The first test set, denoted as Testset-1, contains 266 images collected from a variety of publicly available datasets without reference images, including NPE (85 images) [8], MLLP (76 images) [10], VV (24 images) [49], DICM (64 images) [50] and MEF (17 images) [51]. The second test set (Testset-2) is built on the Part2 subset of SICE dataset [52], which consists of 229 multi-exposure sequences and their corresponding reference image (Ref.Image). We select the first three under-exposed images in each multi-exposure sequence of Part2 subset for testing since the compared methods are not specially designed for over-exposed images. The test images are resized to 25\(\%\) of their original size due to the memory limitation of a previous study [16]. **Objective Metrics:** To the best of our knowledge, there is no widely accepted measure to quantitatively assess the quality of enhanced images since image quality assessment (IQA) is highly related to subjective preferences. We evaluate all compared methods on Testset-1 by three representative image quality assessment (IQA) metrics: NIQE [53], BIQI [54], and NFERM [55]. NIQE estimates the deviations between the target image and a statistical model of natural scenes. BIQI measures the human perception of the naturalness of an image. NFERM scores the image quality based on the free-energy based brain theory and HVS features. For these three metrics, smaller values represent better image quality. For Testset-2, three fully referenced IQA metrics: peak signal-to-noise ratio(PSNR), structure similarity(SSIM) [56] and lightness order error(LOE) [8] are also adopted. Higher PSNR and SSIM values indicate better image quality and lower LOE values indicate better degree of naturalness preservation [8]. ### _Parameter study_ There are four free parameters in the proposed method: two power law related parameters \(\gamma\) and \(k\) in Eq. (5), the decay rate \(m\), and the gain \(g\) in Eq. (3). Firstly, both \(\gamma\) and \(k\) modify the shape of the LM function, which should be a progressively attenuating curve, thus, \(\gamma<1\). Smaller \(\gamma\) values tend to bring out contrasts in shadow regions, while larger \(\gamma\) values result in over-exposure of bright regions. To determine their reasonable values, we run experiments on DCIM dataset and compute the three averaged IQA metrics of the results for different \((\gamma,k)\) pairs, where \(\gamma\) ranges from 0.1 to 0.9 in steps of 0.1, and \(k\) ranges from \(log(5)\) to \(log(55)\) in steps of 5. We set \(m=g=1\) to avoid extensive parameter tuning. Fig. 4(a) shows an example with different \((\gamma,k)\) pairs. Results demonstrates that the pair \((\gamma,k)=(0.6,log(10))\) achieves the best average objective assessment score, as evidenced by the lowest NIQE, BIQI and NFERM values in Fig. 4(b). Secondly, since the RM produces ratio-type contrast images rather than high-pass filtered images, the value of \(m\) should satisfy \(m\!\ll\!(\mathbf{I}\!+\!\mathbf{S}_{\sigma})\). An example is presented in the first row of Fig.5, where \((\gamma,k)=(0.6,log(10))\) and \(g=1\). 
It can be seen that the \(m\) values change from 0.1 to 20 with minor effects on the output quality. The IQA curves in Fig.5 show that larger \(m\) values produce slightly higher NIQE, BIQI, and NFERM values, indicating slightly lower image quality. We achieve an appropriate balance between the quantitative scores and visual quality by setting \(m=1\). Finally, the gain parameter \(g\) controls the amplitude of the contrast image. Fig.6 presents the visual results and IQA curves when \(g\) changes from 0.5 to 2.0 in steps of 0.25 with constant \(m=1\) and \((\gamma,k)=(0.6,log(10))\). Larger \(g\) values lead to stronger image enhancement, but are more likely to result in over-enhancement. As can be seen in Fig.6, results with \(g=1\) have the lowest average NIQE and NFERM values, and almost the lowest BIQI value. Based on the visual quality and the quantitative evaluation, we set \((\gamma,k)=(0.6,log(10))\), \(g\!=\!1\), and \(m\!=\!1\) for our method. ### _Comparisons_ **1) Qualitative assessment** Figs. 7 to 12 are representative comparisons. Referring to these results, we make the following comments: 1. Both ALSM and LIME can significantly improve contrast, but tend to over-enhance images, resulting in unnatural appearance, see Figs.8(c), and (d) to 12(c), and (d). 2. MLLP is capable of a good degree of highlighting details, but may produce noticeable plaque artifacts (see Figs.7(e) and 10(e). 3. NRMOE tends to introduce color distortion in the results, as shown in Figs.9(g), 11(g), and 12(g). 4. DFE, PnpRetinex, and PLME are all effective in enhancing images with vivid color. Comparatively, DFE and PnpRetinex are better at bringing out details, as demonstrated in Figs.10(b), (f), and (h), and Figs.11(b), (f), and (g), respectively. However, DFE may give rise to prominent halos around abrupt bright edges (see the marked region in Fig.11(b)). PnpRetinex tends to generate slightly over-sharpened edges, as seen in Figs.8(f) and 9(f). 5. All deep-learning based methods can significantly enhance contrast in shadow areas. However, the results frequently suffer from unstable image quality. EnlightenGAN, KIND++, and RetinexDIP easily generate unwanted artifacts when highlighting shadow areas (see Figs.7(i), (j), and (m) to 10(i), (j), and (m), respectively). Both ZeroDCE and SCL-LLE usually produce results with reduced color saturation. The reason may be that their limited training datasets are incapable of covering the various natural scenes. Compared to these enhancement methods, the proposed method offers a better balance between improving contrast and preserving naturalness without introducing undesired artifacts or color shifts. #### 2) User study We conduct a user study with 13 observers to quantify the subjective assessment of the compared methods on Testset-1. The observers are trained from three aspects: 1) no severe artifacts such as over-, under-exposed regions, and halos are introduced, 2) the color rendition of the scene is perceptually natural, and 3) enhanced details are visually-pleasing [10]. The user study is designed as follows: 1) an original image and one of its randomly ordered enhanced results are simultaneously displayed on the screen. The original is used as the reference in each trial. 2) The observers evaluate the enhanced images based on their understanding of the scene. The observers can switch enhanced images back and forth in each trial to make the final rating [57]. 3) The quality score ranges from 1 to 5 (worst to best quality) with a step-size of one. 
Results achieved by different methods may have the same score, which means that the visual differences between the results are not large enough for the observer to make a preferred rating. These scores are then averaged to yield the Mean Opinion Score (MOS) value for each method. The MOS values for each image set are listed in Table I. The proposed method obtains the second highest score on the NPE dataset and the highest scores on the other four datasets. It also achieves the highest average MOS value across all test images, tied for first place with PnpRetinex. This small-scale user study provides additional support that the proposed method outperforms the other compared methods in terms of visual quality.

#### 3) Quantitative assessment

Table II shows the evaluation results on Testset-1. The proposed method obtains the lowest BIQI values on all datasets. Therefore, our method yields results with better naturalness perception than the other compared methods. In terms of the NIQE metric, the proposed method achieves the lowest average values on NPE, MLLP, and DICM, and the second lowest values on VV and MEF, slightly higher than DFE and EnlightenGAN, respectively. This means that our method can compete with state-of-the-art methods in producing results similar to natural images. Our method ranks behind PnpRetinex on NPE, MLLP, DICM, and VV in terms of average NFERM values, and behind PLME and PnpRetinex on MEF. This indicates that our method is comparable to the state-of-the-art methods in terms of image quality based on the HVS-aware features of natural scenes.

Fig. 4: The influence of \((\gamma,k)\) pairs. (a) Results of different \((\gamma,k)\) pairs. The \(\gamma\) values increase from 0.1 to 0.9 in steps of 0.1 from left to right, and the \(k\) values increase from \(log(5)\) to \(log(55)\) in steps of 5 from bottom to top. The image in the yellow box is of the best objective assessment score.

Table III presents the quantitative assessment results on Testset-2. The proposed method achieves the best BIQI score, the second-best NIQE score, and the third-best NFERM score. This means that our method performs consistently in terms of the three no-reference IQA metrics. Our method also obtains the lowest LOE value for this dataset, indicating the best performance on naturalness preservation. ZeroDCE obtains the highest PSNR and SSIM values, and EnlightenGAN obtains the second highest SSIM value, while our method ranks second and third, respectively. The reason may be that ZeroDCE and EnlightenGAN are trained on Part1 of the SICE dataset and have learned how to enhance the test images with parameters trained from that training set. Figs. 13 and 14 present two examples from Testset-2 for visual comparison. We can observe that ZeroDCE and EnlightenGAN suffer from color deviation. The same drawback occurs in the results of SCL-LLE and NRMOE. KIND++ and RetinexDIP yield results with noticeable artifacts. We consider that the results obtained by DFE, PnpRetinex, MLLP, and our method are closer to the reference images than those by the other methods.

**4) Comparison on speed**

We compare the average computation time of all compared methods over 100 images with a size of 1368×912 pixels. As shown in Table IV, the proposed method is the fastest method on the CPU platform, with an average computation time of 0.23 s. The deep-learning based methods ZeroDCE and SCL-LLE are faster than our method, but they require expensive GPU resources.
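As a reference point for the full-reference evaluation on Testset-2, PSNR and SSIM can be computed with standard library routines. The sketch below uses synthetic grayscale arrays in place of actual enhanced/reference pairs and omits LOE, which is defined in [8]; it is illustrative only.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(1)
reference = rng.random((256, 256))        # stand-in for a reference image in [0, 1]
enhanced = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0.0, 1.0)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")  # higher values indicate better quality
```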
## IV Discussion

### _Comparison with Retinex decomposition_

Assuming that an image is the product of an illumination image and a reflectance image, Retinex-based enhancement methods deal with the challenge of decomposing a given image into the illumination image and the reflectance image. Computing such a decomposition for real scenes is an ill-posed problem. Consequently, any attempt to solve it must make some simplifying assumptions about the scene, such as the spatial smoothness of illumination, or the piece-wise consistency/similarity of albedo [15, 16]. These assumptions about the scene imaging process are easily violated in real scenes. An example is shown in Fig. 15(a). The lower half of (a) is the reflection image from a highly reflective surface (e.g., a mirror or a still pond). The nearly identical reflectance of the pond surface suggests that the incident illumination of this reflection region is not spatially smooth. In addition, the light source regions in (a) go against the assumption that an image is the product of illumination and reflectance. The images in Fig. 15(b) to (d), from left to right, are the reflectance, illumination, and enhanced images produced by three Retinex-based methods DFE, LIME, and PnpRetinex, respectively. It can be seen that the pond regions in the three illumination images are locally smooth. The images in (e), from left to right, are the contrast, residual, and enhanced images produced by our method. The contrast image looks similar to the reflectance images, while the residual image is similar to the illumination images. This phenomenon could be explained by two widely accepted and closely related assumptions about the HVS [58]: 1) the HVS is mostly sensitive to surface reflectance, and 2) the HVS primarily responds to local scene contrasts. It might be argued that the contrast, to some extent, is a function of perceived reflectance. Although the contrast and residual images are similar to the reflectance and illumination images, their physical implications are fundamentally different. Our model attempts to decompose the light variations from an image in a functionally similar manner to the visual perception of scenes instead of making physical assumptions about the imaging process of the scene.

Fig. 5: The influence of \(m\). The \(m\) values change from 0.1 to 20 with minor effects on the output quality.

Fig. 6: The influence of \(g\). Larger \(g\) values lead to stronger image enhancement, but are more likely to result in over-enhancement.

### _Generality of the proposed method_

The core perspective of our method is to decompose an image into the contrast image and the residual image. An interesting question is whether other contrast definitions can be applied to the proposed model. We discuss this question through three well-known contrasts: the Weber contrast, the Michelson contrast, and the root mean square (RMS) contrast [59].

Fig. 7: Comparison 1. The images obtained by LIME and MLLP are significantly over-enhanced. The result of RetinexDIP contains noticeable artifacts. Other methods yield graceful results. Please zoom in to see details.

Fig. 8: Comparison 2. ALSM and LIME achieve over-exposed results. There are evident artifacts in the results of KIND++ and RetinexDIP. EnlightenGAN, NRMOE, ZeroDCE, and SCL-LLE produce results with varying degrees of color distortion. DFE, PnpRetinex, PLME, and the proposed method generate visually-pleasing results. Please zoom in to see details.
The three ratio-type contrasts are defined as follows: * Weber contrast \[\mathbf{C_{Weber}}=\frac{\mathbf{I}-\mathbf{I}_{S}}{\mathbf{I}_{S}}\] (9) * Michelson contrast \[\mathbf{C}_{Michelson}=\frac{\mathbf{I}_{\max}-\mathbf{I}_{\min}}{\mathbf{I}_{ \max}+\mathbf{I}_{\min}}\] (10) * RMS contrast \[\mathbf{C}_{RMS}=\sqrt{\frac{1}{M_{\Omega}}\sum_{\Omega}\left(\frac{\mathbf{I }-\bar{\mathbf{I}}}{\bar{\mathbf{I}}}\right)^{2}}\] (11) In Eqs. (9) to (11), \(\mathbf{I}\) denotes the image intensity, \(\mathbf{I}_{S}\) is the local background intensity, \(\mathbf{I}_{\max}\) and \(\mathbf{I}_{\min}\) are the highest and lowest intensities in a local image patch, respectively, \(\bar{\mathbf{I}}\) is the mean intensity of the local area \(\Omega\), and \(M_{\Omega}\) is the number of pixels in \(\Omega\). As far as we know, directly applying the three contrasts to enhance unevenly-lit images is less reported. In this paper, we treat these three contrast functions as three different retinal models to generate the contrast images. By Fig. 10: Comparison 4. DFE, PapRetinex, and the proposed method produce more visually-appealing results than the other methods. However, the result of DFE suffers from slight halo artifacts. Please zoom in to see details. Fig. 9: Comparison 3. DFE, MLLP, PapRetinex, PLME, RetinexDIP, and the proposed method produce better results than the other methods in terms of fidelity. Please zoom in to see details. testing a number of images, we find that the Weber contrast produces better-enhanced results in terms of fidelity than the other two contrasts and approaches comparable performance to Eq. (3). However, the Weber contrast sometimes tends to lose the details in bright regions. A representative example is shown in Fig. 16. It might be argued that: 1) Eq. (9) can be regarded as a simplified version of Eq. (3). 2) Eq. (3) and Eq. (9) are capable of estimating the contrast in natural images more accurately than the other two contrasts. ### _Post-processing for noise suppression_ Noise in dark regions is easily amplified after enhancement. Therefore, denoising is sometimes required in some applications. Given that noise in different regions of an unevenly-lit image is often amplified to different degrees, we propose a fusion scheme to suppress noise in dark regions while preventing details in bright regions from being over-smoothed, given by \[\mathbf{T}_{F}=\mathbf{L}_{NR}\mathbf{T}_{E}+(1-\mathbf{L}_{NR})\,\mathbf{T}_ {DE} \tag{12}\] where \(\mathbf{T}_{F}\) is the final recomposed result, \(\mathbf{L}_{NR}\) is the residual image normalized to [0,1], and \(\mathbf{T}_{DE}\) is the denoised result of \(\mathbf{T}_{E}\). This paper adopts the widely used denoising method BM3D [60] to achieve \(\mathbf{T}_{DE}\). Fig. 17 exhibits an example of noise suppression. Compared to the images in (b) and (c), the result in (d) shows that fine details in the bright regions are nicely preserved, while noise in the dark regions is effectively smoothed out. We would like to mention that the denoising in Eq.(12) can be appended as a post-processing step to any enhancement method for unevenly-lit images. Fig. 11: Comparison 5. DFE produces noticeable halos around abrupt bright edges, see the marked region. PnpRetinex, PLME, SCL-LLE, RetinexDIP, and the proposed method produce more natural-looking results than other methods. Fig. 12: Comparison 6. The results of ALSM, LIME, and PLME suffer from varying degrees of overexposure. ZeroDCE,SCL-LLE, and NRMOE produce rather pale images. 
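To make the three classical contrasts in Eqs. (9) to (11) concrete, the sketch below evaluates them over a sliding local window with standard filters. The window size, the small stabilizing constant, and the per-pixel normalization in the RMS case are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def local_contrasts(I, win=15, eps=1e-6):
    """Weber, Michelson and RMS contrasts in a win x win neighbourhood."""
    I = I.astype(np.float64)
    I_s = uniform_filter(I, size=win)            # local background / local mean intensity
    I_max = maximum_filter(I, size=win)
    I_min = minimum_filter(I, size=win)

    weber = (I - I_s) / (I_s + eps)                           # Eq. (9)
    michelson = (I_max - I_min) / (I_max + I_min + eps)       # Eq. (10)
    # Eq. (11), approximated: each pixel is normalized by its own local mean
    rms = np.sqrt(uniform_filter(((I - I_s) / (I_s + eps)) ** 2, size=win))
    return weber, michelson, rms

# Example on a synthetic unevenly-lit image in [0, 1]
x = np.linspace(0.05, 1.0, 256)
img = np.outer(x, x)
weber, michelson, rms = local_contrasts(img)
```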
Fig. 12 (continued): The results of MLLP, EnlightenGAN and KIND++ contain noticeable artifacts. DFE, PnpRetinex, RetinexDIP, and the proposed method yield similar results.

We must emphasize that noise suppression usually results in a loss of detail in shadows. Suppressing noise while preserving details remains an ongoing research topic in image enhancement. In this paper, we focus on improving contrasts rather than suppressing noise to avoid distraction. Introducing noise estimation methods to determine whether to suppress noise could be part of our future work.

### _Limitations_

It will be of interest to improve the proposed method in our future work in the following aspects: 1) Our method is not good at enhancing over-exposed images because the residual image is adjusted by a fixed gamma curve. The gamma function cannot bring out details in both dark and bright regions. A possible way to alleviate this issue is to train a CNN to fit light enhancement curves under various lighting conditions. 2) The proposed method does not take into account more characteristics of the HVS, such as frequency selectivity and directional selectivity [59]. Incorporating more visual mechanisms related to natural image analysis into our model may help to improve the fidelity of the enhanced results.

## V Conclusion

In this paper, we have proposed a biological vision based exploratory data model to enhance images taken under unevenly-lit conditions. The proposed model decomposes the input image into its contrast image and residual image. The perceptually important details in the scene are preserved in the contrast image, while the lighting variations are retained in the residual image. The enhanced result can be achieved by manipulating the two images and recombining them. The major advantage of the proposed method is its simplicity and effectiveness. Unlike existing enhancement methods based on physical lighting models or deep-learning techniques, the proposed method does not require any explicit assumptions and prior knowledge of the natural scenes, nor any learning procedures. Despite its simplicity, experimental results demonstrate that the proposed method is comparable to several state-of-the-art methods.

Fig. 13: Comparison 1 of results from SICE.

Fig. 14: Comparison 2 of results from SICE.
2304.01627
Self-Supervised Image Denoising for Real-World Images with Context-aware Transformer
In recent years, the development of deep learning has been pushing image denoising to a new level. Among them, self-supervised denoising is increasingly popular because it does not require any prior knowledge. Most of the existing self-supervised methods are based on convolutional neural networks (CNN), which are restricted by the locality of the receptive field and would cause color shifts or textures loss. In this paper, we propose a novel Denoise Transformer for real-world image denoising, which is mainly constructed with Context-aware Denoise Transformer (CADT) units and Secondary Noise Extractor (SNE) block. CADT is designed as a dual-branch structure, where the global branch uses a window-based Transformer encoder to extract the global information, while the local branch focuses on the extraction of local features with small receptive field. By incorporating CADT as basic components, we build a hierarchical network to directly learn the noise distribution information through residual learning and obtain the first stage denoised output. Then, we design SNE in low computation for secondary global noise extraction. Finally the blind spots are collected from the Denoise Transformer output and reconstructed, forming the final denoised image. Extensive experiments on the real-world SIDD benchmark achieve 50.62/0.990 for PSNR/SSIM, which is competitive with the current state-of-the-art method and only 0.17/0.001 lower. Visual comparisons on public sRGB, Raw-RGB and greyscale datasets prove that our proposed Denoise Transformer has a competitive performance, especially on blurred textures and low-light images, without using additional knowledge, e.g., noise level or noise type, regarding the underlying unknown noise.
Dan Zhang, Fangfang Zhou
2023-04-04T08:30:50Z
http://arxiv.org/abs/2304.01627v1
# Self-Supervised Image Denoising for Real-World Images with Context-aware Transformer ###### Abstract In recent years, the development of deep learning has been pushing image denoising to a new level. Among them, self-supervised denoising is increasingly popular because it does not require any prior knowledge. Most of the existing self-supervised methods are based on convolutional neural networks (CNN), which are restricted by the locality of the receptive field and would cause color shifts or textures loss. In this paper, we propose a novel Denoise Transformer for real-world image denoising, which is mainly constructed with Context-aware Denoise Transformer (CADT) units and Secondary Noise Extractor (SNE) block. CADT is designed as a dual-branch structure, where the global branch uses a window-based Transformer encoder to extract the global information, while the local branch focuses on the extraction of local features with small receptive field. By incorporating CADT as basic components, we build a hierarchical network to directly learn the noise distribution information through residual learning and obtain the first stage denoised output. Then, we design SNE in low computation for secondary global noise extraction. Finally the blind spots are collected from the Denoise Transformer output and reconstructed, forming the final denoised image. Extensive experiments on the real-world SIDD benchmark achieve 50.62/0.990 for PSNR/SSIM, which is competitive with the current state-of-the-art method and only 0.17/0.001 lower. Visual comparisons on public sRGB, Raw-RGB and greyscale datasets prove that our proposed Denoise Transformer has a competitive performance, especially on blurred textures and low-light images, without using additional knowledge, e.g., noise level or noise type, regarding the underlying unknown noise. Image denoising, self-supervised, real-world, Transformer, dual-branch. + Footnote †: preprint: 25 January 2023, accepted 3 February 2023, date of publication 10 February 2023, date of current version 15 February 2023. ## I Introduction During the image acquisition process of image sensors, CCD and CMOS, various noises are introduced due to the influence of sensor material properties, working environment, electronic components and circuit structure.In addition, due to the imperfection of transmission media and recording equipment, digital images are often attacked by various noises. There are basically four types of common noise in images: Gaussian noise, Poisson noise, multiplicative noise, and salt and pepper noise. Image denoising is an inevitable step in image processing, and its denoising effect has a huge impact on the subsequent image processing process. Traditional image denoising algorithms [1, 2] are slow and less robust. With the development of deep learning, image denoising algorithms have made great progress. Although some progress has been made in traditional methods [3], supervised denoising models [4, 5, 6, 7] have relatively better denoising effects on public datasets. However, supervised image denoising requires noisy-clean data pairs, which are very difficult to obtain in practical applications. The most common approach is to add Additive White Gaussian Noise (AWGN) or other simulated real-world noise to a clean image and artificially synthesise a noisy image to form noisy-clean pairs [4, 7, 8, 9, 10]. However, there is an unavoidable gap between the noise synthesised by noise modelling and the real-world noise. 
Therefore, the denoising performance of this synthesised type of denoising model will be greatly reduced when denoising real-world images. Under these circumstances, many self-supervised training
2309.01803
Importance of overnight parameters to predict Sea Breeze on Long Island
The sea breeze is a phenomenon frequently impacting Long Island, New York, especially during the spring and early summer, when land surface temperatures can exceed ocean temperatures considerably. The sea breeze influences daily weather conditions by causing a shift in wind direction and speed, limiting the maximum temperature, and occasionally serving as a trigger for precipitation and thunderstorms. Advance prediction of the presence or absence of the sea breeze for a certain location on a given day would therefore be beneficial to weather forecasters. To forecast sea breeze occurrence based on the previous night's weather conditions, we used a novel algorithm called the $D$-Basis. We analyzed sea breeze data from a recent four year period (2017-2020) at a single weather station several miles inland from the coast. High or constant station pressure, high or constant dew point, and onshore wind from the previous night were found to be strong predictors of sea breeze formation the following day. The accuracy of the prediction was around 74\% for June 2020. Unlike other prediction methods which involve the comparison of sea surface and land surface temperatures in near real time, our prediction method is based on the parameters from the prior night, allowing it to potentially aid in advanced forecasting of the sea breeze.
Kira Adaricheva, Jase E. Bernhardt, Wenxin Liu, Briana Schmidt
2023-09-04T20:39:26Z
http://arxiv.org/abs/2309.01803v1
# Importance of overnight parameters to predict Sea Breeze on Long Island ###### Abstract The sea breeze is a phenomenon frequently impacting Long Island, New York, especially during the spring and early summer, when land surface temperatures can exceed ocean temperatures considerably. The sea breeze influences daily weather conditions by causing a shift in wind direction and speed, limiting the maximum temperature, and occasionally serving as a trigger for precipitation and thunderstorms. Advance prediction of the presence or absence of the sea breeze for a certain location on a given day would therefore be beneficial to weather forecasters. To forecast sea breeze occurrence based on the previous night's weather conditions, we used a novel algorithm called the \(D\)-Basis. We analyzed sea breeze data from a recent four year period (2017-2020) at a single weather station several miles inland from the coast. High or constant station pressure, high or constant dew point, and onshore wind from the previous night were found to be strong predictors of sea breeze formation the following day. The accuracy of the prediction was around 74% for June 2020. Unlike other prediction methods which involve the comparison of sea surface and land surface temperatures in near real time, our prediction method is based on the parameters from the prior night, allowing it to potentially aid in advanced forecasting of the sea breeze. 1 Footnote 1: Key Words: sea breeze, weather forecasting, Long Island, \(D\)-basis algorithm, association rules analysis ###### Contents * 1 Introduction * 2 Data * 3 Detailing the definition of sea breeze * 4 Approach * 4.1 Parameter Selection * 4.2 \(D\)-basis algorithm and ranking of attributes by the relevance * 4.3 Code development for the data conversion * 5 Data conversion * 6 Results * 6.1 Initial stage of the project * 6.2 Second stage of the project * 6.3 Computation of the Forecasting Value * 6.4 Computation of Forecasting Values on the random subset of the data * 6.5 Comparison with synthetic data Conclusions and Discussion * A Appendix: The description of the \(D\)-basis algorithm * A.1 \(D\)-basis algorithm * A.2 Ranking attributes relevant to a target attribute * A.3 Formula for the parameter of relevance ## Impact Statement The sea breeze is a phenomenon frequently influencing Long Island, especially during warm season months, due to differences in land and ocean surface temperatures. Local weather impacts from the sea breeze can include a wind shift and decreasing air temperature. Given its importance to Long Island's climatology, a tool to forecast the presence or absence of the sea breeze in near real time is desirable. To do so, we used a novel algorithm, the \(D\)-Basis, to predict sea breeze occurrence during June 2017-2020, based on weather and climate data from the previous 24 hours. The algorithm was able to predict the sea breeze with relatively high accuracy and can be adapted as a tool for operational forecasters in the future. ## 1 Introduction Due to its location situated between the Atlantic Ocean and Long Island Sound, a sea breeze is common on Long Island during warm season months. The sea breeze originating from the south shore of Long Island, which faces the Atlantic Ocean, can be especially potent, moving several miles inland and sometimes reducing the near-surface air temperature by 5-10 degrees Celsius, see Novak and Colle 2006. 
For example, Colle, Olson, and Tongue 2003 describes a representative event occurring on 7 June 2001, in which a sea breeze boundary moved into the middle of Long Island by the late afternoon, with a temperature gradient of 3-5 degrees Celsius across the immediate coastline. Further, that study objectively cataloged Long Island sea breeze instances during the 2000 and 2001 warm seasons, with an event required to have an approximately 6 degree Celsius temperature gradient between a near offshore buoy and a surface observing station near the coast, along with light winds during the morning. That climatology indicated that sea breeze events were by far most common in the month of June during those years, owing to a large difference between land surface and ocean temperatures typical during the late spring and early summer. More recently, McCabe and Freedman 2023 objectively identified Long Island sea breeze events between 2010 and 2020 using a variety of surface, near-surface, and lower atmospheric weather observations. Those variables included near-surface temperature and moisture, sea surface temperature, station pressure, and wind speed and direction at both 10 meters and 100 meters above the surface. That study determined an average of 32 sea breeze days annually, with a maximum in July and the surrounding warm season months. Given the high frequency of the sea breeze on the south shore of Long Island during the early portion of the warm season, and its ability to reduce air temperature and increase wind speed in the lower atmosphere, improved advance prediction of this phenomenon could be advantageous to weather forecasters. The impact of the sea breeze is further magnified by Long Island's high population density, so the ability to predict for a given day whether the sea breeze will reach a certain point inland would help with projecting energy demand, outdoor work and recreation conditions, and other societal impacts. Thus, the goal of this study is to apply the \(D\)-basis algorithm to the problem of short term sea breeze prediction for the south shore of Long Island. We hypothesize that by using antecedent conditions the night before a potential sea breeze event, the \(D\)-basis algorithm can forecast whether or not the sea breeze will reach a certain weather station on Long Island with a high degree of accuracy. An important advance of this project, compared to earlier applications of the \(D\)-basis or other analysis tools involving association rules, is the development of a new methodology for processing the retrieved rules. Given the multiple runs of the algorithm and the ranking of the attributes associated with the particular outcome (in our case: either a sea breeze or non-sea breeze day), we aggregate the results and produce a unique numerical value, which determines the forecast.

## 2 Data

To objectively determine the presence of the sea breeze, 5-minute near-surface weather observations were acquired from the Hofstra University WeatherSTEM network in Nassau County, New York, specifically the station at Hofstra's Soccer Stadium (Figures 1 and 2), for the months of June 2017, June 2018, June 2019, and June 2020. Weather data at the station are recorded every minute; however, some weather observations, such as wind speed and direction, are highly variable or occasionally unavailable at that temporal resolution. Thus, all one-minute observations were averaged into 5-minute intervals so that time intervals with more readings would not weigh unequally in later groupings.
Of the weather data considered, temperature and wind direction were used to objectively determine whether a particular day was a sea breeze event, while the remaining variables were included for later analysis as potential predictors in the algorithm. A candidate day was classified as a sea breeze day if the maximum temperature occurred between 10 AM and 3:30 PM local time and was followed by a period with at least 70% of the 5-minute wind direction observations coming from an onshore direction (i.e., between 70 and 250 degrees, perpendicular to the south shore of Long Island) for the subsequent two hours. Those criteria were chosen based on the previous literature showing that the sea breeze boundary frequently reaches Hofstra University's location in the middle of Long Island by late afternoon. Further, once the boundary passed through Hofstra, it could be expected that the wind would come from a direction mainly perpendicular to the shore. 120 days (four months) of data were checked for the occurrence of the sea breeze, with 54 days being classified as sea breeze days, while eight were so-called 'very moist' days, as discussed later in this section, and the remaining 58 days were categorized as non-sea breeze days. Some days were automatically categorized as non-sea breeze days, based on the Spatial Synoptic Classification (SSC), an air mass climatology index, see Sheridan 2002.

The SSC analyzes surface observations to classify the _weather type_ of a given day for a weather station. The nearest weather station to Hofstra University for which the SSC is available, JFK Airport in Queens, New York City, was used, with certain _weather type_ days automatically being considered non-sea breeze days. Specifically, days with an SSC number of "66" (Moist Tropical Plus) or "67" (Moist Tropical Double Plus) were classified as _very moist days_ and were not eligible to be categorized as sea breeze. This was due to the fact that when very tropical air masses are present over Long Island, they are almost always accompanied by deep synoptic flow from the south or southwest, the same direction as the sea breeze. Thus, it is rare for a meaningful sea breeze boundary to develop on those days, since the larger scale flow is already coming from a similar direction.

Figure 3: Raw data.

Figure 4: 5-minute interval data.

After the classification of _very moist days_ was completed, the remaining sea breeze and non-sea breeze days were investigated to determine whether the \(D\)-basis algorithm and chosen predictors could successfully forecast the presence or absence of the sea breeze using data from the preceding night.

## 3 Detailing the definition of sea breeze

We began with the aforementioned definition of a sea breeze day being a day where _the highest temperature occurs between 10 am and 3:30 pm_ in locations near the coast. The goal of the algorithm development was to specify which other weather variables could provide advanced notice of an impending sea breeze day. Doing so could allow those weather parameters to be leveraged using the computer algorithm to forecast the sea breeze in advance. Indeed, while there is no universally accepted numerical definition of the sea breeze for a given locality, having a binary definition (i.e., yes sea breeze or no sea breeze) was necessary to test the algorithm developed.
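As a concrete illustration of this binary day classification, the sketch below applies the two criteria (timing of the daily maximum temperature and the 70% onshore-wind requirement for the following two hours) to one day of 5-minute observations. The function, variable names, and synthetic data are hypothetical, and the SSC screening of very moist days is omitted.

```python
import numpy as np

def is_sea_breeze_day(times_h, temp_f, wind_dir_deg):
    """times_h: local hour of each 5-minute observation; temp_f: temperature;
    wind_dir_deg: wind direction in degrees. SSC 'very moist' screening not included."""
    i_max = int(np.argmax(temp_f))
    t_max = times_h[i_max]
    if not (10.0 <= t_max <= 15.5):                          # max temperature between 10 AM and 3:30 PM
        return False
    after = (times_h > t_max) & (times_h <= t_max + 2.0)     # the subsequent two hours
    wind_after = wind_dir_deg[after]
    if wind_after.size == 0:
        return False
    onshore = (wind_after >= 70) & (wind_after <= 250)       # onshore sector for the south shore
    return bool(onshore.mean() >= 0.70)                      # at least 70% of 5-minute readings onshore

# One synthetic day of 5-minute data (hours 0 to 24)
times = np.arange(0, 24, 5 / 60)
temps = 65 + 15 * np.exp(-((times - 13.0) / 3.0) ** 2)       # afternoon peak near 1 PM
winds = np.where(times > 13.0, 180.0, 330.0)                 # wind shifts onshore after the peak
print(is_sea_breeze_day(times, temps, winds))                # True for this synthetic day
```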
Moreover, it is easier to remove false positives than to account for false negatives in developing the algorithm, so the definition of the sea breeze was slightly expanded after initial testing to allow for more days to be classified as potential sea breeze days. For example, the high temperature cutoff was initially set at 3 PM, but later moved back slightly to 3:30 PM. One method used to filter out false positives was the implementation of a wind direction requirement into the sea breeze definition. Since the Atlantic Ocean is roughly due south of Long Island, the five-minute averaged wind direction in the two hours following the maximum temperature being recorded was required to be from the southern half of the wind rose (i.e., between 70 and 250 degrees) for at least 70% of the time. That requirement helped to eliminate false positives stemming from instances such as a strong cold frontal passage, which could result in the high temperature occurring much earlier than normal during the day, but due to a reason other than the sea breeze. On the other hand, the definition should not be so specific to a location or time of year that it would not work as accurately in other settings. One way to mitigate against that issue was to use changes in weather variables instead of absolute values, though the use of the latter was still necessary in relation to air pressure. Thus, the following was used as the basic definition for a sea breeze:

**Definition**: Sea Breeze

1. Occurs in a coastal area.
2. Occurs in the warm season, when the land temperature is sufficiently warmer than the water temperature.
3. Occurs in the absence of larger-scale weather phenomena.
4. The highest near-surface temperature of the day occurs between 10 AM and 3:30 PM (local time; not precisely generalizable to all locations).
5. For two hours after the highest temperature, the wind direction must be coming from the direction of the body of water for at least 70% of the time (i.e., roughly perpendicular to shore).

## 4 Approach

### Parameter Selection

According to a formula presented in Biggs and Graves 1962, the lake breeze forecasting index \(\epsilon\) is proportional to the square of the near-surface wind speed \(|U|\) and inversely proportional to the land-sea temperature contrast \(\Delta T\):

\[\epsilon=\frac{|U|^{2}}{C_{p}\Delta T},\]

where \(C_{p}\) is the specific heat of dry air at constant pressure. \(\Delta T=T_{land}-T_{sea}\) measures the temperature difference, with \(T_{land}\) taken at a point sufficiently far inland. Many other forecasting mechanisms are reported in Miller et al. 2003, where the wind and temperature measurements were of central importance. Most of their forecasting relied on morning measurements to predict the sea breeze happening on the same day. In our study, we relied solely on measurements produced by a Hofstra University weather station, located considerably inland compared to the south shore of Long Island, and not on sea surface temperature observations. We compared the data of one of the years with similar measurements of another weather station, near Wantagh, New York, much closer to the shore, but the difference was insignificant. The prediction method we developed involved the examination of station pressure, dew point, and wind direction during the prior night, from 7 pm to 7 am on the morning of the predicted day.
Initial focus was on the dynamic of pressure and dew point: rising, dropping or constant, and at later stage of the project the absolute values of pressure were added to the list of attributes. According to Laird and Kristovich 2004 three quarters of sea breeze events happened with cross shore component \(|U_{x}|\) being less than 2 m/sec, thus, wind speed might not have considerable effect on the sea breeze. This attribute was not considered in current study, especially given the relative distance of the weather station from the shore. Pressure, pressure dynamic, dew point dynamic and wind direction were chosen as predictor variables based on the weather variables most closely associated with sea breeze days for Long Island. It was also important to consider the development of those variables overnight, prior to sea breeze initiation, and we used 2-hour intervals when observing the dynamics of changes in parameters. Those overnight parameters, one for each 2-hour interval between 7pm and 7am prior to day of prediction, were classified as follows: * pressure rising or dropping during the interval; * dew point rising or dropping during the interval; * onshore wind direction (70-250 degree) dominating (\(>60\%\)) during the interval; * station pressure above or below the climatological average sea-level pressure of 29.92 inches mercury (1013 mb) during the interval (given Long Island's elevation very close to sea level). Generally, the sea breeze occurs in June in the absence of a strong synoptic low pressure system (pressure not dropping) or cold frontal passage (dew point not dropping). On the mesoscale, persistence was also selected as an indicator for sea breeze through the onshore wind requirement. At the synoptic scale, the SSC weather type was used to remove days where a sea breeze would not be expected to occur (i.e., due to persistent onshore caused by a synoptic weather system). ### \(D\)-basis algorithm and ranking of attributes by the relevance In this work we explored an approach in weather forecasting leveraging the discovery of association rules in the binary data. One of recent explorations of this sort was done in Coulibaly, Kamsu-Foguem, and Tangara 2021. The \(D\)-basis is a new algorithm described in Adaricheva and Nation 2017 that discovers the _implications_\(S\to d\) in a table with entries 0 and 1. This algorithm belongs to the family of analytic tools based on _association rules_ and/or _implications_, but it also involves secondary statistical analysis of the retrieved rules. In the current study, \(S\) is a subset of attributes/columns (weather observation attributes) and \(d\) is another column (e.g., an indicator of sea breeze day). Note that non-binary attributes can be expressed as a combination of binary ones. For example, attributes with five values can be given by five binary columns. Attributes that have values of real numbers within some interval can be discretized by splitting the range into a few sub-intervals and assigning a new attribute to each subinterval of values. The rows of the table represent the days of the observation. Implications are association rules that hold in all rows of the table. In our context, for every day/row, if all attributes in set \(S\) occur (marked by 1 in that row), the attribute \(d\) occurs as well. The algorithm is controlled by several parameters, which filter the retrieved rules \(S\to d\) with the fixed target attribute \(d\). The most important are the _minimal support_ and _row deletion_. 
We say that an observation (a row of the table) _validates_ rule \(S\to d\), when all attributes in \(S\) as well as \(d\) are present in the observation, that is, the entries in the row are marked by 1, corresponding to columns in \(S\) and \(d\). The _support of the rule_ is the number of observations/rows where the rule is validated. When the test runs with parameter of _minimum support_\(=k\), only the rules with the support of at least \(k\) are returned. Further discussion of the parameters of the test is given in appendix section A.2. Using the sets of rules retrieved on the two runs of the algorithm: one on the target attribute \(d\), and another on its negation \(\neg d\), which is not necessarily present in the data, so it needs to be created - permits computation of a real non-negative number assigned to each attribute \(a\) different from \(d\). It is called the _relevance_ of \(a\) with respect to \(d\) and denoted \(rel_{d}(a)\). The higher \(rel_{d}(a)\), the more frequently attribute \(a\) appears in set \(S\) for rules \(S\to d\) compared to rules \(S\to\neg d\). All attributes different from \(d\), therefore, may be ranked by the relevance with respect to \(d\), and our method would investigate the attributes with highest ranks with respect to \(d=\)_sea breeze day_ or \(d=\)_non-sea breeze day_, which we call _normal day_ for the rest of the paper. Note that the relevance is computed within the sets of retrieved rules, one for target \(d\) and another for target \(\neg d\). Changing the parameters of the tests will change the sets of rules, which may change the relevance. For this reason we run multiple tests, then aggregate the results. More detailed description of the \(D\)-basis algorithm is given in appendix section A.1, and the definition and computation of the relevance in section A.3. ### Code development for the data conversion The \(D\)-basis code was developed in github [https://gitlab.com/npar/dbasis](https://gitlab.com/npar/dbasis) and was first made publicly available at the time of publication in Nation et al. 2021. The weather data conversion for the \(D\)-basis entry was performed in the R programming language following the rules (3)-(5) of the Sea Breeze Definition in section 3. The files related to this project are located in [https://gitlab.com/npar/seabreeze](https://gitlab.com/npar/seabreeze) ## 5 Data conversion Since the \(D\)-basis algorithm acts on binary data, the weather variables collected from the Hofstra weather station were converted into binary form. Each day in the converted data was represented by a row of the table, while the columns represented a weather attribute which was marked by 1 when it was observed on a particular day/row, and by 0 when the attribute was not observed. Several attributes were the target attributes representing whether or not a sea breeze was observed, or if the day was excluded due to the synoptic climatology (i.e., a very moist day): 71: Next Day Sea Breeze (1=SB, 0=Normal or Very Moist) 72: Complement of 71 (1=Normal or Very Moist, 0=SB) 73: Next Day Normal Day (1=Normal, 0=SB or Very Moist) 74: Complement of 73 (1=SB or Very Moist, 0=Normal) For example, the first row of the table has entry 1 in column 71, because on June 1, 2017 a sea breeze was observed (and it also had 0 in column 72 and 73 and 1 in column 74). Note that "Next Day" applies to June 1, because the observation of some weather attributes were made on May 31, 2017. 
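To fix ideas, the toy sketch below shows how a candidate rule \(S\to d\) is validated and its support counted on a small 0/1 table of the kind just described. The miniature table and column indices are purely illustrative and are not the study's 120-day dataset.

```python
import numpy as np

# Toy binary table: rows are days, columns are attributes; the last column plays the role of d
table = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
])

def support(table, S, d):
    """Number of rows in which every attribute in S and the target d all equal 1."""
    rows_with_S = table[:, S].all(axis=1)
    return int((rows_with_S & (table[:, d] == 1)).sum())

def is_implication(table, S, d):
    """The rule S -> d holds in all rows: whenever every attribute in S occurs, d occurs too."""
    rows_with_S = table[:, S].all(axis=1)
    return bool(np.all(table[rows_with_S, d] == 1))

S, d = [0, 2], 3
print(support(table, S, d))         # 3: the rule is validated in three rows
print(is_implication(table, S, d))  # True: no row contains all of S without d
```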
The rows in the converted data were sorted as follows: Rows 1-30: May 31 2017-June 29 2017 Rows 31-60: May 31 2018-June 29 2018 Rows 61-90: May 31 2019-June 29 2019 Rows 91-120: May 31 2020-June 29 2020 The first 6 attributes given in Table 1 were possible descriptors of the day prior to prediction, based on the SSC categorization of that day. For example, the first row had entry 1 in column 2 and 0 entry in columns 1 and 3-6, because May 31, 2017, was classified as a Normal Day (i.e., no sea breeze). The attributes 7-16, as in Table 2, measured weather parameters for the time interval 7-9 pm the previous evening, with entry value 1=yes or 0=no. The pressure trend and dew point trend of each 2-hour interval were determined simply by subtracting the end value from the start value. For pressure a difference of 0.00 would designate it as "consistent", while for dew point, a change was classified to have occurred when the difference was in interval (-1,1). Pressure above 29.92 in Hg was categorized when all 5-minute intervals during the 2-hour period were above or on that climatological threshold; otherwise, when any 5-minute interval during the 2-hour period is dropped below the threshold, it is determined as "Pressure below 29.92 in Hg." Similar attributes, in the same sequence of 10, described the measurements during the following two-hour time intervals: Columns 17-26 between 9 - 11 pm, Columns 27-36 between 11 pm - 1 am, Columns 37-46 between 1 - 3 am, Columns 47-56 between 3-5 am, and Columns 57-66 between 5-7 am. Columns 67-70, as described in Table 3, were added to further confirm the initial hypothesis that high pressure, together with non-decreasing pressure, may be a significant factor in sea breeze prediction, given that a sea breeze is likely to form in the absence of strong fronts or cyclones, which may be indicated by low or falling pressure. Hence, some combinations of the six 2-hour intervals were introduced. If 4 or more intervals possessed non-falling pressure, column 67 of that day was marked as 1. If 4 or more intervals had no time instance with pressure lower than 29.92 inches Hg, column 68 was marked as 1. These observations associated with anticyclonic conditions were selected for inclusion because they imply weak flow at the surface and aloft. Such conditions have been shown to be a key indicator of sea breezes along the US East Coast in previous studies, such as Hughes and Veron 2018 and Cetola 1997. 
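The per-interval conversion rules described above (pressure and dew-point trends, the onshore-wind fraction, and the 29.92 in Hg threshold), together with the aggregate flags that become columns 67 and 68, can be sketched as follows. The function and key names are hypothetical, and the handling of a dew-point change of exactly 1 degree F is an assumed convention.

```python
import numpy as np

STD_PRESSURE = 29.92  # in Hg, climatological sea-level threshold used in the study

def interval_attributes(pressure, dew_point, wind_dir):
    """Binary attributes for one 2-hour block of 5-minute observations."""
    d_press = pressure[-1] - pressure[0]
    d_dew = dew_point[-1] - dew_point[0]
    onshore_frac = ((wind_dir >= 70) & (wind_dir <= 250)).mean()
    return {
        "pressure_rising":   d_press > 0.0,
        "pressure_falling":  d_press < 0.0,
        "pressure_constant": d_press == 0.0,
        "dew_rising":        d_dew > 1.0,                   # increased by more than 1 degree F
        "dew_falling":       d_dew < -1.0,                  # decreased by more than 1 degree F
        "dew_constant":      abs(d_dew) <= 1.0,             # boundary counted as constant (assumption)
        "wind_onshore":      onshore_frac >= 0.60,          # onshore (70-250 deg) dominating the interval
        "wind_offshore":     onshore_frac < 0.60,
        "high_pressure":     bool(np.all(pressure >= STD_PRESSURE)),  # every 5-min reading at/above threshold
        "low_pressure":      bool(np.any(pressure < STD_PRESSURE)),
    }

def overnight_flags(intervals):
    """Columns 67 and 68, aggregated over the six overnight 2-hour intervals."""
    col67 = sum(a["pressure_rising"] or a["pressure_constant"] for a in intervals) >= 4
    col68 = sum(a["high_pressure"] for a in intervals) >= 4
    return col67, col68
```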
\begin{table} \begin{tabular}{|c|c|c|} \hline Column \# & Column description & Entry values \\ \hline 1 & Sea Breeze & 1=yes, 0=no \\ \hline 2 & Normal Day & 1=yes, 0=no \\ \hline 3 & Very Moist Days & 1=yes, 0=no \\ \hline 4 & SSC Dry Day & 1=yes, 0=no \\ \hline 5 & SSC Moist Day & 1=yes, 0=no \\ \hline 6 & SSC Transition Day & 1=yes, 0=no \\ \hline \end{tabular} \end{table} Table 1: Attributes for Day Categories \begin{table} \begin{tabular}{|c|c|c|} \hline Col \# & Column description & Details \\ \hline 7 & Pressure rising & Pressure increased by the end of interval \\ \hline 8 & Pressure falling & Pressure decreased by the end of interval \\ \hline 9 & Pressure consistent & No change in pressure \\ \hline 10 & Wind North & at least 60\% of the time interval \\ \hline 11 & Wind South & at least 60\% of the time interval \\ \hline 12 & Dew point rising & Dew point increased by more than 1 degree F \\ \hline 13 & Dew point falling & Dew point decreased by more than 1 degree F \\ \hline 14 & Dew point consistent & Dew point changed by less than 1 degree F \\ \hline 15 & High pressure & Pressure above 29.92 in Hg \\ \hline 16 & Low pressure & Pressure below 29.92 in Hg \\ \hline \end{tabular} \end{table} Table 2: Attributes of weather observations ## 6 Results ### Initial stage of the project The initial phase of this project was to conduct \(D\)-basis testing on 3 years of June data ranging from 2017-2019 to verify the effectiveness of the initial determinants in predicting the presence or absence of the sea breeze. Three attributes were tested: the change in pressure, the change in dew point, and the dominant wind direction. After computing the relevance, the variables that were most important for predicting a _sea breeze_ day were _pressure rising, winds from the south (i.e., on-shore), and dew point either rising or staying consistent_. The attributes that were important for predicting _normal_ days were _pressure falling, wind from the north (offshore), and dew point falling_. The following tables collect the results on each weather variable with respect to 6 time intervals of 2 hours between 7pm and 7am. Table 4 shows, for example, that all 6 time intervals are important for the pressure rising attribute because they are included in the top 17 (out of 58 total) attributes ranked by the relevance, when listed in order from highest to lower. The time intervals that are not in the top ranked for southerly wind are always earlier in the night. This implies that southerly winds in the later hours of the night prior to sea breeze development are more consistently important. The combination of dew point rising and dew point consistent for prediction of the sea breeze covered all time intervals. Results of predicting a normal day are in Table 5. The earlier hours of the prediction interval were more important for pressure and dew point, while the later hours were more important for the wind when predicting a normal day. ### Second stage of the project The strong separation of patterns of the winds from the south (north), pressure rising (falling) and dew point not falling (falling) when occurring prior to a sea breeze (normal) day served as the basis to formulate the prediction method in the second stage of the project. The main new features in the second phase of the project were: 1. adding absolute values of pressure above/below the climatological threshold of 29.92" Hg (one standard atmosphere), during six 2-hours intervals; 2. 
correcting wind direction, given that the shoreline is not in a perfect west to east orientation; 3. adding 30 more observation days for June 2020; \begin{table} \begin{tabular}{|c|c|c|} \hline Sea breeze forecast & Relevance & Rank \\ \hline Attributes: & \# of intervals & in the top \\ \hline Pressure rising & 6 & 17 \\ \hline Wind South & 6 & 24 \\ \hline Dew point rising & 3 & 22 \\ \hline Dew point consistent & 5 & 28 \\ \hline \end{tabular} \end{table} Table 4: Number of time intervals in the sea breeze forecast \begin{table} \begin{tabular}{|c|c|} \hline Col \# & Column description \\ \hline 67 & When \(>3\) of 6 intervals have rising or constant pressure \\ \hline 68 & When \(>3\) of 6 intervals have pressure above 29.92 inches Hg \\ \hline 69 & Pressure rising or high pressure \\ \hline 70 & Pressure falling or low pressure \\ \hline \end{tabular} \end{table} Table 3: High pressure attributes 4. running \(D\)-basis with minimum support = 6,8,11. The decision to include the absolute pressure as an additional parameter came after careful observation of the graphs of pressure behavior over 120 days. To examine the pressure trend for sea breeze days, one can observe the daily pressure charts. Each dot on the daily trend graph represents a 5 minute interval during the 9pm to 7am time frame, where the purple dashed line represent the 29.92" Hg standard that we later decided to use to determine whether each 2 hour interval is considered as _high pressure_, Figures 4(a) and 4(b). Two pressure trend diagrams are chosen to demonstrate the general trend for a sea breeze day versus a normal day. On Figure 4(a) there is an example of overnight pressure chart between June 25-26 in 2018, with June 26 being a sea breeze day. On Figure 4(b) there is an example of overnight pressure chart between June 20-21, 2018, with June 21 being a normal day. The normal day trend is sometimes seen slightly decreasing/flat or increasing, but rarely remains completely above the purple dashed line. June 2018 is a good example to account for the 29.92" Hg pressure standard. In the graph on Figure 6, each dot represents the average pressure of each day in June from 9pm-7am; the sea breeze days (red dots/circles) are mostly above the purple dash line, while the normal days (blue dots/squares) and very moist days (black dots/triangles) are mostly below the purple line with few exceptions. \begin{table} \begin{tabular}{|c|c|c|} \hline Normal day forecast & Relevance & Rank \\ \hline Attributes: & \# of intervals & in the top \\ \hline Pressure falling & 5 & 20 \\ \hline Wind North & 4 & 13 \\ \hline Dew point falling & 5 & 24 \\ \hline \end{tabular} \end{table} Table 5: Number of time intervals in the normal day forecast Figure 5: Patterns of pressure measurement between 9 pm and 7am As was observed, the two distinctive trends for normal days and sea breeze days deserve potentially more quantitative validation. Hence, we also created two more columns combining the absolute pressure values and the trend. To consolidate the 6 intervals, we created column 68 for when more than half of the 6 intervals have high pressure. Also to support both increasing and constant pressure, we created column 67 that returns 1 if at least 4 intervals have pressure rising or constant pressure. Lastly, column 69 is created as [67 OR 68] to consolidate the attributes of high pressure, pressure rising as a possible predictor of a sea breeze day. 
Similarly, column 70 for possible prediction of normal days was defined as [\(\neg\) 67 OR \(\neg\)68], so it would mark a day 1 when at least 4 intervals have low pressure or falling pressure on that day, see Table 3. In addition, a color code scheme for several groups of parameters was designed to better visually represent the relevance of the attributes. For each \(D\)-basis run, the top 15 attributes of the highest relevance with their relevance value listed on the side were included. Those attribute numbers were then classified into ten colors based on the same attribute for different time periods. The color blue showed attributes for pressure, green for wind direction, yellow for dew point, and orange for the absolute pressure. Figure 7 below indicates the mapping of the ten colors to column numbers with the time periods specified. Coincidentally, there are ten parameters for each 2-hour interval, which enables the column numbers for each parameters to be exactly ten numbers apart. Figure 6: June 2018 average pressure dot graph Figure 7: Color coding of attributes The color scheme described above was then applied to the top 15 attributes with highest relevance, computed at minimal support=6,8 and 11, with row removal. The left side of Figure 8 lists the top attributes, together with normalized values of relevance, when target is the Sea breeze, and the right side is the relevance when the the target is the Normal day. Similar results are shown for the tests without row removal on Figure 9. The prevalence of dark colors on the left and light colors on the right demonstrates a clearly observable pattern. For the normal day prediction on the right side, most columns are of lighter colors except for one dark blue and several medium yellow. Generally, the color coded view confirms the potential predictors for both sea breeze days and normal days. Figure 8: Color coded aggregated relevance from the test with row removal It is also observed that the columns 69 and 70 both appear in Figures 8 and 9. Column 69 even ranked in the top 3 relevant to sea breeze, when testing with minimal support of 11 in both graphs. On the other hand, column 70 appeared with the high rank for normal day, when testing with minimal support 8 and 11 in both graphs. Since the color-coded relevance charts clearly demonstrate the importance of the dark colored attributes in predicting sea breeze, we attempted to incorporate those numbers into our prediction model to achieve better prediction results. ### Computation of the Forecasting Value In this section we describe a novel approach to use leading attributes and their \(D\)-basis based relevances to compute a numerical Forecasting Value (FCV) which would predict either a sea breeze or normal day using measurements from the Hofstra weather station taken between 7pm on the previous day until 7am of the forecast day, which is a generalizable approach that could be developed for any coastal area by adjusting the coefficients of the formula. If we choose attributes \(a_{1},a_{2},\ldots,a_{s}\) as the most relevant for the prediction of sea breeze, and use relevances for these attributes when targeting \(d=\)'sea breeze day', we determine coefficients \(k_{1},k_{2},\ldots k_{s}\). Similarly, we can choose \(a_{s+1},a_{s+2},\ldots a_{2s}\) as the most relevant attributes in predicting a normal day, and their relevances when targeting \(d=\)'normal day' are converted into coefficients \(k_{s+1},k_{s+2},k_{2s}\). 
This yields a formula for computation of FCV: \[\text{FCV}(\overline{x})=k_{1}*x_{1}+k_{2}*x_{2}+\cdots+k_{s}*x_{s}-k_{s+1}*x _{s+1}-k_{s+2}*x_{s+2}-\cdots-k_{2s}*x_{2s}\] Figure 9: Color coded aggregated relevance from the test without row removal Here \(\overline{x}=\langle x_{1},x_{2},\ldots,x_{2s}\rangle\) is a vector describing attributes for day \(x\): \(x_{i}=1\), if attribute \(a_{i}\) is observed on the day \(x\), and \(x_{i}=0\) otherwise. We forecast that \(x\) is a sea breeze day, when \(\text{FCV}(\overline{x})>0\), and a normal day otherwise. What follows is the proposed technique of the choice of attributes and computation of the corresponding coefficients \(k_{1},\ldots,k_{2s}\). We chose to compute these coefficients based on relevance values given by the \(D\)-basis. We varied the minimum support in several runs of the \(D\)-basis: when minimal support is lower, more implications are returned; when minimal support is increased, there are less implications, but they are more valuable as they are manifested in more observations. The minimal support was chosen at levels 6, 8 and 11, thus, the highest support was corresponding to 10% of the days of the observation. We also found that running \(D\)-basis with the row removal, that is, partly removing the rows corresponding to the very moist days, provides better prediction values. After all attributes were ranked by the relevance in each run, the relevance values were normalized by setting the largest relevance value as the standard 1 so that all normalized relevance values were placed in intervals of values between 0 and 1. This way, a standard measure was available for comparison across \(D\)-basis runs of different target columns and minimal supports. Then, for each attribute, the normalized relevance values were averaged over the three runs with different minimum supports, where more weight was given to the values from the runs of larger minimum support. To demonstrate the effectiveness of the normalized relevance values, we composed a dot graph with their averaged values as the yellow line graph. All dots in Figure 10 are within the 0 to 1 range on the vertical axis. The yellow dots at the tops of peaks of line graph represent 6:8:11 ratio averages of the relevance numbers from three runs of algorithm with different minimal supports. More precisely, if \(Rel(6,a)\), \(Rel(8,a)\) and \(Rel(11,a)\) are normalized relevance values for attribute \(a\), when targeting, for example, the sea breeze day in the \(D\)-basis run with minimum support 6, 8 and 11, respectively, then the average value of relevance for this attribute would be computed with the following formula: \[rel(a)=\frac{6*Rel(6,a)+8*Rel(8,a)+11*Rel(11,a)}{25}\] These values would produce a new ranking of the attributes, one for sea breeze target, and another for the normal day target. Then subset \(a_{1},\ldots,a_{s}\) of the top \(s\) attributes was chosen from the sea breeze rank, and another subset \(a_{s+1},\ldots,a_{2s}\) was chosen from the rank of the normal day. The initial choice was \(s=10\), but we also considered smaller values of \(s\). Figure 10: 0-1 normalized relevance numbers for three minimal support levels As the result, the top 10 attributes with aggregated relevance values, associated with the sea breeze, are given in Figure 11. Per color code described in section 6.2, these all have either dark colors or medium colors, and half of attributes are either _high pressure_ or _constant dew point_ attributes. 
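To make the aggregation and forecasting steps above concrete, the following is a minimal Python sketch of the two formulas. The relevance dictionaries are hypothetical placeholders for values produced by the \(D\)-basis runs; the illustrative coefficients and attribute vector reproduce the worked June 4, 2017 example that follows in the next subsection.

```python
# Minimal sketch of the relevance aggregation (6:8:11 weighted average) and of FCV.
# The relevance dictionaries and attribute vector are hypothetical placeholders for
# values produced by D-basis runs at minimal supports 6, 8 and 11.

def aggregate_relevance(rel6, rel8, rel11):
    """Weighted 6:8:11 average of normalized relevance values per attribute."""
    attrs = set(rel6) | set(rel8) | set(rel11)
    return {a: (6 * rel6.get(a, 0.0) + 8 * rel8.get(a, 0.0) + 11 * rel11.get(a, 0.0)) / 25
            for a in attrs}

def forecast_value(x, sb_coeffs, normal_coeffs):
    """FCV(x): sea-breeze coefficients of attributes present minus normal-day coefficients."""
    plus = sum(k for a, k in sb_coeffs.items() if x.get(a, 0) == 1)
    minus = sum(k for a, k in normal_coeffs.items() if x.get(a, 0) == 1)
    return plus - minus

# Coefficients and attribute vector from the s = 5 worked example in the text.
sb_coeffs = {35: 0.663478, 44: 0.596512, 17: 0.460862, 34: 0.406937, 51: 0.393462}
normal_coeffs = {30: 0.781797, 60: 0.645949, 53: 0.527335, 8: 0.527244, 26: 0.448329}
day = {35: 1, 44: 0, 17: 1, 34: 1, 51: 0, 30: 1, 60: 1, 53: 0, 8: 0, 26: 0}
fcv = forecast_value(day, sb_coeffs, normal_coeffs)
print("sea breeze day" if fcv > 0 else "normal day", round(fcv, 6))
```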
The top 10 attributes associated with a normal day are all of non-dark colors, as seen in Figure 12. _Offshore wind_ appears 4 times, for time periods between 11 pm and 7 am, and two others are _low pressure_. To illustrate the computation, we switch to a smaller number of attributes, \(s=5\), and show how the formula for \(\text{FCV}(x)\) would be used to predict the sea breeze on June 4, 2017. The top 5 attributes \(a_{i}\) for the sea breeze and normal day are listed in Tables 6 and 7, respectively, and the corresponding coefficients \(k_{i}\) are listed in Figures 11 and 12. Vector \(\overline{x}\) corresponding to this entry of the data contains two sub-vectors: \(\overline{x_{sb}}\) for the components corresponding to attributes of the sea breeze \[\overline{x_{sb}}=\langle x_{35},x_{44},x_{17},x_{34},x_{51}\rangle=\langle 1,0,1,1,0\rangle\] and sub-vector \(\overline{x_{nor}}\) for the attributes of the normal day: \[\overline{x_{nor}}=\langle x_{30},x_{60},x_{53},x_{8},x_{26}\rangle=\langle 1,1,0,0,0\rangle\] \begin{table} \begin{tabular}{|c|c|c|} \hline \(a_{i}\) & attribute \# & Description \\ \hline \(a_{1}\) & 35 & high pressure 11 pm - 1 am \\ \hline \(a_{2}\) & 44 & dew point constant 1-3 am \\ \hline \(a_{3}\) & 17 & pressure rising 9 pm - 11 pm \\ \hline \(a_{4}\) & 34 & dew point constant 11 pm - 1 am \\ \hline \(a_{5}\) & 51 & onshore wind 3-5 am \\ \hline \end{tabular} \end{table} Table 6: Sea breeze attributes for an example in FCV computation Figure 11: Top 10 attributes for a Sea Breeze day Figure 12: Top 10 attributes for a Normal day Thus \(\overline{x}=\langle 1,0,1,1,0,1,1,0,0,0\rangle\). Observations on June 4, 2017 show the presence of three attributes associated with the sea breeze and two associated with normal day predictions, and the addition and subtraction of the corresponding coefficients \(k_{i}\) produce a positive number: \[\text{FCV}(\overline{x})=0.663478*1+0.596512*0+0.460862*1+0.406937*1+0.393462*0\] \[-0.781797*1-0.645949*1-0.527335*0-0.527244*0-0.448329*0\] \[=1.531277-1.427746=0.103531>0\] Thus, we predict this day to be a sea breeze day. It turns out to be a sea breeze day, indeed. Changing the number of top attributes used in prediction may result in various success rates. When using all the attributes, we correctly predict 78 days out of the 112 non-moist days. Since using all attributes gives the overall best result, we focus more on this choice. The false negatives and false positives are further analyzed through Table 8. Other success measurements could also be computed: \[\text{Sensitivity/Recall}=40/(40{+}14)=40/54=74.1\%\] \[\text{F1 score}=40/(40+\frac{1}{2}\cdot(14+20))=70.2\%\] \[\text{Precision rate}=40/(40{+}20)=40/60=66.7\%\] Furthermore, the success rate is further broken down into each year in Table 9. Table 10 shows the successful prediction rate on non-moist days for various choices of the number of top attributes. ### Computation of Forecasting Values on the random subset of the data To explore the prediction method of Forecasting Values demonstrated above, the process was repeated after 30 rows were removed from the data. The rankings of the Forecasting Values were compared between the two runs to validate the overall consistency and usefulness of the method. The expectation was that the rankings might be slightly different, but the overall important attributes would be similar. 
The 30 rows were chosen using the _random.sample_ function in Python: \[\text{sorted(random.sample(range(3,123),30)).}\] It selected 30 non-repeating random numbers in the range 3–122, which are numbers for the rows of the \(D\)-basis entries corresponding to 120 days of observation. \begin{table} \begin{tabular}{|c|c|c|} \hline \(a_{i}\) & attribute \# & Description \\ \hline \(a_{8}\) & 30 & offshore wind 11 pm - 1 am \\ \hline \(a_{9}\) & 60 & offshore wind 5-7 am \\ \hline \(a_{10}\) & 53 & dew point falling 3 am - 5 am \\ \hline \(a_{11}\) & 8 & pressure falling 7-9 pm \\ \hline \(a_{12}\) & 26 & low pressure 9 pm- 11 pm \\ \hline \end{tabular} \end{table} Table 7: Normal day attributes for an example in FCV computation \begin{table} \begin{tabular}{|c|c|c|} \hline & Predicted SB & Predicted Normal \\ \hline Actual SB & 40 & 14 \\ \hline Actual Normal & 20 & 38 \\ \hline \end{tabular} \end{table} Table 8: Contingency table of sea breeze forecast This set of 30 random days included 15 sea breeze days, 2 very moist days and 13 normal days. The distribution resembled the overall distribution of those day categories in the entire data set, which made this randomly selected subset reasonable for random subset testing. The entries for those 30 rows were changed to 0, so that they were not taken into consideration in the subsequent \(D\)-Basis runs. This is similar to the procedure of row removal in \(D\)-Basis built into its functionality. Since the total number of entries dropped from 120 down to 90, the parameter of minimal support in the \(D\)-Basis was also scaled down in order to preserve the consistency of the testing. The scaling down from minimal support values of 6, 8, 11 for 120 effective rows down to 90 effective rows was as follows: \[\frac{6}{120}=\frac{m_{1}}{90}\to m_{1}=4.5\to m_{1}\approx 4\] \[\frac{8}{120}=\frac{m_{2}}{90}\to m_{2}=6\] \[\frac{11}{120}=\frac{m_{3}}{90}\to m_{3}=8.25\to m_{3}\approx 9\] For \(m_{1}\), we decided to use the floor function of 4.5 as its ceiling 5 is too close to \(m_{2}\). Thus, the values of minimal supports were finalized as \(4,6,9\). Since in the testing with full data the row removal of the very moist days led to better results, this removal was done on the subset of 90 days as well. Then, the same process was repeated to compute the Forecasting Values for that modified input matrix. The success rates on non-moist days when using the modified list of top attributes are given in Table 11. It is observed that 'high pressure' tends to come up on top of the list for sea breeze, as in Figure 13. Four 'high pressure' attributes are now in the top 10 list, including the previous three 'high pressure' attributes: 35, 15, 45. Similarly, the attributes associated with normal days were still light-colored, see Figure 14, and low pressure became more dominant than offshore wind, compared to the full data results in Figure 12. Meanwhile, for the normal day list, we still see the 'offshore wind' attributes, and more of the 'low pressure' attributes, including those appearing on the full data: 26 and 46. 
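As a minimal sketch of the random-subset procedure just described, the snippet below removes 30 randomly chosen rows of the binary table and rescales the minimal support. The variable `table` (a list of 0/1 rows) is a hypothetical stand-in for the actual \(D\)-basis input; only the row-selection and rescaling logic is shown.

```python
import random

def remove_random_rows(table, first_row=3, last_row=122, n_remove=30, seed=None):
    """Zero out n_remove randomly chosen rows; zeroed rows are ignored in later D-basis runs."""
    rng = random.Random(seed)
    removed = sorted(rng.sample(range(first_row, last_row + 1), n_remove))
    for r in removed:
        table[r] = [0] * len(table[r])
    return removed

def rescale_support(supports, old_rows=120, new_rows=90):
    # e.g. 6, 8, 11 for 120 rows -> 4.5, 6, 8.25 for 90 rows (rounded to 4, 6, 9 in the text)
    return [s * new_rows / old_rows for s in supports]

print(rescale_support([6, 8, 11]))   # [4.5, 6.0, 8.25]
```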
\begin{table} \begin{tabular}{|c|c|c|} \hline Top \# of attributes utilized & \# of days predicted correctly & Success rate out of 112 non-moist days \\ \hline All attributes & 78 & 69.6\% \\ \hline Top 10 & 77 & 68.8\% \\ \hline Top 9 & 75 & 67.0\% \\ \hline Top 8 & 75 & 67.0\% \\ \hline Top 7 & 76 & 67.9\% \\ \hline Top 6 & 75 & 67.0\% \\ \hline Top 5 & 76 & 67.9\% \\ \hline Top 4 & 75 & 67.0\% \\ \hline Top 3 & 75 & 67.0\% \\ \hline Top 2 & 74 & 66.1\% \\ \hline Top 1 & 73 & 65.2\% \\ \hline \end{tabular} \end{table} Table 10: Success rates of top attributes \begin{table} \begin{tabular}{|c|c|c|c|} \hline Year & \# of days predicted & \# of non-moist days & success rate \\ \hline 2020 & 22 & 30 & 73.3 \% \\ \hline 2019 & 17 & 27 & 63.0\% \\ \hline 2018 & 21 & 29 & 72.4 \% \\ \hline 2017 & 18 & 26 & 69.2\% \\ \hline \end{tabular} \end{table} Table 9: Success rate yearly ### Comparison with synthetic data An earlier study Segal et al. 2018 compared the performance of the \(D\)-basis algorithm on real data with the random data of similar characteristics. It showed that there was a much lower probability of rules of high support occurring in random data of same size and density, compared to real data. In particular, it showed more uniform distribution of total support figures among all attributes. That result brought the average of relevances across all attributes close to 1, for the wide range of density of random data. For example, the testing of several thousand tables of size \(20\times 32\) in Segal et al. 2018 showed the average relevance values between 1 and 2, i.e., close to insignificant, for the range of densities of entry 1 in the tables between 0.3-0.6. We note that the majority of real data densities does fit into this range, given the rules of conversions between raw data and its binary representation. \begin{table} \begin{tabular}{|c|c|c|} \hline Top \# of attributes utilized & \# of days predicted correctly & Success rate out of 84 non-moist days \\ \hline All rows & 54 & 64.3\% \\ \hline Top 10 & 54 & 64.3\% \\ \hline Top 9 & 54 & 64.3\% \\ \hline Top 8 & 54 & 64.3\% \\ \hline Top 7 & 55 & 65.5\% \\ \hline Top 6 & 55 & 65.5\% \\ \hline Top 5 & 54 & 64.3\% \\ \hline Top 4 & 52 & 61.9\% \\ \hline Top 3 & 55 & 65.5\% \\ \hline Top 2 & 52 & 61.9\% \\ \hline Top 1 & 55 & 65.5\% \\ \hline \end{tabular} \end{table} Table 11: Success rate on a random subset of observations Figure 14: Top 10 attributes for a Normal day on a random subset Figure 13: Top 10 attributes for a Sea Breeze day on a random subset To confirm the actual signal from the weather data in our analysis, we ran a similar comparison with random data of similar parameters. The synthetic data was made of the unchanged target columns, while the rest of columns were randomized based on original density of entry 1 at 0.402738. The results of three runs of synthetic data on table of size \(120\times 72\), targeting column 71, which was not changed compared to real data, are given in Table 12. Since there was only one rule generated for minimal support 11, the relevance values were not computed in random data test. Thus, only the relevance for minimal supports 6 and 8 were computed and compared with the real data. The histograms on Figures 15 and 16 show resulting distribution of relevance values \(rel_{d}(a)\) across all attributes \(a\) in the data. 
The values of \(a\) are placed along the \(x\)-axis, and the component along the \(y\)-axis is the relevance value computed in the synthetic data for the target column \(d=\)71. For the \(D\)-basis test with minimal support \(=6\), the majority of the relevance values were insignificant and only a few attributes stood out. But as the minimum support increased to 8 and then 11, the number of rules dwindled considerably and most attributes do not appear in the rules. In the real data, since each 2-hour time interval has 10 attributes, peaks are observed at roughly every tenth attribute; those attributes correspond to the important weather observations occurring in each 2-hour interval. Conversely, the pattern for the synthetic data shows no discernible trend, and the values are significantly smaller in magnitude as well. In the test with minimal support \(=8\), significantly fewer fluctuations are observed in the synthetic data. More importantly, the grey line representing the synthetic data shows considerably smaller total support values, and only 9 attributes have non-zero relevance values, see Figure 16. \begin{table} \begin{tabular}{|c|c|c|} \hline Minimal Support & Synthetic Data (\# of rules) & Real Data (\# of rules) \\ \hline 6 & 203 & 1035 \\ \hline 8 & 5 & 266 \\ \hline 11 & 1 & 33 \\ \hline \end{tabular} \end{table} Table 12: Comparison between number of rules in synthetic data and real data Figure 15: Relevance values in real data vs synthetic data in test with minimal support=6 In conclusion, from the two figures we observe that the real data demonstrates a considerably stronger signal, reflected in the high frequency with which some attributes appear in the rules, compared to the flat and uniform frequencies of most attributes in the rules when the ones in the table are randomly permuted. ## 7 Conclusions and Discussion In this study we relied on data collected by weather stations located relatively far inland instead of using the measurement of the sea temperature. This distinguishes our approach from the standard reliance on the temperature gradient between sea and land when predicting the sea breeze. In the present analysis, we included measurements during the 12 hours overnight prior to the day of prediction, averaging the behavior over two-hour intervals. Thus, our methods could be useful in developing a prediction algorithm for other locations where sea surface temperatures are unavailable. To determine the predicted presence or absence of the sea breeze, we used the \(D\)-basis algorithm. That algorithm retrieves specific implications and association rules between the columns of the binary table, namely, those whose conclusion is a selected column. In our case it was either the column marking sea breeze days occurring during the month of June in 2017-2020 on Long Island, New York, or normal days of the same period. The third category of days is very moist, when the synoptic scale features typically control the onshore wind flow, and not the smaller scale sea breeze. The \(D\)-basis algorithm allows for an analysis in the presence of entries for which the partial failure of implications occurs; thus, very moist days were partially and randomly omitted on multiple runs of the algorithm. Among the weather observations included in the initial analysis were dynamic parameters: increasing, decreasing, or constant pressure and dew point, as well as the direction of wind in relation to the coastline of the south shore of Long Island. 
Later we added an absolute value of station pressure distinguishing high pressure and low pressure, using a threshold of 29.92" Hg. Using that definition, high pressure, especially during the first half of the overnight observations between 7 pm and 1 am, appeared as a prominent predictor of a sea breeze, as well as a constant dew point and early morning (3-5 am) onshore winds. In contrast, a combination of early morning offshore winds, falling or low pressure, and rising dew point appeared as predictors of a normal, non-sea-breeze, day. The strongest association was found with observations made between 7 pm and 1 am the previous evening. Those weather variables most closely corresponding with sea breeze formation match the climatological expectations. High pressure typically implies a lack of larger scale synoptic controls on wind direction, such as fronts and cyclones, while a constant dew point also indicates a stable air mass and lack of surface boundaries. We also determined that, by selecting all weather attributes and converting their frequencies from the rules retrieved by the \(D\)-basis algorithm, we could set a linear combination of binary variables for each column with predetermined coefficients. Each day's observation resulted in values of either 1 or 0 for each binary variable \(x_{i}\), depending on the presence or absence of a particular weather attribute on that day. Figure 16: Relevance values in real data vs synthetic data in test with minimal support=8 The total value of this linear combination, when positive, indicates the presence of most sea breeze predictors, and therefore forecasts a sea breeze day; otherwise, it forecasts a non-sea-breeze day. Prediction of sea breeze days with this formula was about 70% accurate overall, with only 14 of the 54 sea breeze days predicted incorrectly. Moreover, the success rate for the year 2020 alone was about 73%. This success rate is lower than the 85% rate in McCabe and Freedman 2023, which studied the same region and time period. However, that particular study included both 'classic' and 'hybrid' sea breeze days, the latter of which we attempted to eliminate from consideration in the present study through the synoptic climatology requirement, which means the success rates are not directly comparable. This is because the McCabe and Freedman 2023 study was less restrictive: days on which synoptic influences impacted the sea breeze were still counted as sea breeze days, and thus as accurate predictions by their algorithm. These contrasting definitions of a Long Island sea breeze event reflect the notion that there is no universally accepted definition for what constitutes a sea breeze event, and further, that the definition and associated identification algorithm can vary based on its application. For example, when attempting to forecast the effect of enhanced wind speeds from the sea breeze on wind energy potential, the broader definition employed in McCabe and Freedman 2023 would be more appropriate. Conversely, if attempting to isolate the mesoscale influence of the sea breeze on near-surface weather conditions, a more restrictive definition, such as the one used in the present study, would be necessary. It should also be noted that different combinations of predicting attributes are possible too. For example, using only two attributes resulted in a similar success rate in prediction. 
Moreover, an adjustment of coefficients in the forecasting formula could be undertaken when more observations are collected over time, thus allowing the formula to potentially be more accurate. Lastly, in our study, we did not distinguish between sea breeze days with respect to the classification of a prior day. Indeed, a simple persistence forecast (i.e., today will be the same as yesterday) can also be somewhat effective when predicting whether a sea breeze will occur. Nevertheless, the algorithm developed here was still more accurate than a persistence forecast would have been, and future iterations are likely to be even more accurate when a larger sample of data can be ingested, allowing for this tool to have value to weather forecasters in the region. Acknowledgments. We appreciate the technical support of Dr. Oren Segal in the Hofstra University Department of Computer Science for his involvement in various projects related to the \(D\)-basis algorithm. All tests and computations were performed using the Virtual Machine at the Hofstra University Fred DeMatteis School of Engineering and Applied Science's Computing Center. We also appreciate the initiative and support of Justin Cabot-Miller, who helped establish this project while an undergraduate research assistant at Hofstra.
2306.00340
Singular Vectors of Sums of Rectangular Random Matrices and Optimal Estimators of High-Rank Signals: The Extensive Spike Model
Across many disciplines from neuroscience and genomics to machine learning, atmospheric science and finance, the problems of denoising large data matrices to recover signals obscured by noise, and of estimating the structure of these signals, are of fundamental importance. A key to solving these problems lies in understanding how the singular value structure of a signal is deformed by noise. This question has been thoroughly studied in the well-known spiked matrix model, in which data matrices originate from low-rank signals perturbed by additive noise, in an asymptotic limit where the size of these matrices tends to infinity but the signal rank remains finite. We first show, strikingly, that the singular value structure of large finite matrices (of size $\sim 1000$) with even moderate-rank signals, as low as $10$, is not accurately predicted by the finite-rank theory, thereby limiting the application of this theory to real data. To address these deficiencies, we analytically compute how the singular values and vectors of an arbitrary high-rank signal matrix are deformed by additive noise. We next study an asymptotic limit corresponding to an $\textit{extensive}$ spike model, in which the rank of the hidden signal is proportional to the size of the data matrix, while both tend to infinity. We map out the phase diagram of the singular value structure of the extensive spike model as a joint function of signal strength and rank. We further exploit these analytics to derive optimal rotationally invariant denoisers to recover hidden $\textit{high}$-rank signals from data, as well as optimal invariant estimators of the signal covariance structure. Overall, our results provide fundamental theory governing how high-dimensional signals are deformed by additive noise, together with practical formulas for optimal denoising and covariance estimation.
Itamar D. Landau, Gabriel C. Mel, Surya Ganguli
2023-06-01T04:44:33Z
http://arxiv.org/abs/2306.00340v2
# Singular vectors of sums of rectangular random matrices and optimal estimators of high-rank signals: the extensive spike model ###### Abstract Across many disciplines spanning from neuroscience and genomics to machine learning, atmospheric science and finance, the problems of denoising large data matrices to recover signals obscured by noise, and of estimating the structure of these signals, are of fundamental importance. A theoretical keystone to solving these problems is understanding how the singular value structure of a signal is deformed in the presence of noise. This question has been thoroughly studied in the well-known spiked matrix model, in which data matrices originate from low-rank signal matrices perturbed by additive noise matrices, in an asymptotic limit where the size of these matrices tends to infinity but the signal rank remains finite. We first show, strikingly, that the singular value structure of large finite matrices (of size \(O(1000)\)) with even moderate-rank signals, as low as 10, is not accurately predicted by the finite-rank theory, thereby limiting the application of this theory to real data. To address these deficiencies, we analytically compute how the singular values and vectors of an arbitrary signal matrix are deformed by additive noise. We apply these analytical results to study a different asymptotic limit corresponding to an _extensive_ spike model, in which the rank of the hidden signal is _proportional_ to the size of the data matrix, while _both_ tend to infinity. We map out the phase diagram of the singular value structure of the extensive spike model as a joint function of signal strength and rank. We further exploit these analytics to derive optimal rotationally invariant denoisers to recover the hidden _high_-rank signal from the data, as well as optimal invariant estimators of the signal covariance structure. Our extensive signal rank results yield several conceptual differences relative to the finite-rank case: (1) the singular value distribution of the data exhibits three phases as the signal strength increases: a unimodal bulk phase, a disconnected phase, and a new bimodal connected bulk phase separating them; (2) the signal singular vectors can be partially estimated _even_ in the unimodal bulk phase, and thus the phase transitions in the data singular value spectrum do not coincide with a detectability threshold for the signal singular vectors, unlike in the finite-rank theory; (3) signal singular values interact nontrivially to generate data singular values in the extensive-rank model, whereas they are non-interacting in the finite-rank theory; (4) as a result, the more sophisticated data denoisers and signal covariance estimators we derive, that take into account these nontrivial extensive-rank interactions, significantly outperform their simpler, non-interacting, finite-rank counterparts, even on large data matrices of moderate rank. Overall, our results provide fundamental theory governing how high-dimensional signals are deformed by additive noise, together with practical formulas for optimal denoising and covariance estimation. 
###### Contents * I Introduction * II A Motivation: The Inadequacies of Finite-Rank Spiked Matrix Model * III Mathematical Preliminaries * IV The Singular Value Decomposition of Sums of Rectangular Matrices * IV.1 A Subordination Relation for the Sum of Rectangular Matrices * IV.2 Deformation of Singular Vectors Due to Additive Noise * IV.2.1 Arbitrary Signal with Gaussian Noise * IV.3 SVD of the Extensive Spike Model * IV.3.1 Singular Value Spectra of the Extensive Spike Model * IV.3.2 Singular Vector Subspace Overlap in the Extensive Spike Model * V Optimal Rotationally Invariant Estimators * V.1 Denoising Rectangular Data * V.2 Estimating the Signal Covariance * VI Discussion ## I Introduction This lack of numerical accuracy of the finite-rank theory for large but finite size matrices of moderate rank could have a significant impact on the three problems of spectral understanding, data denoising, and signal covariance estimation across the empirical sciences, where the effective rank of signals is expected to vary significantly, and potentially be even quite high. Therefore it is imperative to develop a new theory that more accurately describes data containing higher rank signals. We develop that theory by generalizing the finite-rank theory to an _extensive-rank_ theory in which the rank of the signal matrix is proportional to the size of the signal and data matrices, working in an asymptotic limit where _both_ the size and rank approach infinity. We note that it is not immediately obvious how to extend existing finite-rank results to the extensive regime. The finite-rank theory [20, 21, 22] makes use of algebraic formulas for matrices with low-rank perturbations that do not generalize, and so one must resort to more elaborate tools from RMT and free probability. Along these lines, powerful theoretical methods have been developed in recent years for studying the eigen-decomposition of sums of square Hermitian matrices [30], and deriving techniques for optimally estimating arbitrary square-symmetric matrices from noisy observations [31, 32, 33, 34, 35, 36]. However the situation for rectangular matrices, relevant to data from many fields including the examples of neuroscience, genomics and finance stated above, lags behind that of square matrices. While the singular value spectrum of sums of rectangular matrices has been calculated using free probabilistic techniques [37, 38, 39, 40], there are currently no methods for determining the deformation of the singular vectors of a rectangular signal matrix due to an additive noise matrix. The outline of our paper is as follows. In Section II we motivate our work with an illustrative numerical study of the spiked matrix model, showing that the finite-rank theory fails to accurately predict the outlier singular values and singular vector deformations in data matrices containing even moderate-rank signals. In Section III we introduce tools from RMT we will need to derive our results, including Hermitianization, block matrix resolvents, Stieltjes transforms, their inversion formulae, and block R-transforms. In Section IV we study how the singular values and singular vectors of an arbitrary rectangular signal matrix are deformed under the addition of a noise matrix to generate a data matrix. To do so, we derive a subordination relation that relates the resolvent of the Hermitianization of a data matrix to that of its hidden signal matrix in Section IV.1. 
###### Contents * I Introduction * II A Motivation: The Inadequacies of Finite-Rank Spiked Matrix Model * III Mathematical Preliminaries * IV The Singular Value Decomposition of Sums of Rectangular Matrices * IV.1 A Subordination Relation for the Sum of Rectangular Matrices * IV.2 Deformation of Singular Vectors Due to Additive Noise * IV.3.1 Arbitrary Signal with Gaussian Noise * IV.3 SVD of the Extensive Spike Model * IV.3.1 Singular Value Spectra of the Extensive Spike Model * IV.3.2 Singular Vector Subspace Overlap in the Extensive Spike Model * V Optimal Rotationally Invariant Estimators * V Denoising Rectangular Data * V Estimating the Signal Covariance * VI Discussion ###### Abstract We consider a _global_ linear system of equations that are This lack of numerical accuracy of the finite-rank theory for large but finite size matrices of moderate rank could have a significant impact on the 3 problems of spectral understanding, data denoising, and signal covariance estimation across the empirical sciences, where the effective rank of signals is expected to vary significantly, and potentially be even quite high. Therefore it is imperative to develop a new theory that more accurately describes data containing higher rank signals. We develop that theory by generalizing the finite-rank theory to an _extensive-rank_ theory in which the rank of the signal matrix is proportional to the size of the signal and data matrices, working in an asymptotic limit where _both_ the size and rank approach infinity. We note that it is not immediately obvious how to extend existing finite-rank results to the extensive regime. The finite-rank theory [20, 21, 22] makes use of algebraic formulas for matrices with low-rank perturbations that do not generalize, and so one must resort to more elaborate tools from RMT and free probability. Along these lines, powerful theoretical methods have been developed in recent years for studying the eigen-decomposition of sums of square Hermitian matrices [30], and deriving techniques for optimally estimating arbitrary square-symmetric matrices from noisy observations [31, 32, 33, 34, 35, 36]. However the situation for rectangular matrices, relevant to data from many fields including the examples of neuroscience, genomics and finance stated above, lags behind that of square matrices. While the singular value spectrum of sums of rectangular matrices has been calculated using free probabilistic techniques [37, 38, 39, 40], there are currently no methods for determining the deformation of the singular vectors of a rectangular signal matrix due to an additive noise matrix. The outline of our paper is as follows. In Section II we motivate our work with an illustrative numerical study of the spiked matrix model, showing that the finite-rank theory fails to accurately predict the outlier singular values and singular vector deformations in data matrices containing even moderate-rank signals. In section III we introduce tools from RMT we will need to derive our results, including Hermitianization, block matrix resolvents, Stieltjes transforms, their inversion formulae, and block R-transforms. In Section IV we study how the singular values and singular vectors of an arbitrary rectangular signal matrix are deformed under the addition of a noise matrix to generate a data matrix. To do so, we derive a subordination relation that relates the resolvent of the Hermitianization of a data matrix to that of its hidden signal matrix in Section IV.1. 
We next employ this subordination relation to derive expressions for the overlap between data singular vectors and the signal singular vectors in Section IV.2. We then apply these results to study the extensive spike model in which the rank of the signal spike is assumed to grow linearly with the number of variables (and observations) in Section IV.3. There we map out the phase diagram of the SVD as a joint function of signal strength and rank ratio, finding an intriguing result that certain phase transitions in the singular value spectrum of the data do _not_ coincide with the detectability of the signal, as it does in the finite-rank model. Finally, in Section V we exploit the expressions for singular vector overlaps in order to derive optimal rotationally invariant estimators for both data denoising (Section V.1) and signal covariance estimation (Section V.2). We find that unlike in the finite-rank model, in the extensive-rank model signal singular values interact nontrivially to generate data singular values. As a result, we obtain more sophisticated optimal data denoisers and signal covariance estimators that take into account these nontrivial extensive rank interactions, and which furthermore significantly outperform their simpler, non-interacting, finite-rank counterparts. We note that during the preparation of this manuscript, a set of partially overlapping results appeared on a pre-print server [41]. In our discussion section, we describe the relation and additional contributions of our work relative to that of [41]. ## II A motivation: the inadequacies of finite-rank spiked matrix model Formally, let \(Y\) be an \(N_{1}\times N_{2}\) signal matrix. We can think of each of the \(N_{1}\) rows of \(Y\) as a variable, and each of the \(N_{2}\) columns as a distinct experimental condition or time point, with \(Y_{ij}\) representing the clean, uncorrupted value of variable \(i\) under condition \(j\). Now consider a noisy data matrix \(R\), given by \[R=Y+X, \tag{1}\] where \(X\) is a random \(N_{1}\)-by-\(N_{2}\) additive noise matrix. \(X\) is assumed to have well-defined limiting singular value spectrum in the limit of large \(N_{1}\) with fixed aspect ratio, \(c=\nicefrac{{N_{1}}}{{N_{2}}}\). Furthermore we assume the probability distribution \(P_{X}(X)\) over \(X\) is rotationally invariant. This means \(P_{X}(X)=P_{X}(O_{1}XO_{2})\) where \(O_{1}\) and \(O_{2}\) are orthogonal matrices of size \(N_{1}\)-by-\(N_{1}\) and \(N_{2}\)-by-\(N_{2}\) respectively. These assumptions guarantee the asymptotic freeness of \(X\) and \(Y\). For a general definition of freeness, see [40]. We are interested in understanding the relationship between the singular value decomposition (SVD) of the clean signal matrix \(Y\). In general we will write the SVD of the data as \(R=\hat{U}_{1}\hat{S}\hat{U}_{2}\). As a motivating example, we consider a version of the spiked matrix model [20, 21, 22] in which signal matrix \(Y\) is given by \[Y=sU_{1}U_{2}^{T}, \tag{2}\] where each \(U_{a}\) is an \(N_{a}\times K\) matrix with orthonormal columns, and \(s\) is the signal strength. This signal model can be thought of as a rank \(K\) spike of strength \(s\) in that its singular value spectrum has \(K\) singular values all equal to \(s\). In the finite-rank setting, where \(K\) remains finite and \(O(1)\) as \(N_{1},N_{2}\to\infty\), there is a signal detectability phase transition [20; 22] in the singular value structure of the data matrix \(R\). 
For \(s<s_{crit}\), where \(s_{crit}\) is a critical signal strength that depends on the singular value spectrum of the noise matrix \(X\), the entire signal in \(Y\) is swamped by the additive noise \(X\) and cannot be seen in the data \(R\). More precisely, the singular value spectrum of the data \(R\) is _identical_ to the singular value spectrum of the noise \(X\). Furthermore, _no_ left (right) singular vector of the data matrix \(R\) has an \(O(1)\) overlap with the \(K\) dimensional signal subspace corresponding to the column space of \(U_{1}\) (\(U_{2}\)). However, for \(s>s_{crit}\) the singular value spectrum of the data \(R\) is now not only composed of a noise bulk, identical to the spectrum of \(X\), as before, but also acquires \(K\) outlier singular values all equal to \(\hat{s}\). The data spike at \(\hat{s}\) occurs at a slightly larger value than the signal spike at \(s\). This reflects singular value inflation in the data \(R\) relative to the signal \(Y\), due to the addition of noise \(X\). Furthermore, each singular vector of the data \(R\) corresponding to an outlier singular value acquires a nontrivial \(O(1)\) overlap with the \(K\) dimensional signal subspace of \(Y\) even in the asymptotic limit \(N_{1},N_{2}\to\infty\). The location of the outlier data singular values and their corresponding singular vector overlaps with the signal subspace have been calculated for _finite_ \(K\) and general rotationally invariant noise matrices \(X\) [22]. In the special case where the elements of \(X\) are i.i.d. Gaussian, explicit formulas can be derived (see Appendix A for a review). This signal detectability phase transition in the finite-rank spiked model is depicted in Fig. 1 for an i.i.d. Gaussian noise matrix \(X\). Notably, according to the finite-rank theory, the \(K\) spikes do not interact. More precisely, above the critical signal strength, the \(K\) identical singular values of \(Y\) are predicted to all map to \(K\) identical outlier singular values of the data matrix \(R\). Furthermore, the overlaps of the \(K\) corresponding data singular vectors with the signal subspace are predicted to be all equal and completely independent of \(K\). More generally, if the signal \(Y\) consists of \(K\) _different_ rank 1 spikes each with a unique signal strength \(s_{l}\) for \(l=1,\ldots,K\), the corresponding location of the data spike \(\hat{s}_{l}\) can be computed by inserting each \(s_{l}\) into a single local singular value inflation function \(\hat{s}(s)\) (depicted in Fig. 1), without considering the location of any other signal spike \(s_{l^{\prime}}\) for \(l^{\prime}\neq l\). In this precise sense, at finite \(K\) the spikes do not interact; one need not consider the position of any other signal spikes to compute how any one signal spike is inflated to a data spike. The same non-interacting picture is true for singular vector overlaps (Fig. 1B). This lack of interaction between different spikes in the Figure 1: **Background: Signal-Detectability Phase Transition in the Finite-Rank Spiked Matrix Model.****A.** The color plot in the bottom panel shows the singular value spectrum of the spiked matrix model given by the finite-rank theory in the asymptotic limit with aspect ratio, \(c=\frac{N_{1}}{N_{2}}=0.7\) (see Appendix A for formulas). The singular value of the data matrix, \(R\), is on the x-axis, and the strength of the single non-zero singular value of the signal matrix, \(Y\), is on the y-axis. 
The “bulk” spectrum of the data is equivalent to the bulk spectrum of the noise matrix, \(X\). The bounds of that spectrum are displayed in dashed grey. Above the critical signal, \(s_{crit}=c^{\nicefrac{{1}}{{4}}}\) (black), the data has an outlier singular value shown in red. The top two panels show theory curves corresponding to two horizontal slices, with \(s=0.85,1.5\), together with a histogram of singular values each of a single instantiation with \(N_{2}=2000\). The top panel has a single outlier eigenvalue very close to the theory prediction. The panel below shows a data spectrum that is indistinguishable from noise. **B.** The overlap of the top left singular vector of the data with the left singular subspace of the signal, given by the finite-rank theory. The overlap becomes non-zero at exactly the same critical signal, \(s_{crit}\), at which an outlier singular value appears in the data. \(X\) is Gaussian i.i.d. with variance \(\nicefrac{{1}}{{N_{2}}}\) throughout. signal as they are corrupted to generate data spikes, allows optimal denoising operations on the data, based on the finite-rank theory, to be remarkably simple. For example, estimators for both signal denoising [23; 28; 29], which corresponds to trying to directly estimate the signal \(Y\) given the corrupted data \(R\), and covariance estimation [42], which corresponds to estimating the true covariance matrix \(C=YY^{T}\) from the data \(R\), both involve applying a _single_ shrinkage function, that nonlinearly modifies each data singular value of \(R\) in a manner that acts _independently_ of any other singular value. This shrinkage function applied to each data singular value \(\hat{s}\) in a sense optimally undoes the inflation of singular values \(s\rightarrow(\hat{s})\) in going from signal \(Y\) to data \(R=Y+X\). Moreover, the reason the shrinkage can act independently on each data singular value is directly related to the property of the finite-rank theory that each signal singular value \(s\) is inflated _independently_ through the same inflation function \(\hat{s}(s)\). In this work, however, we find that the assumptions and resulting consequences of the finite-rank theory may constitute a significant limitation for the practical application of this model to both explain the properties of noise corrupted data, as well as to optimally denoise such data. To illustrate, we test the finite-rank theory for various values of \(K\), with \(N_{1}\) and \(N_{2}\) fixed. In Figure 2 we show simulation results in which we find substantial deviations between simulations and finite-rank theory predictions, for both the location of the leading data singular value outlier, and the data-signal singular vector overlap, for \(K\) as small as \(10\) with \(N_{1}=1000\). Thus even for moderate numbers of spikes, and relatively large matrices, the finite-rank theory cannot explain the SVD of the data well. Furthermore, as we shall see below, denoising large data of even such moderate ranks, does not perform well, precisely because the finite-rank theory yields inaccurate predictions for the singular value and vector structure of the data well, and because these predictions are used to derive denoisers. Thus motivated by the search for better denoisers of higher rank data, below we extend the finite-rank theory to a completely different asymptotic limit of extensive rank in which the rank \(K\) of the data is proportional to the number of variables \(N_{1}\) as both become large. 
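Before turning to the extensive-rank theory, the following sketch illustrates the type of numerical comparison shown in Fig. 2A: the empirical leading singular value of a rank-\(K\) spike versus the finite-rank prediction. The closed-form prediction used here is the standard low-rank result for i.i.d. Gaussian noise of variance \(1/N_{2}\), consistent with the critical point \(s_{crit}=c^{1/4}\) quoted in Fig. 1; the paper's own Appendix A formulas are not reproduced, and the sizes are illustrative.

```python
import numpy as np

# Sketch of the comparison in Fig. 2A: empirical leading singular value of the
# rank-K spiked model versus the standard finite-rank prediction for i.i.d.
# Gaussian noise of variance 1/N2 (an assumption consistent with s_crit = c**0.25).
rng = np.random.default_rng(2)
N1, N2 = 700, 1000
c = N1 / N2

def finite_rank_shat(s, c):
    # Standard low-rank prediction: outlier for s > c^(1/4), otherwise the bulk edge 1 + sqrt(c).
    return np.sqrt((1 + s**2) * (c + s**2)) / s if s > c**0.25 else 1 + np.sqrt(c)

def empirical_shat(s, K):
    U1, _ = np.linalg.qr(rng.standard_normal((N1, K)))
    U2, _ = np.linalg.qr(rng.standard_normal((N2, K)))
    R = s * U1 @ U2.T + rng.standard_normal((N1, N2)) / np.sqrt(N2)
    return np.linalg.svd(R, compute_uv=False)[0]

s = 1.5
for K in (1, 10, 50):
    print(f"K={K:3d}  empirical={empirical_shat(s, K):.3f}  finite-rank theory={finite_rank_shat(s, c):.3f}")
```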
We show that our extensive-rank theory both: (1) more accurately explains the SVD of large data matrices of even moderate rank, and (2) provides better denoisers in these cases, than the finite-rank theory. And interestingly, our extensive-rank theory reveals qualitatively new phenomena that do not occur at finite-rank, including highly nontrivial interactions between the extensive number of signal singular values, as they become corrupted to generate data singular values, under additive noise. ## III Mathematical preliminaries We begin by reviewing some basic concepts from random matrix theory and introducing some notation that we will need. Let \(M\) be an \(N\times N\) Hermitian matrix \(M\). We denote by \(G_{M}(z)\) the matrix resolvent of \(M\): \[G_{M}(z):=(zI-M)^{-1}. \tag{3}\] We define the normalized trace as \[\tau[M]:=\nicefrac{{1}}{{N}}\mathrm{Tr}[M]. \tag{4}\] The Stieltjes transform \(g_{M}(z)\) is the normalized trace of \(G_{M}(z)\): \[g_{M}(z):=\tau\left[\left(zI-M\right)^{-1}\right]. \tag{5}\] In this work, we will be interested in the singular values and vectors of rectangular matrices. To apply Hermitian matrix methods to a rectangular matrix \(R\in\mathbb{R}^{N_{1}\times N_{2}}\), we will work with its Hermitianization, \[\mathbf{R}:=\left[\begin{array}{cc}0&R\\ R^{T}&0\end{array}\right], \tag{6}\] which is an \(N\times N\) Hermitian matrix, with \(N=N_{1}+N_{2}\). The eigenvalues and eigenvectors of \(\mathbf{R}\) can be written \(\pm s\), \(\frac{1}{\sqrt{2}}\left(\begin{array}{c}\mathbf{u_{1}}\\ \pm\mathbf{u_{2}}\end{array}\right)\), where \(s\) is a singular value of \(R\), and \(\mathbf{u_{1}},\mathbf{u_{2}}\) are the corresponding left and right singular vectors. This will allow us to extract information about the singular value decomposition of a rectangular matrix \(R\) from the eigen-decomposition of the Hermitian matrix \(\mathbf{R}\). Hermitianization leads naturally to a Hermitian _block resolvent_, which is a function of two complex scalars \(z_{1}\) and \(z_{2}\) rather than one: \[\mathbf{G}^{R}(\mathbf{z}):=\left[\begin{array}{cc}z_{1}I_{N_{1}}&-R\\ -R^{T}&z_{2}I_{N_{2}}\end{array}\right]^{-1}, \tag{7}\] where \(\mathbf{z}=(z_{1},z_{2})\) is a complex vector. We can invert \(\mathbf{G}^{R}\) explicitly, and write each block in terms of a standard square-matrix resolvent. \[\mathbf{G}^{R}(\mathbf{z})=\left[\begin{array}{cc}z_{2}G_{RR^{T}}(z_{1}z_{2})&RG_{R ^{T}R}(z_{1}z_{2})\\ R^{T}G_{RR^{T}}(z_{1}z_{2})&z_{1}G_{R^{T}R}(z_{1}z_{2})\end{array}\right]. \tag{8}\] Analogously, we define the _block_ Stieltjes transform \(\mathbf{g}^{R}(\mathbf{z})\) as the 2-element complex vector consisting of the normalized traces of each diagonal block of \(\mathbf{G}^{R}\): \[g_{1}^{R}(\mathbf{z}) =\tau_{1}\left[G_{11}^{R}(\mathbf{z})\right]=z_{2}g_{RR^{T}}(z_{1}z_{ 2}) \tag{9a}\] \[g_{2}^{R}(\mathbf{z}) =\tau_{2}\left[G_{22}^{R}(\mathbf{z})\right]=z_{1}g_{R^{T}R}(z_{1}z_{ 2})\,. \tag{9b}\] Here we have introduced notation for the block-wise normalized traces: \[\tau_{a}(M):=\nicefrac{{1}}{{N_{a}}}\text{Tr}[M_{aa}], \tag{10}\] where \(M_{aa}\) is the \(a\)th diagonal block of size \(N_{a}\times N_{a}\). Notationally, we write the full block vectors and matrices \(\mathbf{g}^{R}\) and \(\mathbf{G}^{R}\) in bold, while we indicate the component blocks in standard roman font, with the indices \(a,b\) for both scalar, \(g_{a}^{R}\), and matrix \(G_{ab}^{R}\) blocks. 
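As a quick numerical sanity check of the Hermitianization construction (6), the sketch below verifies that the nonzero eigenvalues of \(\mathbf{R}\) come in \(\pm s\) pairs matching the singular values of \(R\); the matrix sizes are illustrative.

```python
import numpy as np

# Numerical check of the Hermitianization (6): the eigenvalues of the (N1+N2) x (N1+N2)
# block matrix [[0, R], [R^T, 0]] are +/- the singular values of R (plus |N1-N2| zeros).
rng = np.random.default_rng(3)
N1, N2 = 40, 60
R = rng.standard_normal((N1, N2))

H = np.block([[np.zeros((N1, N1)), R],
              [R.T, np.zeros((N2, N2))]])

eigvals = np.sort(np.linalg.eigvalsh(H))
svals = np.linalg.svd(R, compute_uv=False)

# Largest N1 eigenvalues should match the singular values; the N2 - N1 middle ones are ~0.
print(np.allclose(np.sort(eigvals[-N1:]), np.sort(svals)))    # True
print(np.allclose(np.sort(-eigvals[:N1]), np.sort(svals)))    # True
```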
We will also use the fact that the eigenvalues of \(RR^{T}\) and \(R^{T}R\) differ by exactly \(|N_{1}-N_{2}|\) zeros, implying the two elements of \(\mathbf{g}^{R}\) are related by \(g_{2}^{R}(\mathbf{z})=\frac{z_{1}}{z_{2}}cg_{1}^{R}(\mathbf{z})+\frac{1-c}{z_{2}}\). Each element \(g_{a}^{R}(z)\) can be written in terms of the corresponding singular value density: \[g_{1}^{R}\left(z_{1},z_{2}\right) =\int_{-\infty}^{+\infty}\frac{z_{2}}{z_{1}z_{2}-s^{2}}\rho_{1}^{R }(s)\mathrm{d}s \tag{11a}\] \[g_{2}^{R}\left(z_{1},z_{2}\right) =\int_{-\infty}^{+\infty}\frac{z_{1}}{z_{1}z_{2}-s^{2}}\rho_{2}^{ R}(s)\mathrm{d}s, \tag{11b}\] Figure 2: **Finite-Rank Theory Fails To Capture The Spectral Decomposition of The Spiked Rectangular Matrix Model.****A.** Leading empirical singular value, \(\hat{s}_{1}\), as a function of signal singular value, \(s_{1}\), for spikes of various ranks \(K\). Black shows the finite-rank theory (which is independent of the rank of a spike). Matrix size in this and all subsequent panels is \(N_{1}=1000\), \(N_{2}=500\). Numerical results are presented as mean and standard deviation over \(10\) instantiations for each value of \(K\) and \(s\). **B.** Overlap of leading empirical left singular vector, \(\mathbf{u}_{1}\) with the \(K\)-dimensional left singular space of the signal. **C.** To break the degeneracy of the spikes with rank \(K>1\) in panel A, here we consider a single leading signal spike with singular value \(s\), along with \(K-1\) spikes drawn independently and uniformly in \([0,s]\). We plot the leading empirical singular value as a function of \(s\) for various \(K\), compared to finite-rank theory (black). **D.** For the same signal model in panel (C), we plot the overlap of the leading empirical left singular vector with the leading signal singular vector as function of signal strength \(s\), for different \(K\) and for the finite-rank theory (black). We see that the finite-rank theory incorrectly estimates both the singular values and singular vectors of signals of even moderate rank \(K\). See Appendix A for finite-rank theory formulas. where \(\rho_{a}^{R}(s)\) denotes the singular value distribution of \(R\), accounting for \(N_{a}\) singular values. Note that for non-zero \(s\) with finite singular value density, \(\rho_{2}^{R}(s)=c\rho_{1}^{R}(s)\). The special case in which the two arguments are equal, \(z_{1}=z_{2}=z\), will be important and so we abbreviate: \(\mathbf{g}^{R}(z):=\mathbf{g}^{R}(\mathbf{z})\). We can write an inversion relation for the singular value densities using the Sokhotski-Plemelj theorem, which states, \(\lim_{\eta\to 0^{+}}\mathrm{Im}\!\left[\int\frac{f(x)}{x-i\eta}\mathrm{d}x \right]=\pi f(0)\). Applying this theorem to \(f(s)=\frac{z}{z+s}\rho_{a}^{R}(s)\) yields: \[\rho_{a}^{R}\left(s\right)=\frac{2}{\pi}\lim_{\eta\to 0}\mathrm{Im}\!\left[g_{a}^{R} \left(s-i\eta\right)\right]\!. \tag{12}\] Finally, we define the _block_ \(\mathcal{R}\)-transform as a map \(\mathbb{C}^{2}\to\mathbb{C}^{2}\): \[\mathbf{\mathcal{R}}^{R}(\mathbf{q})=\left(\mathbf{g}^{R}\right)^{-1}(\mathbf{q})-\frac{1}{ \mathbf{q}}, \tag{13}\] where \(\left(\mathbf{g}^{R}\right)^{-1}\) is the functional inverse of the block Stieltjes transform \(\mathbf{g}^{R}\) as a map \(\mathbb{C}^{2}\to\mathbb{C}^{2}\), satisfying \(\left(\mathbf{g}^{R}\right)^{-1}(\mathbf{q})=\mathbf{z}\) where \(\mathbf{q}=\mathbf{g}^{R}(\mathbf{z})\), and \({}^{1}\!/_{\mathbf{q}}\) is the component-wise multiplicative inverse of \(\mathbf{q}\). 
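The inversion relation (12) can be checked numerically by evaluating the empirical block Stieltjes transform slightly below the real axis; a minimal sketch follows, with an illustrative Gaussian matrix and a small but finite \(\eta\) standing in for the limit.

```python
import numpy as np

# Numerical illustration of the inversion relation (12): the singular value density
# rho_1 is recovered from (2/pi) * Im[g_1(s - i*eta)] for small eta. Here g_1(z, z) is
# evaluated empirically as (1/N1) * Tr[z (z^2 I - R R^T)^(-1)]; the matrix is illustrative.
rng = np.random.default_rng(4)
N1, N2 = 400, 800
R = rng.standard_normal((N1, N2)) / np.sqrt(N2)

lam = np.linalg.eigvalsh(R @ R.T)          # squared singular values of R
eta = 1e-2                                 # chosen so that 1/N1 << eta << 1

def g1(z):
    return np.mean(z / (z**2 - lam))       # empirical g_1^R(z, z)

for s in np.linspace(0.05, 2.0, 10):
    rho = (2 / np.pi) * g1(s - 1j * eta).imag
    print(f"s = {s:.2f}   rho_1 estimate = {rho:.3f}")
```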
The block \(\mathcal{R}\)-transform will arise naturally in our calculation of the subordination relation for the sum of free rectangular matrices, \(R=Y+X\), and as we shall verify, it is additive under matrix sums: \[\mathbf{\mathcal{R}}^{R}(\mathbf{q})=\mathbf{\mathcal{R}}^{Y}(\mathbf{q})+\mathbf{\mathcal{R}}^{X }(\mathbf{q}). \tag{14}\] ## IV The singular value decomposition of sums of rectangular matrices In this section, we characterize how an additive noise matrix \(X\) deforms the singular values and vectors of a signal matrix \(Y\) to generate singular values and vectors of the data matrix \(R=Y+X\) (see (1) and following text). We consider general signal matrices of the form \[Y=U_{1}SU_{2}^{T}, \tag{15}\] where each \(U_{a}\) is an \(N_{a}\times N_{a}\) orthogonal matrix, and \(S\) is an \(N_{1}\times N_{2}\) diagonal matrix. We begin by deriving an asymptotically exact subordination formula relating the block resolvents (8) of \(R\) and \(Y\) in the limit \(N_{1},N_{2}\to\infty\) with the aspect ratio \(c={}^{N_{1}}\!/{}_{N_{2}}\) fixed. From this, we extract both the singular spectrum of \(R\), as well as the overlaps between the singular vectors of \(R\) and those of the signal matrix, \(Y\). ### A Subordination Relation for the Sum of Rectangular Matrices Exploiting the rotational invariance of \(P_{X}(X)\), we first calculate the block resolvent of \(R\) as an expectation over arbitrary rotations of the noise \(X\). Thus we write \(R=Y+O_{1}XO_{2}^{T}\), where \(O_{a}\) are Haar-distributed orthogonal \(N_{a}\)-by-\(N_{a}\) matrices. We can write the Hermitianization (6) of \(R\) in terms of the Hermitianized \(\mathbf{X}\) and \(\mathbf{Y}\): \[\mathbf{R}=\mathbf{Y}+\tilde{\mathbf{O}}\mathbf{X}\tilde{\mathbf{O}}^{T}, \tag{16}\] where we have written \(\tilde{\mathbf{O}}=\left[\begin{array}{cc}O_{1}&0\\ 0&O_{2}\end{array}\right]\). The main result of this section is the following subordination relation for the expectation of the block resolvent \(\mathbf{G}^{R}\) over the orthogonal matrix \(\tilde{\mathbf{O}}\). \[\mathbb{E}_{\tilde{\mathbf{O}}}\!\left[\mathbf{G}^{R}(\mathbf{z})\right]=\mathbf{G}^{Y}\left( \mathbf{z}-\mathbf{\mathcal{R}}^{X}\left(\mathbf{g}^{R}(\mathbf{z})\right)\right). \tag{17}\] Here we sketch the calculation for the special case \(\mathbf{z}=(z,z)\), which is sufficient for computing the singular values and associated singular-vector overlaps. The full calculation can be found in Appendix B. We first write the analog of a partition function, \[\mathcal{Z}^{R}(\mathbf{Y})=\det(zI-\mathbf{R})^{-\nicefrac{{1}}{{2}}}, \tag{18}\] and note we can write the desired matrix inverse as \[\mathbf{G}^{R}(z)=2\frac{\mathrm{d}}{\mathrm{d}\mathbf{Y}}\log\mathcal{Z}^{R}(\mathbf{Y}). \tag{19}\] We would like to average this over the block-orthogonal matrix \(\tilde{\mathbf{O}}\), yielding a "quenched" average. In Appendix B, we show that in the large \(N\) limit, the quenched and annealed averages are equivalent. In short, viewing \(\log\mathcal{Z}^{R}\) as a function of \(\tilde{\mathbf{O}}\), we find it has Lipschitz constant proportional to \(\nicefrac{{1}}{{\sqrt{N}}}\), and then use the concentration of measure of the orthogonal group, \(\mathbb{S}\mathbb{O}\left(N\right)\), with additional concentration inequalities to show that: \[\lim_{N\to\infty}\frac{1}{N}\mathbb{E}_{\tilde{\mathbf{O}}}\!\left[ \log\mathcal{Z}^{R}(\mathbf{Y})\right]=\\ \lim_{N\to\infty}\frac{1}{N}\log\mathbb{E}_{\tilde{\mathbf{O}}}\!\left[ \mathcal{Z}^{R}(\mathbf{Y})\right]\!. 
\tag{20}\] We now proceed by writing the determinant as a Gaussian integral, \[\mathcal{Z}^{R}(\mathbf{Y})=\int\frac{d\mathbf{v}}{\left(2\pi\right)^{\nicefrac{{N}}{{ 2}}}}\exp\!\left(-\frac{1}{2}\mathbf{v}^{T}\left(zI-\mathbf{R}\right)\mathbf{v}\right)\!, \tag{21}\] and then we substitute \(\mathbf{R}=\mathbf{Y}+\tilde{\mathbf{O}}\mathbf{X}\tilde{\mathbf{O}}^{T}\), extract terms that do not depend on \(\tilde{\mathbf{O}}\), and take the expectation of the terms that do, which yields an intermediate integral, \[I^{X}(\mathbf{v})\equiv\mathbb{E}_{\tilde{\mathbf{O}}}\!\left[e^{\frac{i}{2}\mathbf{v}^{T} \tilde{\mathbf{O}}\mathbf{X}\tilde{\mathbf{O}}^{T}\mathbf{v}}\right]\!, \tag{22}\] This integral is analogous to the Harish-Chandra-Itzykson-Zuber (HCIZ) or spherical integral, which appears in the calculation of the subordination relation for sums of square-symmetric matrices [33; 34; 36]. We compute this integral asymptotically in Appendix C, and highlight key points of the calculation here. First, we observe the key difference between our calculation and the square-symmetric case. In the square symmetric case, the expectation is over a single Haar-distributed orthogonal matrix that rotates \(\mathbf{v}\) arbitrarily, and so the expectation depends only on the norm of \(\mathbf{v}\). In our rectangular case, however, \(\mathbf{\hat{O}}\) has two blocks, and rotates each of the \(N_{1}\)- and \(N_{2}\)-dimensional blocks of \(\mathbf{v}\) separately, so that \(I^{X}(\mathbf{v})\) depends on both of the norms of these two blocks. Therefore, we define the 2-component vector, \(\mathbf{t}\) with components \[t_{a}:=\frac{1}{N_{a}}\left\lVert\mathbf{v}_{a}\right\rVert^{2}. \tag{23}\] Next we calculate the expectation (22) by performing an integral over an arbitrary \(N\) dimensional vector, \(\mathbf{w}\), while enforcing block-wise norm constraints, using the Fourier representation of the delta function and introducing integration variables, \(q_{1}\) and \(q_{2}\). Appealingly, we find the saddle-point conditions for this integral are of the form: \[t_{1}^{*} =q_{2}^{*}g_{XX^{T}}(q_{1}^{*}q_{2}^{*}) \tag{24a}\] \[t_{2}^{*} =q_{1}^{*}g_{X^{T}X}(q_{1}^{*}q_{2}^{*}). \tag{24b}\] That is, the block Stieltjes transform, \(\mathbf{g}^{X}\), arises naturally and the saddle-point is its functional inverse evaluated at the vector of block-wise norms of \(\mathbf{v}\). Inserting the saddle-point solution, we find that asymptotically \[I^{X}(\mathbf{v})=\exp\biggl{(}\frac{N}{2}H^{X}(\mathbf{t})\biggr{)}, \tag{25}\] where, the saddle-point free energy itself, \(H^{X}(\mathbf{t})\), has a gradient with each element proportional to the the block \(\mathcal{R}\)-transform (13): \[\frac{\mathrm{d}H^{X}(\mathbf{t})}{\mathrm{d}t_{a}}=\frac{N_{a}}{N}\mathcal{R}_{a }^{X}(\mathbf{t}). \tag{26}\] Note that given the definition in 22, it is straightforward to see that \(I^{R}(\mathbf{v})=I^{Y}(\mathbf{v})I^{X}(\mathbf{v})\), and thus \(H^{R}(\mathbf{t})=H^{Y}(\mathbf{t})+H^{X}(\mathbf{t})\). Therefore, we have established the additivity of the block \(\mathcal{R}\)-transform as well. We next substitute the result for \(I^{X}(\mathbf{v})\) back into the Gaussian integral over \(\mathbf{v}\) (21), and then introduce another pair of integration variables, \(\hat{\mathbf{t}}\), in order to decouple \(\mathbf{v}\) from its block-wise norms, \(\mathbf{t}\). 
Performing the Gaussian integral we find \[\mathbb{E}_{\mathbf{\hat{O}}}\bigl{[}\mathcal{Z}^{R}(\mathbf{Y})\bigr{]} \propto\int\mathrm{d}\mathbf{t}\mathrm{d}\hat{\mathbf{t}}\exp\biggl{(}\frac{N}{2}P^{ X,Y}\bigl{(}\mathbf{t},\hat{\mathbf{t}}\bigr{)}\biggr{)}, \tag{27}\] with \[P^{X,Y}\bigl{(}\mathbf{t},\hat{\mathbf{t}}\bigr{)}:= -\frac{1}{N}\bigl{(}N_{1}t_{1}\hat{t}_{1}+N_{2}t_{2}\hat{t}_{2} \bigr{)}+H^{X}(\mathbf{t})\] \[-\frac{1}{N}\log\det\mathbf{G}^{Y}\bigl{(}z-\hat{\mathbf{t}}\bigr{)}. \tag{28}\] Note that the block resolvent of \(Y\) arises here naturally as a function of the two-element vector, \(\hat{\mathbf{t}}\), despite the fact that we set out to find \(\mathbf{G}^{R}\) evaluated at the point \((z,z)\). The integrals over \(\mathbf{t}\) and \(\hat{\mathbf{t}}\) yield an additional pair of saddle-point conditions. The first requires \(\hat{\mathbf{t}}^{*}=\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\) and combining with the second gives \[\mathbf{t}^{*}=\mathbf{g}^{Y}\Bigl{(}z-\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\Bigr{)}. \tag{29}\] We have thus found the desired annealed free energy density, \(\frac{2}{N}\log\mathbb{E}_{\mathbf{\hat{O}}}\bigl{[}\mathcal{Z}^{R}\bigr{]}=P^{X,Y }\Bigl{(}\mathbf{t}^{*},\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\Bigr{)}\) (see (28)). Before solving for \(\mathbf{t}^{*}\) we find the resolvent subordination relation by taking the derivative with respect to \(\mathbf{Y}\) (see Appendix B for a more careful treatment), which gives \[\mathbf{G}^{R}(z)=\mathbf{G}^{Y}\Bigl{(}z-\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\Bigr{)}. \tag{30}\] Now to find \(\mathbf{t}^{*}\), we take the block-wise normalized traces to find \(\mathbf{g}^{R}(z)=\mathbf{g}^{Y}\bigl{(}z-\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\bigr{)}=\mathbf{t}^{*}\), and that completes the derivation of (17). We observe that our saddle-point condition (29) turns out to have been the subordination relation for the block Stieltjes transform: \[\mathbf{g}^{R}(z)=\mathbf{g}^{Y}\Bigl{(}z-\mathbf{\mathcal{R}}^{X}\bigl{(}\mathbf{g}^{R}(z) \bigr{)}\Bigr{)}. \tag{31}\] Note that while \(\mathbf{g}^{R}\) is evaluated at the scalar point \((z,z)\), the argument to \(\mathbf{g}^{Y}\) is the vector subordination function \(\mathbf{\zeta}\equiv z-\mathbf{\mathcal{R}}^{X}\left(\mathbf{g}^{R}(z)\right)\) whose two components are distinct in general. The singular value spectrum of the sum of rectangular matrices can thus be obtained by first finding the block Stieltjes transform, either by employing the additivity of the block \(\mathcal{R}\)-transform or by solving the subordination relation (31), and then using the inversion relation (12). ### Deformation of Singular Vectors Due to Additive Noise Turning now to the singular vectors of the data matrix \(R=\hat{U}_{1}\hat{S}\hat{U}_{2}^{\ T}=Y+X\), we quantify the effect of the noise, \(X\), on the signal, \(Y=U_{1}SU_{2}^{T}\), via the matrix of squared overlaps between the clean singular vectors of the signal, \(U_{a}\), and the noise corrupted singular vectors of the data, \(\hat{U}_{a}\), written as \(\left(\hat{U}_{a}^{T}U_{a}\right)^{2}\). In the noiseless case \(X=0\), one has \(\left(\hat{U}_{a}^{T}U_{a}\right)^{2}=I_{N_{a}}\), signifying perfect correspondence between signal and data singular vectors. 
In the presence of substantial noise, the individual overlaps are generically of order \(\nicefrac{{1}}{{N_{a}}}\); we therefore define the rescaled expected square overlap between a singular vector, \(\hat{\mathbf{u}}_{\mathbf{a}}\) of \(R\) with corresponding singular value \(\hat{s}\), and a singular vector, \(\mathbf{u}_{\mathbf{a}}\) of \(Y\), with corresponding singular value \(s\), where once again \(a=1,2\) for left and right singular vectors, respectively: \[\Phi_{a}(\hat{s},s)=N_{a}\mathbb{E}\big{[}\big{(}\hat{\mathbf{u}}_{\mathbf{a}}^{T}\mathbf{u}_{\mathbf{a}}\big{)}^{2}\big{]}. \tag{32}\] To see how to obtain the expected square overlaps from the block resolvent, \(\mathbf{G}^{R}\), we write each of the diagonal blocks, \(G^{R}_{kk}\) (8), in terms of their eigen-decomposition, and multiply on both sides by a "target" singular vector of \(Y\), say \(\mathbf{u}_{\mathbf{a}}\) with associated singular value \(s\): \[\mathbf{u}_{\mathbf{a}}^{T}G^{R}_{kk}(z)\mathbf{u}_{\mathbf{a}}=\sum_{l=1}^{N_{1}}\frac{z}{z^{2}-\hat{s}_{l}^{2}}\big{(}\mathbf{u}_{\mathbf{a}}^{T}\hat{\mathbf{u}}_{\mathbf{a}l}\big{)}^{2}. \tag{33}\] If we choose \(z=\hat{s}-i\eta\) where \(\rho_{1}^{R}(\hat{s})\sim O(1)\), with \(N_{1}^{-1}\ll\eta\ll 1\), and take the imaginary part, then we get a weighted average of the square overlaps of a macroscopic number of singular vectors of \(R\), \(\hat{\mathbf{u}}_{\mathbf{a}l}\), that have singular values close to \(\hat{s}\), with the target singular vector \(\mathbf{u}_{\mathbf{a}}\), each weighted by \(\nicefrac{{\pi}}{{2}}\rho_{a}^{R}(\hat{s}_{l})\). If we first take the limit of large \(N_{1}\) and then take \(\eta\to 0\) we obtain the expectation: \[\lim_{\eta\to 0}\frac{2}{\pi}\text{Im}\big{[}\mathbf{u}_{\mathbf{a}}^{T}G^{R}_{kk}(\hat{s}-i\eta)\mathbf{u}_{\mathbf{a}}\big{]}\to\rho_{1}^{R}(\hat{s})\Phi_{a}(\hat{s},s). \tag{34}\] Now, we use the subordination relation (17) to replace the resolvent of \(R\) with the resolvent of \(Y\): \(G^{R}_{kk}(z)=G^{Y}_{kk}(\mathbf{\zeta}(z))\) where we have written the 2-component vector \[\mathbf{\zeta}(z)=z-\mathbf{\mathcal{R}}^{X}\left(\mathbf{g}^{R}(z)\right). \tag{35}\] Since \(\mathbf{u}_{\mathbf{a}}\) is an eigenvector of \(G^{Y}_{kk}(\zeta_{1},\zeta_{2})\) with eigenvalue \(\frac{\zeta_{2}}{\zeta_{1}\zeta_{2}-s^{2}}\) (for \(k=1\)), we find \[\Phi_{1}(\hat{s},s) =\frac{2}{\pi\rho_{1}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\frac{\zeta_{2}\left(\hat{s}-i\eta\right)}{\zeta_{1}\left(\hat{s}-i\eta\right)\zeta_{2}\left(\hat{s}-i\eta\right)-i\eta-s^{2}} \tag{36a}\] \[\Phi_{2}(\hat{s},s) =\frac{2}{\pi\rho_{2}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\frac{\zeta_{1}\left(\hat{s}-i\eta\right)}{\zeta_{1}\left(\hat{s}-i\eta\right)\zeta_{2}\left(\hat{s}-i\eta\right)-i\eta-s^{2}}. \tag{36b}\] These expressions can be written in terms of the real and imaginary parts of the block \(\mathcal{R}\)-transform of the noise \(X\). In the following section we provide simplified expressions for the important case of Gaussian noise.

#### iii.2.1 Arbitrary Signal with Gaussian Noise

We show in Appendix D that the block \(\mathcal{R}\)-transform of an \(N_{1}\times N_{2}\) (with \(c=\frac{N_{1}}{N_{2}}\)) Gaussian matrix with i.i.d. entries of variance \(\nicefrac{{\sigma^{2}}}{{N_{2}}}\) is: \[\mathbf{\mathcal{R}}^{X}(\mathbf{g})=\sigma^{2}\left(\begin{array}{c}g_{2}\\ cg_{1}\end{array}\right).
\tag{37}\] Note that from the definition of the \(\mathcal{R}\)-transform, one can find that \(\mathcal{R}^{A}_{2}(\mathbf{g})g_{2}=c\mathcal{R}^{A}_{1}(\mathbf{g})g_{1}\), for any rectangular \(A\) with aspect ratio \(c\), and (37) is the only pair of linear functions of \(\mathbf{g}\) that satisfies this constraint. We substitute this into the block Stieltjes transform subordination relation \(\mathbf{g}^{R}(z)=\mathbf{g}^{Y}(\mathbf{\zeta})\) with \(\zeta_{1}=z-\sigma^{2}g_{2}^{R}(z)\) and \(\zeta_{2}=z-c\sigma^{2}g_{1}^{R}(z)\), and then use the identity \(g_{2}^{R}(z)=cg_{1}^{R}(z)+\frac{1-c}{z}\) (for arbitrary rectangular \(R\), the spectra of \(RR^{T}\) and \(R^{T}R\) differ only by a set of 0 eigenvalues). Then using \(g_{1}^{Y}(\mathbf{\zeta})=\zeta_{2}g_{YY^{T}}(\zeta_{1}\zeta_{2})\), we arrive at: \[g_{1}^{R}(z)=\zeta_{2}(z)g_{YY^{T}}\bigg{(}\zeta_{2}(z)\left(\zeta_{2}(z)- \sigma^{2}\frac{1-c}{z}\right)\bigg{)}\,. \tag{38}\] with \[\zeta_{2}(z):=z-c\sigma^{2}g_{1}^{R}(z). \tag{39}\] This is a self-consistency equation for the block Stieltjes transform of \(R\), \(\mathbf{g}^{R}(z)\), that depends on the noise variance \(\sigma^{2}\), the aspect ratio \(c\), and the standard Stieltjes transform of the signal covariance, \(g_{YY^{T}}(z)\). Once this equation is solved, the singular vector overlaps can be obtained as well. We introduce notation for the real and imaginary parts of the block Stieltjes transform: \(g_{1}^{R}(\hat{s})=h_{1}^{R}+if_{1}^{R}\), where we assume that the spectral density at \(\hat{s}\) is finite. Then we insert this into (37) to get the real and imaginary parts of the \(\mathcal{R}\)-transform of \(X\). After defining, for notational ease, \[\nu(z):=\text{Re}\zeta_{2}(z)=z-c\sigma^{2}h_{1}^{R}(z), \tag{40}\] we can simplify the overlaps (36) for the case of Gaussian noise: \[\Phi_{1}(\hat{s},s)= \frac{\nu(\hat{s})\mathcal{B}(\hat{s})-c\sigma^{2}\mathcal{A}(\hat{ s},s)}{\left(\mathcal{A}(\hat{s},s)\right)^{2}+\left(f_{1}^{R}\mathcal{B}(\hat{s}) \right)^{2}} \tag{41a}\] \[\Phi_{2}(\hat{s},s)= \frac{\left(\nu(\hat{s})-\sigma^{2}\frac{1-c}{\hat{s}}\right) \mathcal{B}(\hat{s})-c\sigma^{2}\mathcal{A}(\hat{s},s)}{\left(\mathcal{A}( \hat{s},s)\right)^{2}+\left(f_{1}^{R}\mathcal{B}(\hat{s})\right)^{2}}, \tag{41b}\] where we have \[\mathcal{A}(\hat{s},s)= \nu(\hat{s})\bigg{(}\nu(\hat{s})-\sigma^{2}\frac{1-c}{\hat{s}} \bigg{)}-\Big{(}s+c^{2}\sigma^{4}\big{(}f_{1}^{R}\big{)}^{2}\Big{)} \tag{42a}\] \[\mathcal{B}(\hat{s})= 2c\sigma^{2}\bigg{(}\nu(\hat{s})-\sigma^{2}\frac{1-c}{2\hat{s}} \bigg{)}. \tag{42b}\] These formulas are numerically confirmed in Figure 3, which shows the singular vector overlaps between data and signal, when the signal, \(Y\), is Gaussian as well. The upper three panels illustrate how individual overlaps are not self-averaging - a smooth overlap function emerges only when one averages over many overlaps within a narrow range of singular values. We stress that these formulas for the overlap of data singular vectors with the signal singular vectors do not depend directly on the unobserved signal \(Y\). Rather, they depend only on the noise variance and the block Stieltjes transform, \(g_{1}^{R}(z)\), of the noisy data matrix, \(R\). Furthermore, \(g_{1}^{R}(z)\) can be estimated empirically via kernel methods for the empirical spectral density and its Hilbert transform [32, 35, 36]. 
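Since the overlap formulas above require only \(g_{1}^{R}\), it is worth seeing what such a kernel estimate looks like in practice. The sketch below is our own illustration (the kernel width \(\eta\) and matrix sizes are arbitrary choices matching the setup of Figure 3, not the procedure of [32; 35; 36]); it estimates \(g_{1}^{R}(\hat{s}-i\eta)\) directly from the observed singular values of \(R\).

```python
import numpy as np

# Illustrative kernel estimate of the block Stieltjes transform g_1^R(s - i*eta)
# from the observed singular values of R (eta is a smoothing width, 1/N1 << eta << 1).
def block_stieltjes_1(R, s_grid, eta):
    N1, _ = R.shape
    sv = np.linalg.svd(R, compute_uv=False)
    sv = np.pad(sv, (0, max(N1 - sv.size, 0)))       # pad with zeros so the sum has N1 terms
    z = s_grid[:, None] - 1j * eta
    return np.mean(z / (z ** 2 - sv[None, :] ** 2), axis=1)

rng = np.random.default_rng(1)
N1, N2 = 1500, 1000                                  # c = N1/N2 = 3/2, as in Figure 3
R = rng.normal(scale=np.sqrt(3.0 / N2), size=(N1, N2)) \
    + rng.normal(scale=np.sqrt(1.0 / N2), size=(N1, N2))

s_grid = np.linspace(0.05, 4.0, 200)
g1 = block_stieltjes_1(R, s_grid, eta=N2 ** -0.5)
h1, f1 = g1.real, g1.imag                            # Hilbert-transform part and density part
rho1 = (2.0 / np.pi) * f1                            # density estimate via the inversion relation (12)
print(rho1.max())
```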
This suggests that significant information about the structure of the unobserved extensive signal can be inferred from noisy empirical data, and this will lay the foundation for the optimal estimators derived below.

Figure 3: **Singular Vector Overlaps of Sums of Gaussians.** Color plot (bottom) shows the theoretical prediction of the logarithm of the overlap, \(\log_{10}\Phi_{1}(\hat{s},s)\) (Eq (41a)), between the left singular vectors of \(R=Y+X\) and those of \(Y\), as a bivariate function of the associated singular values \(\hat{s}\) of \(R\) and \(s\) of \(Y\). Dashed line is identity, \(s=\hat{s}\). Here the signal \(Y\) and noise \(X\) are both rectangular Gaussian matrices with aspect ratio \(c=\nicefrac{{N_{1}}}{{N_{2}}}=\nicefrac{{3}}{{2}}\). Elements of \(Y\) are i.i.d. with variance \(\nicefrac{{\sigma_{y}^{2}}}{{N_{2}}}\) where \(\sigma_{y}^{2}=3\). Elements of \(X\) are i.i.d. with variance \(\nicefrac{{\sigma_{x}^{2}}}{{N_{2}}}\) with \(\sigma_{x}^{2}=1\). Top three panels show singular vector overlaps for 3 horizontal slices associated with 3 fixed “target” signal singular values \(s=1,2.5,4\), for 10 realizations of random matrices with \(N_{1}=1500\) and \(N_{2}=1000\). Each grey dot denotes an overlap between a left singular vector of \(R\) with singular value \(\hat{s}\) (position on x-axis) with the left singular vector of \(Y\) with singular value _closest_ to \(s\). Red dots reflect binning the singular values of \(R\) from all 10 realizations, with number of bins set to \(\sqrt{N_{2}}\) giving bin width \(\approx 0.13\). Blue is the theoretical prediction from \(\Phi_{1}(\hat{s},s)\) in (41a). Note that as the signal singular value \(s\) increases, \(\Phi_{1}(\hat{s},s)\) as a function of \(\hat{s}\) becomes more concentrated about a value slightly _larger_ than \(s\). This reflects the fact that singular vector structure in the signal \(Y\) at singular value \(s\) is mapped to singular vector structure in the data \(R\) at larger singular values \(\hat{s}\), due to singular value inflation under the addition of noise \(X\).

### SVD of the Extensive Spike Model

We now return to the spiked matrix model \(R=Y+X\), with signal \(Y=sU_{1}U_{2}^{T}\), where \(s\) is a scalar and \(U_{a}\) is an \(N_{a}\times K\) matrix with orthogonal columns. But now we assume the rank of the spike \(K=bN_{1}\) grows linearly with the dimensions of \(R\), while the aspect ratio \(c=\nicefrac{{N_{1}}}{{N_{2}}}\) is fixed as before. We will assume the elements of the noise matrix \(X\) are i.i.d. Gaussian: \(X_{ij}\sim\mathcal{N}\Big{(}0,\frac{1}{N_{2}}\Big{)}\). In the following we first discuss the singular values, and then the singular vectors of the extensive-rank model.

#### v.3.1 Singular Value Spectra of the Extensive Spike Model

\(YY^{T}\) has \(K\) eigenvalues equal to \(s^{2}\) and \(N_{1}-K\) zero eigenvalues. Its Stieltjes transform is \[g_{YY^{T}}(z)=\frac{z+\left(b-1\right)s^{2}}{z\left(z-s^{2}\right)}. \tag{43}\] We can now make use of the self-consistency equation for \(g_{1}^{R}(z)\), (38). Momentarily writing \(g\) in place of \(g_{1}^{R}(z)\) and simplifying, we find \[\left[g\!\left(\zeta_{2}-\frac{1-c}{z}\right)-1\right]\!\left[\zeta_{2}\!\left(\zeta_{2}-\frac{1-c}{z}\right)-s^{2}\right]=bs^{2}, \tag{44}\] where we have \(\zeta_{2}=z-c\sigma^{2}g\) as above. This is a quartic polynomial for \(g=g_{1}^{R}(z)\).
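One simple way to obtain \(g_{1}^{R}\) numerically from this self-consistency (an alternative to the quartic root-finding of Appendix E referenced next) is damped fixed-point iteration of (38) together with the spike Stieltjes transform (43). The sketch below is our own illustration; the grid, damping, and smoothing \(\eta\) are arbitrary choices, and convergence of the naive iteration is simply assumed here.

```python
import numpy as np

# Sketch: singular value density of the extensive spike model by damped fixed-point
# iteration of g = zeta_2 * g_{YY^T}(zeta_1 * zeta_2), with unit noise variance.
def g_spike_cov(x, s, b):
    # Stieltjes transform of YY^T: K = b*N1 eigenvalues at s^2, the rest at 0 (Eq. (43))
    return (x + (b - 1.0) * s ** 2) / (x * (x - s ** 2))

def density_extensive_spike(s, b, c, s_hat, eta=1e-2, damp=0.2, n_iter=4000):
    rho = np.empty_like(s_hat)
    g = 1.0 / (s_hat[0] - 1j * eta)                 # initial guess; warm-started along the grid
    for i, x in enumerate(s_hat):
        z = x - 1j * eta
        for _ in range(n_iter):
            zeta2 = z - c * g                       # sigma^2 = 1, as in the spike model
            zeta1 = zeta2 - (1.0 - c) / z
            g = (1.0 - damp) * g + damp * zeta2 * g_spike_cov(zeta1 * zeta2, s, b)
        rho[i] = (2.0 / np.pi) * g.imag             # inversion relation for the density
    return rho

s_hat = np.linspace(0.05, 3.5, 300)
rho = density_extensive_spike(s=1.5, b=0.25, c=0.7, s_hat=s_hat)
print("total mass (should be close to 1):", rho.sum() * (s_hat[1] - s_hat[0]))
```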
We solve this numerically for \(z\) near the real line in order to find the density of singular values of \(R\) (see Appendix E for the polynomial coefficients and details of numerical solution). The spectrum in the extensive case differs from the finite rank case most clearly in that outlier singular values at large \(s\) reflecting the signal do not concentrate at a single data singular value. Rather, (see Figure 4 top) for sufficiently strong signal \(s\), the presence of noise blurs the signal singular values into a continuous bulk that is disconnected from the noise bulk. This signal bulk occurs near the value predicted by the finite-rank theory, but has significant spread. At very weak signals \(s\) there is a single, unimodal bulk spectrum, just as in the finite-rank setting, but in contrast, these weak signals make their presence felt by extending the leading edge of the bulk beyond the edge of the spectrum predicted by the finite rank theory, _even_ when the signal strength \(s\) is below the critical signal strength \(s_{crit}\) predicted by the finite-rank model (Figure 4 3rd panel). At intermediate signal strength \(s\), the singular value distribution exhibits a connected bimodal phase not present in the finite-rank model (Figure 4 2nd panel). Thus as \(s\) increases, we see two phase transitions, first from a single unimodal bulk, to a single bimodal bulk, and then to two disconnected bulks. This final splitting of the signal bulk from the noise bulk happens at significantly larger signal \(s\) than in the finite-rank regime (Figure 4 bottom). In the limit of low rank (small \(b\)) the spectrum approaches the finite-rank theory as expected (Figure S1). Interestingly, we find that as the rank ratio \(b\) increases, there are three distinct regimes. For sufficiently strong signals (Figure 5A), the signal bulk remains disjoint from the noise bulk for all \(b\). For intermediate signals (Figure 5B), the two bulks merge but the spectrum remains bimodal for all \(b\). Finally, for weak signals (Figure 5C), there is a single connected bulk for all \(b\).

#### iv.2.2 Singular Vector Subspace Overlap in the Extensive Spike Model

We now turn to the singular vectors of the extensive spike model. For simplicity we focus on the left-singular vectors. Since the \(K\) non-zero singular values of the signal are degenerate, the only meaningful overlap to study is a subspace overlap, or the projection of the data singular vectors, \(\hat{\mathbf{u}}_{\mathbf{1}l}\), onto the entire subspace defined by \(U_{1}\).

Figure 4: **Signal-Strength Phase Transition in the SV Density of the Extensive Spike Model.** Each row of the bottom color map shows theoretical predictions for the singular value density of the extensive spike model, \(\rho_{1}^{\rm R}(\hat{s})\), corresponding to different signal strengths \(s\) (along y-axis) at a fixed rank ratio of \(b=\nicefrac{{K}}{{N_{1}}}=0.25\). In all panels in this figure, the aspect ratio is fixed to \(c=\nicefrac{{N_{1}}}{{N_{2}}}=0.7\) with \(N_{2}=2000\), and all color maps have the same color scale. Features of the finite-rank spike model are shown as lines for comparison. The black dashed line indicates the threshold signal strength \(s_{crit}\) above which the finite-rank model acquires an outlier singular value. The red line indicates the position of this outlier singular value. The vertical grey lines indicate the edges of the bulk spectrum of the finite-rank model.
The top 3 panels, corresponding to horizontal slices of the color maps, plot the singular value density at 3 different signal strengths \(s=1,1.5\) and 2. Solid blue curves indicate theoretical predictions from numerically solving (44), while grey histograms indicate the empirical density from a single realization. For comparison, the red spike indicates the position of the outlier singular value in the finite-rank theory, while the grey spike indicates the edge of the noise bulk in this theory. Together these panels demonstrate that as \(s\) increases, the singular value density undergoes two phase transitions, the first from a unimodal to a bimodal phase, and the second from a bimodal to a disconnected phase.

Therefore we compute \[\big{\|}\hat{\mathbf{u}}_{\mathbf{1}}^{T}U_{1}\big{\|}^{2}=\sum_{m=1}^{K}\big{(}\hat{\mathbf{u}}_{\mathbf{1}}^{T}\mathbf{u}_{\mathbf{1}m}\big{)}^{2}. \tag{45}\] Since this is an extensive sum, we expect that it is self-averaging, and should be well predicted by \(b\Phi_{1}(\hat{s}_{l},s)\), where \(\mathbf{\Phi}\) is defined in (41). After solving (44) for the block Stieltjes transform of \(R\), we insert the result in (41) to find \(\Phi_{1}(\hat{s},s)\). In Figure S2 we return to the simulation results presented in Figure 2 and show that the extensive-rank theory predicts both the leading outlier singular value and the subspace overlap of the corresponding singular vector, even when the finite-rank theory fails. In Figure 6 we explore the phase diagram of the extensive-rank model and successfully confirm the predictions of the extensive-rank theory for singular vector overlaps by comparing these predictions to numerical simulations. For strong signal \(s\) (Fig 6 top panel), the overlap of the data singular vectors with the true signal subspace is reasonably approximated by the finite-rank theory [22]. However, for moderate signals (Fig 6 second panel) the data singular vectors interact, competing for the signal subspace. Singular vectors associated with the leading edge of the signal bulk have higher subspace overlap with the signal, while those at the lower edge overlap less. Perhaps most intriguingly, even for weak signals below the finite-rank phase transition at \(s=s_{crit}\) the top empirical singular vectors still overlap significantly with the signal subspace (Fig 6 third panel). Note this overlap is nontrivial and \(O(1)\) _even_ when the singular value spectrum of the data is in the unimodal bulk phase. Figure 7 summarizes the results of this section with a two-dimensional phase diagram in the joint \(s\)-\(b\) plane. It shows the boundaries between three phases of the singular value spectrum: unimodal, bimodal, and disconnected. Additionally, the color map shows the average _excess_ signal subspace overlap of the singular vectors associated with the top \(b\) fraction of singular values. Since, by chance, any random vector is expected to have an overlap \(b\) with the signal subspace, we compute the excess overlap as \(\tilde{\Phi}(\hat{s},s)=b(\Phi(\hat{s},s)-1)\).

Figure 5: **Rank-Ratio Phase Transition in the SV Density of the Extensive Spike Model.** Each row of the bottom color maps shows theoretical predictions for the singular value density \(\rho_{1}^{R}(\hat{s})\) corresponding to different rank ratios \(b\) (y-axis) at a fixed signal strength.
Top 3 panels indicate matching theory (blue curves) and empirics of a single realization (grey histograms) for 3 rank ratios \(b=0.05,0.25,0.5\). Comparisons to the finite-rank theory are shown using the same conventions as in Fig 4. **A.** For sufficiently strong signal, here \(s=1.8\), the singular value density remains in the disconnected phase for all values of \(b\). **B.** For intermediate signal strengths, here \(s=1.55\), the SV density undergoes a phase transition from disconnected to bimodal as the rank ratio \(b\) increases. **C.** For subthreshold signals, here \(s=0.9\) with \(s_{crit}=c^{1/4}\approx 0.915\), the density remains connected for all \(b\).

In the color plot we show the excess overlap averaged across the singular vectors associated with the top \(b\) fraction of singular values, that is: \[\int_{t}^{\infty}\tilde{\Phi}(\hat{s},s)\rho_{1}^{R}(\hat{s})\mathrm{d}\hat{s}, \tag{46}\] where \(t\) is given by \(b=\int_{t}^{\infty}\rho_{1}^{R}(\hat{s})\mathrm{d}\hat{s}\). The figure demonstrates that in contrast to the finite-rank setting, the phase transitions in the empirical singular value spectrum of the data do _not_ coincide with the detectability of the signal. Rather, the alignment of the data singular vectors with the signal subspace is a smooth function of both signal strength \(s\) and rank ratio \(b\), and nonzero excess overlap can occur even in the unimodal phase.

## V Optimal rotationally invariant estimators

We now consider two estimation problems given noisy observations, \(R=Y+X\), of a true signal, \(Y\): 1) denoising \(R\) in order to optimally reconstruct \(Y\), and 2) estimation of the true signal covariance, \(C=YY^{T}\). We focus on the case where both signal \(Y\) and noise \(X\) are rotationally invariant (\(P_{Y}(M)=P_{Y}(O_{1}MO_{2})\) for arbitrary orthogonal matrices \(O_{1},O_{2}\), and similarly for \(X\)). In this setting it is natural to consider _rotationally invariant_ estimators \(F\) that transform consistently with rotations of the data: \(F\left(O_{1}RO_{2}\right)=O_{1}F(R)O_{2}\). Such \(F\) can only alter the singular values of \(R\) while leaving the singular vectors unchanged.

Figure 6: **Singular Vector Overlaps in the Extensive Spike Model.** Each row of the bottom color plot shows the theoretical prediction for the overlap of a left singular vector with singular value \(\hat{s}\) of the empirical data matrix \(R=Y+X\), with the entire \(K\) dimensional signal subspace of \(Y\) (i.e. the squared norm of the projection of the singular vector onto this subspace). The prediction is given by \(b\Phi_{1}(\hat{s},s)\), using (41) after numerically solving for \(g_{1}^{R}(\hat{s})\) from (44). Different rows along the y-axis correspond to different signal strengths \(s\) for \(Y\). The aspect ratio for \(X\) and \(Y\) is \(c=\nicefrac{{N_{1}}}{{N_{2}}}=0.7\), with \(N_{2}=2000\), while the rank ratio for \(Y\) is fixed at \(b=\nicefrac{{K}}{{N_{1}}}=0.1\). Comparisons to the finite-rank theory (grey lines and red curve) are shown using the same conventions as in Fig. 4. The top 3 panels show horizontal slices for \(s=0.75,1.4,2.0\). Solid black curves indicate the empirical singular value density of the data matrix \(R\) in the extensive-rank model. Blue dots indicate the overlap of singular vectors of singular value \(\hat{s}\) of \(R\) with the entire \(K\) dimensional signal subspace of \(Y\) for a single realization of \(R\). Solid black lines through the blue dots indicate matching theoretical predictions for this overlap.
For comparison, the horizontal dashed line indicates the overlap, when nonzero, predicted by the finite-rank theory (which depends only on \(s\) and not \(\hat{s}\)). Together, these plots indicate that the signal subspace of \(Y\) is detectable in the top data singular vectors of \(R\), _even_ at small signal strengths \(s\) _below_ the phase transition in the singular value density of \(R\) from unimodal to bimodal.

Figure 7: **Singular Vector Overlaps Disregard Singular Value Phases.** Two-dimensional phase diagram shows the average “excess” subspace overlap (46) of the top \(b\) fraction of empirical singular vectors with a signal of strength \(s\) and rank ratio \(b\). The black and grey lines demarcate the phases of the singular value spectrum. The grey line separates the unimodal and bimodal phases, and the black line separates the bimodal-connected phase from the disconnected phase. The signal impacts the data via significant overlaps with the signal subspace well below the boundary between unimodal and bimodal singular value spectra. Aspect ratio \(c=0.7\).

More generally, when \(Y\) is not rotationally invariant, our results yield the best estimator that only modifies singular values of \(R\). Our problem thus reduces to determining optimal shrinkage functions for the singular values. In the finite-rank case, distinct singular vectors of \(Y\) respond independently to noise, so the optimal shrinkage of \(\hat{s}\) depends only on \(\hat{s}\) [23, 29, 42]. As we show below, this is no longer the case in the extensive-rank regime. The optimal shrinkage for each singular value generally depends on the entire data singular spectrum.

### Denoising Rectangular Data

We first derive a minimum mean-square error (MMSE) denoiser to reconstruct the rotationally invariant signal, \(Y\), from the noisy data, \(R\). Under the assumption of rotational invariance, the denoised matrix is constrained to have the same singular vectors as the data \(R\), and thus takes the form \(\tilde{Y}=\hat{U}_{1}\phi\big{(}\hat{S}\big{)}\hat{U}_{2}^{T}\). Using this decomposition of \(\tilde{Y}\), the MSE can be written \[\mathcal{E}=\frac{1}{N_{1}N_{2}}\text{Tr}\Big{(}Y-\tilde{Y}\Big{)}\Big{(}Y-\tilde{Y}\Big{)}^{T}\\ =\frac{1}{N_{1}N_{2}}\sum_{m}\Big{[}s_{m}^{2}+\phi^{2}(\hat{s}_{m})-2\phi(\hat{s}_{m})\hat{\mathbf{u}}_{\mathbf{1}m}^{T}Y\hat{\mathbf{u}}_{\mathbf{2}m}\Big{]}. \tag{47}\] Minimizing with respect to \(\phi(\hat{s}_{m})\) gives the optimal shrinkage function: \[\phi^{*}(\hat{s}_{m})=\hat{\mathbf{u}}_{\mathbf{1}m}^{T}Y\hat{\mathbf{u}}_{\mathbf{2}m}, \tag{48}\] which appears to require knowledge of the very matrix being estimated, namely \(Y\). However, in the large size limit it is possible to estimate \(\phi^{*}(\hat{s}_{m})\) via the resolvent \(G^{R}(z)\). We first write \[\text{Tr}\big{[}\mathbf{Y}\mathbf{G}^{R}(z)\big{]}_{11}= \text{Tr}\big{[}YR^{T}G_{RR^{T}}\big{(}z^{2}\big{)}\big{]}\] \[= \sum_{l}\frac{\hat{s}_{l}}{z^{2}-\hat{s}_{l}^{2}}\big{(}\hat{\mathbf{u}}_{\mathbf{1}l}^{T}Y\hat{\mathbf{u}}_{\mathbf{2}l}\big{)}. \tag{49}\] As \(z\) is brought toward the singular value \(\hat{s}_{m}\) the sum is increasingly dominated by the contribution from \(\hat{\mathbf{u}}_{\mathbf{1}m}^{T}Y\hat{\mathbf{u}}_{\mathbf{2}m}=\phi^{*}(\hat{s}_{m})\). We find \[\phi^{*}(\hat{s})=\frac{2}{\pi\rho_{1}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\big{[}\tau_{1}\big{[}\mathbf{Y}\mathbf{G}^{R}(\hat{s}-i\eta)\big{]}\big{]}.
\tag{50}\] We next apply the subordination relation (17), yielding a product of \(Y\) with a \(Y\)-resolvent, whose trace is readily found: \[\tau_{1}\big{[}\mathbf{Y}\mathbf{G}^{R}(z)\big{]}=\tau_{1}\big{[}\mathbf{Y}\mathbf{G}^{Y}(\mathbf{\zeta})\big{]}=\zeta_{1}g_{1}^{Y}(\mathbf{\zeta})-1, \tag{51}\] where \(\zeta_{a}(z)=z-\mathcal{R}_{a}^{X}\big{(}\mathbf{g}^{R}(z)\big{)}\), and we have used the identity \(\tau[CG_{C}(z)]=zg_{C}(z)-1\) for arbitrary symmetric \(C\). Since \(g_{1}^{Y}(\mathbf{\zeta})=g_{1}^{R}(z)\) (31), we obtain \[\phi^{*}(\hat{s})=\frac{2}{\pi\rho_{1}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\big{[}\big{(}\hat{s}-\mathcal{R}_{1}^{X}\big{(}\mathbf{g}^{R}(\hat{s}-i\eta)\big{)}\big{)}g_{1}^{R}(\hat{s}-i\eta)\big{]}, \tag{52}\] which depends only on the block Stieltjes transform of the empirical data matrix, \(R\), and the \(\mathcal{R}\)-transform of the noise, \(X\). Importantly, the dependence on the unknown signal \(Y\) is gone, making this formula amenable to practical applications, at least when the noise distribution of \(X\) is known. For i.i.d. Gaussian noise with known variance, \(\frac{\sigma^{2}}{N_{2}}\), we have \(\mathcal{R}_{1}^{X}(\mathbf{g})=\sigma^{2}g_{2}\) and the general relation \(g_{2}(z)=cg_{1}(z)+\frac{1-c}{z}\), so (52) simplifies considerably. Writing the real and imaginary parts, \(g_{1}^{R}(z)=h_{1}^{R}(z)+if_{1}^{R}(z)\), we obtain the following simple expression depending only on the variance of the noise and the Hilbert transform of the observed data spectral density: \[\phi^{*}(\hat{s})=\hat{s}-2c\sigma^{2}h_{1}^{R}(\hat{s})-\sigma^{2}\frac{1-c}{\hat{s}}. \tag{53}\] This expression for the Gaussian case was derived previously in [43]. Figure 8 compares (53) to the optimal shrinkage found based on the finite-rank theory [29]. The extensive-rank formulas recover many more significant singular values (Figure 8A). Moreover, the mean squared error of \(Y^{*}=\hat{U}_{1}\phi^{*}\Big{(}\hat{S}\Big{)}\hat{U}_{2}^{T}\) is superior to that of the finite-rank denoiser, steadily improving as a function of the signal rank, while the finite-rank denoiser actually worsens (Figure 8B). In fact, for our simulations with \(N_{1}=1000\) and \(N_{2}=500\), the extensive-rank denoiser outperformed the finite-rank denoiser for all \(K>5\), across the range of signal strengths tested. Finally, given an estimate of the noise variance \(\sigma^{2}\), we are able to numerically estimate \(g_{1}^{R}(\hat{s})\) with kernel methods (Appendix F) and compute an empirical shrinkage function that is very close to the theoretical optimum (Figure 8C).

### Estimating the Signal Covariance

We now derive an MMSE-optimal rotationally invariant estimator for the signal covariance, \(C=YY^{T}\). Just as in [31, 33], and similarly to our results in the previous section, the optimal estimator is given by \(C^{*}=\hat{U}_{1}\psi^{*}\Big{(}\hat{S}\Big{)}\hat{U}_{1}^{T}\), where: \[\psi^{*}(\hat{s}_{l})=\hat{\mathbf{u}}_{\mathbf{1}l}^{T}C\hat{\mathbf{u}}_{\mathbf{1}l}. \tag{54}\] We observe that the top-left block of the square of the Hermitianization \(\mathbf{Y}\) is given by \(C\), and so \[\big{[}\mathbf{Y}^{2}\mathbf{G}^{R}(z)\big{]}_{11}=\sum_{l}\frac{z}{z^{2}-\hat{s}_{l}^{2}}\big{(}\hat{\mathbf{u}}_{\mathbf{1}l}^{T}C\hat{\mathbf{u}}_{\mathbf{1}l}\big{)}.
\tag{55}\] Thus, we can calculate the optimal shrinkage function by the inversion relation (12): \[\psi^{*}(\hat{s})=\frac{2}{\pi\rho_{1}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\big{[}\tau_{1}\big{[}\mathbf{Y}^{2}\mathbf{G}^{R}(\hat{s}-i\eta)\big{]}\big{]}. \tag{56}\] Now, we apply the subordination relation, \(\mathbf{G}^{R}(z)=\mathbf{G}^{Y}(\mathbf{\zeta}(z))\) with \(\zeta_{a}=z-\mathcal{R}_{a}^{X}\big{(}\mathbf{g}^{R}(z)\big{)}\), which gives \(\mathbf{Y}^{2}\mathbf{G}^{R}(z)=\mathbf{Y}^{2}\mathbf{G}^{Y}(\mathbf{\zeta})\), which has top-left block \(\zeta_{2}YY^{T}G_{YY^{T}}(\zeta_{1}\zeta_{2})\). Again, using the identity \(\tau[CG_{C}(z)]=zg_{C}(z)-1\) for arbitrary symmetric \(C\), we have \[\tau_{1}\left[\mathbf{Y}^{2}\mathbf{G}^{R}(z)\right]=\zeta_{2}\left(\zeta_{1}g_{1}^{Y}\left(\zeta_{1}\zeta_{2}\right)-1\right). \tag{57}\] We therefore conclude for general noise matrix, \(X\): \[\psi^{*}(\hat{s})=\frac{2}{\pi\rho_{1}^{R}(\hat{s})}\lim_{\eta\to 0}\text{Im}\left[\big{(}\hat{s}-\mathcal{R}_{2}^{X}\big{(}\mathbf{g}^{R}(\hat{s}-i\eta)\big{)}\big{)}\big{[}\big{(}\hat{s}-\mathcal{R}_{1}^{X}\big{(}\mathbf{g}^{R}(\hat{s}-i\eta)\big{)}\big{)}g_{1}^{R}(\hat{s}-i\eta)-1\big{]}\right]. \tag{58}\] Once again, for i.i.d. Gaussian noise with known variance, \(\frac{\sigma^{2}}{N_{2}}\), our estimator (58) simplifies considerably.

Figure 8: **Optimal Denoising of Extensive Spikes.** **A.** Each row of the bottom color map shows the optimal shrinkage function \(\phi^{*}(\hat{s})\) (53) for denoising data from the extensive spike model. Different rows on the y-axis correspond to different rank ratios \(b=\nicefrac{{K}}{{N_{1}}}\) of the signal \(Y\), while the signal strength \(s\) of \(Y\) is fixed at \(s=1.8\) and the aspect ratio is fixed at \(c=0.7\). The top 3 panels show horizontal slices with \(b=0.25,0.1,0.01\). Blue curves indicate the optimal shrinkage function for the extensive-rank model while orange curves indicate the optimal shrinkage function for the finite-rank model [29] Eq (7) (which does not depend on \(b\)). These panels indicate that the optimal shrinkage function for the extensive-rank model balances singular values more than that of the finite-rank model by more (less) aggressively shrinking larger (smaller) singular values. **B.** Comparison of mean-squared error in rectangular data denoising of \(K\) spikes, as a function of \(K\), using the optimal shrinkage function for the finite-rank model (orange) versus that of the extensive spike model (blue). Even at small spike numbers of \(K=10\) for \(N_{1}=1000\) by \(N_{2}=500\) sized data matrices, the extensive denoiser already outperforms the finite-rank denoiser, and at larger \(K\) the extensive (finite-rank) denoiser gets better (worse). **C.** Empirical shrinkage, and comparison of optimal shrinkage function for two different errors: denoising the rectangular signal matrix, \(\phi^{*}(\hat{s})\) (53) (blue) vs estimating the \(N_{1}\)-by-\(N_{1}\) signal covariance matrix \(\sqrt{\psi^{*}(\hat{s})}\) (59) (green), for signal strength \(s=1.5\) and rank ratio \(b=0.1\), with aspect ratio \(c=0.7\). Darker dots show empirical shrinkage obtained via kernel estimation (see Appendix F) of the block Stieltjes transform from the empirical singular values with \(N_{2}=2000\). **D.** Comparison between multiplicative model (spiked covariance) and additive model (spiked rectangular model) for \(K=50\) with \(N_{1}=1000\) and \(N_{2}=1500\).
_Top:_ Eigenvalue spectra of empirical covariance (\(RR^{T}\)) for multiplicative model (blue) and additive model (orange). _Bottom:_ Optimal shrinkage under the wrong model. Empirical spectrum generated by additive model, shrinkage function of multiplicative model [31] Eq (13) (orange) vs the correct, additive model (blue). **E.** Mean-square error as a function of \(K\) using multiplicative model (orange) vs correct, additive model (blue).

Expanding \(g_{1}^{R}\) into real and imaginary parts, \(g_{1}^{R}(\hat{s})=h_{1}^{R}(\hat{s})+if_{1}^{R}(\hat{s})\), and using the optimal shrinkage function found above for rectangular denoising, \(\phi^{*}(\hat{s})=\hat{s}-2c\sigma^{2}h_{1}^{R}-\sigma^{2}\frac{1-c}{\hat{s}}\), we finally obtain \[\psi\left(\hat{s}\right)= \phi(\hat{s})\bigg{(}\phi(\hat{s})+\sigma^{2}\frac{1-c}{\hat{s}}\bigg{)} \tag{59}\] \[-c\sigma^{2}\Big{[}c\sigma^{2}\Big{(}\big{(}h_{1}^{R}(\hat{s})\big{)}^{2}+\big{(}f_{1}^{R}(\hat{s})\big{)}^{2}\Big{)}-1\Big{]}.\] Just as in optimal data denoising, we find that given an estimate of the noise variance, \(\sigma^{2}\), the optimal shrinkage for covariance estimation depends only on the spectral density of \(R\) and its Hilbert transform, which can be estimated directly from data. In Figure 8 we show the optimal shrinkage function for the extensive spike model, and demonstrate that it can be approximated given only an estimate of the noise variance and the empirical data matrix, \(R\) (Figure 8C). We find that the optimal shrinkage of singular values derived for covariance estimation (59), \(\sqrt{\psi(\hat{s})}\), is substantially different from \(\phi(\hat{s})\) (53) obtained for denoising the rectangular signal (Figure 8C). The denoising shrinkage suppresses the noise more aggressively, but suppresses the signal singular values more as well. Finally, we compare the shrinkage obtained from assuming a multiplicative form of noise instead of the additive spiked rectangular model studied here. In the finite-rank regime, the spiked rectangular model can instead be modeled as a multiplicative model with data arising from a spiked covariance. Concretely, the data in the multiplicative model is generated as \(R_{mult}=\sqrt{C_{mult}}X\), i.e. each column is sampled from a spiked covariance: \(C_{mult}=YY^{T}+I\). In the finite-rank regime, and for Gaussian noise, the two models yield identical spectra and covariance-eigenvector overlaps. The optimal shrinkage for covariance estimation for the multiplicative model for arbitrary \(C_{mult}\) has previously been reported ([31] Eq (13) for Gaussian noise and [33] Eq IV.8 for more general noise), and here we consider the impact of employing the multiplicative shrinkage formula on data generated from the additive spiked rectangular model. We observe (Figure 8D Top) that for small rank-ratio (\(b=0.05\)) the two models give fairly similar eigenvalue distributions. Nevertheless, applying the optimal multiplicative shrinkage on the additive model data gives poor results: the shrinkage obtained is non-monotonic in the empirical eigenvalue (Figure 8D Bottom). Furthermore, the mean-square error in covariance estimation obtained with the multiplicative shrinkage worsens as a function of rank (Figure 8E).
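To summarize the Gaussian-noise estimators of this section in executable form, the following sketch (our own) applies the shrinkage formulas (53) and (59) to an observed matrix \(R\). It assumes the noise variance \(\sigma^{2}\) is known, uses an illustrative kernel width \(\eta=N_{2}^{-1/2}\) in place of the procedure of Appendix F, and takes \(N_{1}\leq N_{2}\) for simplicity.

```python
import numpy as np

# Sketch: denoise R and estimate C = Y Y^T using the Gaussian-noise shrinkage
# formulas (53) and (59). Assumes the noise variance sigma^2 is known and N1 <= N2.
def shrinkage_gaussian(R, sigma2, eta=None):
    N1, N2 = R.shape
    c = N1 / N2
    U1, sv, V2t = np.linalg.svd(R, full_matrices=False)
    eta = eta if eta is not None else N2 ** -0.5
    z = sv[:, None] - 1j * eta
    g1 = np.mean(z / (z ** 2 - sv[None, :] ** 2), axis=1)       # kernel estimate of g_1^R
    h1, f1 = g1.real, g1.imag
    phi = sv - 2 * c * sigma2 * h1 - sigma2 * (1 - c) / sv                  # Eq (53)
    psi = phi * (phi + sigma2 * (1 - c) / sv) \
        - c * sigma2 * (c * sigma2 * (h1 ** 2 + f1 ** 2) - 1.0)             # Eq (59)
    Y_hat = (U1 * phi) @ V2t            # denoised rectangular signal (no positivity clipping)
    C_hat = (U1 * psi) @ U1.T           # estimated signal covariance
    return Y_hat, C_hat

# Extensive spike example (parameters are illustrative).
rng = np.random.default_rng(2)
N1, N2, K, s = 1000, 1500, 100, 1.8
U = np.linalg.qr(rng.normal(size=(N1, K)))[0]
V = np.linalg.qr(rng.normal(size=(N2, K)))[0]
Y = s * U @ V.T
R = Y + rng.normal(scale=np.sqrt(1.0 / N2), size=(N1, N2))

Y_hat, C_hat = shrinkage_gaussian(R, sigma2=1.0)
print("relative denoising MSE:", np.mean((Y_hat - Y) ** 2) / np.mean(Y ** 2))
```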
## VI Discussion While one approach to estimation depends on prior information about the structure of the signal (such as sparsity of singular vectors for example), we have followed a line of work on rotationally invariant estimation that assumes there is no special basis for either the signal or the noise [44; 45]. In this approach, knowledge of the expected deformation of the singular value decomposition (SVD) of the data due to noise allows for the explicit calculation of optimal estimators. In the case of finite-rank signals, where the impact of additive noise on singular values and vectors is known [21; 22], formulas for optimal shrinkage for both denoising [23; 28; 29] and covariance estimation [42] have been found. For extensive-rank signals, however, while formulas for the singular value spectrum of the free sum of rectangular matrices are known [37; 38; 40], there are no prior results for the singular vectors of sums of generic rectangular matrices (though see [41] for contemporaneous results). Even in the setting of square, Hermitian matrices, results on eigenvectors of sums are relatively new [30; 31]. Recent work derived a subordination relation for the product of square symmetric matrices, and applied it to a "multiplicative" noise model in which each observation of high-dimensional data is drawn independently from some unknown, potentially extensive-rank, covariance matrix [33]. In that context, knowledge of the overlaps of the empirical covariance with the unobserved population covariance is sufficient to enable the construction of an optimal rotationally invariant estimator [31; 33; 35; 36]. We have derived analogous results for signals with additive noise: we have computed an asymptotically exact subordination relation for the block resolvent of the free sum of rectangular matrices, i.e. for the resolvent of the Hermitianization of the sum in terms of the resolvents of the Hermitianization of the summands. From the subordination relation, we derived the expected overlap between singular vectors of the sum and singular vectors of the summands. These overlaps quantify how singular vectors are deformed by additive noise. We have calculated separate expressions for both the optimal non-linear shrinkage of singular values for signal denoising, and for the optimal non-linear shrinkage of covariance eigenvalues for covariance estimation. Under the assumption of i.i.d. Gaussian noise these shrinkage functions depend only on the noise variance and the empirical singular value spectrum, which we have shown can be estimated by kernel methods. We have applied our results in order to study the extensive spike model. We found a significant improvement in estimating signals with even fairly low rank-ratios, over methods that are based on the finite-rank theory. Our results may have significant impact on ongoing research questions around spiked matrix models [24; 25; 26; 27], such as the question of the detectability of spikes or optimal estimates for the number of spikes, for example. The subordination relation derived here is closely related to operator-valued free probability, which provides a systematic calculus for block matrices with orthogo nally/unitarily invariant blocks, such as the \(2\times 2\)-block Hermitianizations \(\mathbf{Y},\mathbf{X},\mathbf{R}\). 
Spectral properties of a matrix are encoded via \(2\times 2\) operator-valued Stieltjes and \(\mathcal{R}\) transforms - whose diagonal elements correspond exactly to the block Stieltjes and \(\mathcal{R}\)-transforms defined here. A fundamental result in this context is an additive subordination relation for the operator-valued Stieltjes transform, which is an identical formula to (B20) [40]. We comment briefly on our derivation of the block resolvent subordination, which is summarized in Section IV.1 and treated fully in Appendix B. First, we note that previous work derived resolvent subordination relations for square symmetric matrices using the replica method [33; 34; 36]. These works assume the replicas decouple which results in a calculation that is equivalent to computing the annealed free energy. Here we used concentration of measure arguments to prove that the annealed approximation is asymptotically correct (Appendix B). In the course of our derivation of the subordination relation we encountered the expectation over arbitrary block-orthogonal rotations of the Hermitianization of the noise matrix (eq (22) in the main text, and Appendix C). As noted in the main text, this expectation plays an analogous role to the HCIZ spherical integral which appears in the derivation of the subordination relation of square symmetric matrices [36]. The logarithm of the rank-1 spherical integral yields the antiderivative of the standard R-transform for square symmetric matrices [46]. To our knowledge, the particular block spherical integral in our work (Appendix C) has not been studied previously. In fact, it is very closely related to the rectangular spherical integral, whose logarithm is the antiderivative of the so-called _rectangular_\(\mathcal{R}\)-transform [37]. In our setting, two such rectangular spherical integrals are coupled, and the logarithm of the result is the antiderivative of the _block_\(\mathcal{R}\)-transform (13) (up to component-wise proportionality constants related to the aspect ratio). While the _rectangular_\(\mathcal{R}\)-transform is additive, its relationship to familiar RMT objects such as the Stieltjes transform is quite involved. In contrast, the block \(\mathcal{R}\)-transform that arises from the block spherical integral is a natural extension of the scalar \(\mathcal{R}\)-transform, with a simple definition in terms of the functional inverse of the block Stieltjes transform. Furthermore, as mentioned above, the block \(\mathcal{R}\)-transform is essentially a form of the more general operator \(\mathcal{R}\)-transform from operator-valued free probability. This formulation is appealing because it provides a direct link between a new class of spherical integrals and operator-valued free probability. We stress that even under the assumption of Gaussian i.i.d. noise, the optimal estimators we obtained in V are not quite _bona fide_ empirical estimators, as they depend on an estimate of the noise variance. This may not be a large obstacle, but we leave it for future work. We do note that while under the assumption of finite-rank signals, appropriate noise estimates can be obtained straightforwardly for example from the median empirical singular value (see [28] for example), this is no longer the case in the extensive regime that we study. In empirical contexts in which one has access to multiple noisy instantiations of the same underlying signal, however, a robust estimate of the noise variance may be readily available. 
Other recent work has also studied estimation problems in the extensive-rank regime. [47] studied the distribution of pairwise correlations in the extensive regime. [43] studied optimal denoising under a known, factorized extensive-rank prior, and arrived at the same shrinkage function we find for the special case of Gaussian i.i.d. noise (53), and [48] studied both denoising and dictionary learning with known, extensive-rank prior. Lastly, during the writing of this manuscript, the pre-print [41] presented partially overlapping work. They derived the subordination relation for the resolvent of Hermitianizations as well as the optimal rotationally invariant data denoiser, and additionally establish a relationship between the rectangular spherical integral and the asymptotic mutual information between data and signal. However, unlike our work, this contemporaneous work: (1) does not calculate the optimal estimator of the signal covariance; (2) does not explore the phase diagram of extensive spike model and its associated conceptual insights about the decoupling of singular value phase transitions from singular vector detectability that occurs at extensive but not finite rank; (3) does not extensively numerically explore the inaccuracy and inferior data-denoising and signal-estimation performance of the finite-rank model compared to the extensive rank model, a key motivation for extensive rank theory; (4) at a technical level [41] follows the approach of [33] using a decoupled replica approach yielding an annealed approximation, whereas we prove the annealed approximation is accurate using results from concentration of measure; (5) also at a technical level [41] employs the rectangular spherical integral resulting in rectangular \(\mathcal{R}\)-transforms, whereas we introduce the block spherical integral yielding the block \(\mathcal{R}\)-transform, thereby allowing us to obtain simpler formulas. We close by noting that our results for optimal estimators depend on the assumption of rotational (orthogonal) invariance. Extending this work to derive estimators for extensive-rank signals with structured priors is an important topic for future study. The rectangular subordination relation and the resulting formulas for singular vector distortion due to additive noise hold for arbitrary signal matrices. These may prove to be of fundamental importance from the perspective of signal estimation in the regime of high-dimensional statistics, as any attempt to estimate the structure of a signal in the presence of noise must overcome both the distortion of the signal's singular value spectrum _and_ the deformation of the signal's singular vectors. ## Acknowledgments We thank Javan Tahir for careful reading of this manuscript that led to significant improvements. We thank Haim Sompolinsky and Gianluigi Mongillo for helpful comments. S.G. thanks the Simons Foundation and an NSF CAREER award for funding. I.D.L. thanks the Koret Foundation for funding. G.C.M. thanks the Stanford Neurosciences Graduate Program and the Simons Foundation. ## Appendix A Finite-Rank Theory for the Spiked Matrix Model We review formulas from [22] for the finite-rank spiked matrix model, \(R=sU_{1}U_{2}^{T}+X\), where the \(U_{a}\) are \(N_{a}\times K\) with orthonormal columns, and \(X\) is a random \(N_{1}\times N_{2}\) matrix with well-defined singular value spectrum in the large size limit with fixed aspect ratio \(c=\nicefrac{{N_{1}}}{{N_{2}}}\). In the case where the noise \(X\) is i.i.d. 
Gaussian with variance \(\nicefrac{{1}}{{N_{2}}}\), the critical signal strength below which the signal is undetectable is \(s_{crit}=c^{\nicefrac{{1}}{{4}}}\). The top \(K\) singular values of \(R\) are given by \[\hat{s}_{1\leq K}=\begin{cases}s\sqrt{\left(1+\frac{c}{s^{2}}\right)\left(1+ \frac{1}{s^{2}}\right)}&\text{for}\;s>s_{crit}\\ 1+\sqrt{c}&\text{otherwise}.\end{cases} \tag{38}\] The overlaps of the corresponding singular vectors, \(\hat{\mathbf{u}}_{al}\) for \(l=1...K\), with the signal subspaces, \(U_{a}\), for \(a=1,2\) are given by \[\left\|\hat{\mathbf{u}}_{1l}^{T}U_{1}\right\|^{2}=\begin{cases}\frac{s^{4}-c}{s^{ 4}+cs^{2}}&\text{for}\;s>s_{crit}\\ 0&\text{otherwise}\end{cases} \tag{39}\] \[\left\|\hat{\mathbf{u}}_{2l}^{T}U_{2}\right\|^{2}=\begin{cases}\frac{s^{4}-c}{s^{ 4}+s^{2}}&\text{for}\;s>s_{crit}\\ 0&\text{otherwise}\end{cases}. \tag{40}\] For a generic noise matrix, \(X\), with block Stieltjes transform, \(\mathbf{g}^{X}(z)\), [22] defines the \(D\)-transform, which is the product of the elements of \(\mathbf{g}^{X}(z)\): \[D_{X}(z)=g_{1}^{X}(z)g_{2}^{X}(z) \tag{41}\] Then the critical signal satisfies \[D_{X}(x_{+})=\frac{1}{s_{crit}^{2}}, \tag{42}\] where \(x_{+}\) is the supremum of the support of the singular value spectrum of \(X\). For suprathreshold signals, \(s>s_{crit}\), the empirical singular value outlier, \(\hat{s}\) satisfies \[D_{X}(\hat{s})=\frac{1}{s^{2}} \tag{43}\] and the two overlaps, corresponding to blocks \(a=1\) and \(a=2\) for left and right singular vectors, respectively, are given by \[\left\|\hat{\mathbf{u}}_{a}^{T}U_{a}\right\|^{2}=\frac{-2g_{a}^{X}(\hat{s})}{D_{X }(\hat{s})D_{X}^{\prime}(\hat{s})} \tag{44}\] ## Appendix B Derivation of the Block-Resolvent Subordination Relation Here we calculate the asymptotic subordination relation (17), found in Section IV.1 of the main text, for the block resolvent of the free sum of rectangular matrices \(R=Y+O_{1}XO_{2}\), and \(O_{a}\) Haar-distributed orthogonal matrices of size \(N_{a}\) for \(a=1,2\). We write \(N=N_{1}+N_{2}\) and study the large \(N\) limit with fixed aspect ratio \(c=\nicefrac{{N_{1}}}{{N_{2}}}\). For notational ease we introduce the ratio of each block's size to the entire matrix: \[\beta_{a}:=\frac{N_{a}}{N}, \tag{45}\] We begin by writing \[\mathbf{M}:=zI-\mathbf{R}=zI-(\mathbf{Y}-\bar{\mathbf{O}}\mathbf{X}\bar{\mathbf{O}}), \tag{46}\] where \(\bar{\mathbf{O}}=\left[\begin{array}{cc}O_{1}&0\\ 0&O_{2}\end{array}\right]\). Next we define the partition function, \(\mathcal{Z}^{R}(\mathbf{Y}):=\left(\det\mathbf{M}\right)^{-\nicefrac{{1}}{{2}}}\), which we can write as a Gaussian integral: \[\mathcal{Z}^{R}(\mathbf{Y})=\int\frac{\mathrm{d}\mathbf{v}}{\sqrt{2\pi}^{N}}\exp\left( -\frac{1}{2}\mathbf{v}^{T}\mathbf{M}\mathbf{v}\right). \tag{47}\] We also define the corresponding free energy density \[\mathcal{F}^{R}(\mathbf{Y}):=\frac{2}{N}\log\mathcal{Z}^{R}(\mathbf{Y})\,, \tag{48}\] and as discussed in the main text, the desired block resolvent is \(\mathbf{G}^{R}(z)=\mathbf{M}^{-1}=\frac{\mathrm{d}}{\mathrm{d}\mathbf{Y}}\mathcal{F}^{R}( \mathbf{Y})\). Prior work on the case of square symmetric matrices has employed the replica trick to approximate to compute this quenched average [33; 34; 36; 41]. 
In our notation, this amounts to approximating \(\log\mathcal{Z}_{R}=\lim_{n\to 0}\frac{\mathcal{Z}_{R}^{n-1}}{n}\), and then computing \(\mathbb{E}_{\mathbf{O}}\big{[}\mathbf{G}^{R}(z)\big{]}=\mathbb{E}_{\mathbf{O}}\big{[} \mathcal{Z}^{n-1}\frac{\mathrm{d}\mathbf{Z}}{\mathrm{d}\mathbf{Y}}\big{]}\) via \(n\) Gaussian integrals. Prior work has assumed that the replicas do not couple, which effectively amounts to computing the annealed average, \(\log\mathbb{E}_{\bar{\mathbf{O}}}\big{[}\mathcal{Z}^{R}(\mathbf{Y})\big{]}\). We show in Appendix B using concentration inequalities that the annealed calculation is in fact asymptotically exact. In particular, as \(N\to\infty\), \[\mathcal{F}^{R}(\mathbf{Y})=\mathbb{E}_{\mathbf{O}}\left[\frac{2}{N}\log\mathcal{Z}^{R} (\mathbf{Y})\right]\to\frac{2}{N}\log\mathbb{E}_{\mathbf{O}}\left[\mathcal{Z}^{R}(\bm {Y})\right], \tag{49}\] that is, the quenched average \(\mathcal{F}^{R}(\mathbf{Y})\) tends toward the annealed average \(\frac{2}{N}\log\mathbb{E}_{\bar{\mathbf{O}}}\left[\mathcal{Z}^{R}(\mathbf{Y})\right]\). Writing out the expectation and separating factors that depend on \(\bar{\mathbf{O}}\), we have \[\mathbb{E}_{\mathbf{O}}\big{[}\mathcal{Z}^{R}(\mathbf{Y})\big{]} =\int\frac{\mathrm{d}\mathbf{v}}{\sqrt{2\pi}^{N}}e^{-\frac{1}{2}\mathbf{v }^{T}(z-\mathbf{Y})\mathbf{v}}\] \[\times\mathbb{E}_{\mathbf{O}}\left[e^{\frac{1}{2}\mathbf{v}^{T}\bar{\mathbf{O}} \mathbf{X}\bar{\mathbf{O}}^{T}\mathbf{v}}\right]. \tag{50}\] The expectation over \(\bar{\mathbf{O}}\) on the right hand side is a rank-1 block spherical integral. In Appendix C, we derive an asymptotic expression for the expectation, which depends only on, \(\mathbf{\mathcal{R}}^{X}\), the block \(\mathcal{R}\)-transform of the noise matrix \(X\), and the block-wise norms of the vector, \(\mathbf{v}\). Introducing the two-element vector, \(\mathbf{t}\) whose \(a^{th}\) entry is \(\frac{1}{N_{a}}\left\|\mathbf{v}_{\mathbf{a}}\right\|^{2}\), we have \[\mathbb{E}_{\mathbf{O}}\Big{[}e^{\frac{1}{2}\mathbf{v}^{T}\bar{\mathbf{O}}\mathbf{X}\bar{\mathbf{O}} ^{T}\mathbf{v}}\Big{]}=\exp\biggl{(}\frac{N}{2}H^{X}(\mathbf{t})\biggr{)}, \tag{100}\] where in anticipation of a saddle-point condition below, we write \(H^{X}(\mathbf{t})\) as a contour integral within \(\mathbb{C}^{2}\) from \(0\) to \(\mathbf{t}\): \[H^{X}(\mathbf{t}):=\int_{0}^{\mathbf{t}}d\mathbf{w}\cdot\Bigl{(}\mathbf{\beta}\odot\mathbf{\mathcal{ R}}^{X}\left(\mathbf{w}\right)\Bigr{)}, \tag{101}\] where \(\mathbf{\beta}=\frac{1}{N}\left(\begin{array}{c}N_{a}\\ N_{b}\end{array}\right)\) and \(\odot\) is element-wise product. This gives \[\mathbb{E}_{\mathbf{O}}\left[\mathcal{Z}^{R}(\mathbf{Y})\right]= \int\frac{d\mathbf{v}}{\bigl{(}\sqrt{2\pi}\bigr{)}^{N}}e^{-\frac{1}{2 }\mathbf{v}^{T}\left(z-\mathbf{Y}\right)\mathbf{v}}\] \[\times\exp\biggl{(}\frac{N}{2}H^{X}(\mathbf{t})\biggr{)}, \tag{102}\] In order to decouple \(\mathbf{v}\) from \(\mathbf{t}\), we introduce integration variables and Fourier expressions for the delta-function constraints \(\delta\Bigl{(}N_{a}t_{a}-\left\|\mathbf{v}_{\mathbf{a}}\right\|^{2}\Bigr{)}\): \[1=\int\mathrm{d}t_{a}\int\frac{\mathrm{d}\hat{t}_{a}}{4\pi i}\exp\biggl{(}- \frac{1}{2}\hat{t}_{a}\Bigl{(}N_{a}t_{a}-\left\|\mathbf{v}_{a}\right\|^{2}\Bigr{)} \biggr{)}. 
\tag{103}\] We now have \[\mathbb{E}_{\mathbf{O}}\bigl{[}\mathcal{Z}^{R}(\mathbf{Y})\bigr{]}=\int\biggl{(}\prod _{a}\frac{\mathrm{d}t_{a}\mathrm{d}\hat{t}_{a}}{4\pi i}e^{-\frac{1}{2}N_{a}t_ {a}\hat{t}_{a}}\biggr{)}\exp\biggl{(}\frac{N}{2}H^{X}(\mathbf{t})\biggr{)}\int \frac{d\mathbf{v}}{\bigl{(}\sqrt{2\pi}\bigr{)}^{N}}\exp\biggl{(}-\frac{1}{2}\mathbf{v }^{T}\left(zI-\bar{\mathbf{T}}-\mathbf{Y}\right)\mathbf{v}\biggr{)}, \tag{104}\] where we have introduced the diagonal \(N\times N\) matrix, \(\bar{\mathbf{T}}\), which has \(\hat{t}_{1}\) along the first \(N_{1}\) diagonal elements followed by \(\hat{t}_{2}\) along the remaining \(N_{2}\) elements. The integral over \(\mathbf{v}\) is a Gaussian integral with inverse covariance \(\bigl{(}zI-\bar{\mathbf{T}}-\mathbf{Y}\bigr{)}\). Crucially, this covariance is exactly, \(\mathbf{G}^{Y}\bigl{(}z-\hat{\mathbf{t}}\bigr{)}\) the block resolvent of \(Y\) with a shifted argument, \(z-\hat{\mathbf{t}}\). Note that the block resolvent, as a function of two complex numbers, has emerged here in our calculation. The result is the inverse square-root of the determinant: \[\int\mathrm{d}\mathbf{v}e^{-\frac{1}{2}\mathbf{v}^{T}\bigl{(}zI-\mathbf{Y}-\bar{\mathbf{T}} \bigr{)}\mathbf{v}}\propto\det\bigl{(}zI-\mathbf{Y}-\bar{\mathbf{T}}\bigr{)}^{-\frac{1}{2 }}. \tag{105}\] Thus, ignoring proportionality constants we have \[\mathbb{E}_{\bar{\mathbf{O}}}\bigl{[}\mathcal{Z}^{R}(\mathbf{Y})\bigr{]} \propto\int\mathrm{d}\mathbf{t}\mathrm{d}\hat{t}\exp\biggl{(}\frac{N}{ 2}P^{X,Y}\bigl{(}\mathbf{t},\hat{\mathbf{t}}\bigr{)}\biggr{)}, \tag{106}\] with \[P^{X,Y}\bigl{(}\mathbf{t},\hat{\mathbf{t}}\bigr{)}:= -\beta_{1}t_{1}\hat{t}_{1}-\beta_{2}t_{2}\hat{t}_{2}+H^{X}(\mathbf{t})\] \[-\frac{1}{N}\log\det\bigl{(}zI-\mathbf{Y}-\bar{\mathbf{T}}\bigr{)}. \tag{107}\] We expect this integral to concentrate around its saddle point in the large-size limit. We find that taking the derivative of \(P^{X,Y}\bigl{(}\mathbf{t},\hat{\mathbf{t}}\bigr{)}\) with respect to \(t_{a}\) gives the following appealing saddle-point condition for \(\hat{\mathbf{t}}\): \[\hat{\mathbf{t}}=\mathbf{\mathcal{R}}^{X}(\mathbf{t}). \tag{108}\] In order to take the derivatives with respect to \(\hat{t}_{a}\), we find it helpful to write out \(N_{2}\) singular values \(s_{m}\) of \(Y\) (including \(N_{2}-N_{1}\) zeros when \(N_{2}>N_{1}\)). Then \(\bigl{(}zI-\mathbf{Y}-\bar{\mathbf{T}}\bigr{)}\) decouples into \(2\times 2\) matrices of the form \(\left[\begin{array}{cc}z-\hat{t}_{1}&-s_{m}\\ -s_{m}&z-\hat{t}_{2}\end{array}\right]\), and that allows us to write \[\det\bigl{(}zI-\mathbf{Y}-\bar{\mathbf{T}}\bigr{)} =\bigl{(}z-\hat{t}_{1}\bigr{)}^{(N_{1}-N_{2})}\] \[\times\prod_{m=1}^{N_{2}}\bigl{[}\bigl{(}z-\hat{t}_{1}\bigr{)} \bigl{(}z-\hat{t}_{2}\bigr{)}-s_{m}^{2}\bigr{]}.\] Then we find that taking the derivative of (107) gives the final saddle-point condition: \[t_{1} =\bigl{(}z-\hat{t}_{2}\bigr{)}g_{YY^{T}}\bigl{(}\bigl{(}z-\hat{t }_{1}\bigr{)}\bigl{(}z-\hat{t}_{2}\bigr{)}\bigr{)} \tag{109}\] \[t_{2} =\bigl{(}z-\hat{t}_{1}\bigr{)}g_{Y^{T}Y}\bigl{(}\bigl{(}z-\hat{t}_{ 1}\bigr{)}\bigl{(}z-\hat{t}_{2}\bigr{)}\bigr{)}. \tag{110}\] We can write this concisely in vector notation: \[\mathbf{t}^{*}=\mathbf{g}^{Y}\Bigl{(}z-\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\Bigr{)}. \tag{111}\] Thus we have in the large \(N\) limit, the desired free energy is \(\mathcal{F}^{R}(\mathbf{Y})=P^{X,Y}\Bigl{(}\mathbf{t}^{*},\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*}) \Bigr{)}\). 
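Since the saddle point \(\mathbf{t}^{*}\) will be identified with \(\mathbf{g}^{R}(z)\) below, the resulting relation can also be checked directly in simulation. The following sketch is our own illustration (sizes, variances, and the evaluation point \(z\) are arbitrary choices); it compares the empirical block Stieltjes transform of \(R=Y+X\) with Gaussian \(X\) against \(\mathbf{g}^{Y}\) evaluated at the subordinated argument, using the Gaussian block \(\mathcal{R}\)-transform (37).

```python
import numpy as np

# Illustrative numerical check of g^R(z) = g^Y(z - R^X(g^R(z))) for Gaussian noise X,
# where R^X(g) = sigma^2 * (g_2, c*g_1) as in Eq. (37).
rng = np.random.default_rng(3)
N1, N2 = 900, 1200
c, sigma2 = N1 / N2, 1.0

Y = rng.normal(scale=np.sqrt(2.0 / N2), size=(N1, N2))   # reference matrix playing the role of Y
X = rng.normal(scale=np.sqrt(sigma2 / N2), size=(N1, N2))
R = Y + X

def block_g(M, z):
    # empirical block Stieltjes transform (g_1, g_2) of a rectangular M at the point (z, z)
    n1, n2 = M.shape
    sv = np.linalg.svd(M, compute_uv=False)
    pad1 = np.pad(sv, (0, max(n1 - sv.size, 0)))
    pad2 = np.pad(sv, (0, max(n2 - sv.size, 0)))
    return np.array([np.mean(z / (z ** 2 - pad1 ** 2)), np.mean(z / (z ** 2 - pad2 ** 2))])

z = 2.0 - 0.1j
gR = block_g(R, z)
zeta1, zeta2 = z - sigma2 * gR[1], z - sigma2 * c * gR[0]   # subordinated argument (zeta_1, zeta_2)

svY = np.linalg.svd(Y, compute_uv=False)
x = zeta1 * zeta2
gY1 = zeta2 * np.mean(1.0 / (x - np.pad(svY, (0, max(N1 - svY.size, 0))) ** 2))
gY2 = zeta1 * np.mean(1.0 / (x - np.pad(svY, (0, max(N2 - svY.size, 0))) ** 2))

print("g^R(z)    :", gR)               # the two sides should agree up to finite-size corrections
print("g^Y(zeta) :", np.array([gY1, gY2]))
```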
Informally, to derive the matrix subordination relation, we differentiate \(\mathcal{F}^{R}\), which, from (107), yields \(\bigl{(}zI-\bar{\mathbf{T}}-\mathbf{Y}\bigr{)}^{-1}=\mathbf{G}^{Y}(z-\mathbf{\mathcal{R}}(\mathbf{t }^{*}))\). But we argued above that \(\frac{\mathrm{d}}{\mathrm{d}\mathbf{Y}}\mathcal{F}^{R}(\mathbf{Y})=\mathbf{G}^{R}(z)\), which gives the subordination relation. More formally, consider a Hermitian test matrix, \(A\), with a bounded spectral distribution, and then observe that \(\frac{\mathrm{d}}{\mathrm{d}y}\log\det(\mathbf{M}+yA)=\tau\big{[}A\mathbf{M}^{-1}\big{]}\). Thus, we substitute \(\mathbf{Y}\rightarrow\mathbf{Y}+yA\) into the expression for \(\mathcal{F}^{R}\) (144) and differentiate to find \[\lim_{N\rightarrow\infty}\tau\left[A\mathbf{G}^{R}\left(z\right)\right]=\lim_{N \rightarrow\infty}\tau\left[A\mathbf{G}^{Y}\left(z-\mathbf{\mathcal{R}}^{X}\left(\bm {t}^{*}\right)\right)\right], \tag{149}\] Using \(A\) proportional to either \(\left[\begin{array}{cc}I_{N_{1}}&0\\ 0&0\end{array}\right]\) or \(\left[\begin{array}{cc}0&0\\ 0&I_{N_{2}}\end{array}\right]\), we can now take the normalized block-wise traces of both sides, yielding \[\mathbf{g}^{R}(z)=\mathbf{g}^{Y}\Big{(}z-\mathbf{\mathcal{R}}^{X}(\mathbf{t}^{*})\Big{)}. \tag{150}\] Thus, comparing to 148 we have \(\mathbf{t}^{*}=\mathbf{g}^{R}(z)\), and 150 becomes the subordination relation for the block Stieltjes transform. Substituting \(\mathbf{t}^{*}=\mathbf{g}^{R}(z)\) into 149, we obtain the desired resolvent relation \[\tau\big{[}A\mathbf{G}^{R}(z)\big{]}=\tau\Big{[}A\mathbf{G}^{Y}\Big{(}z-\mathbf{\mathcal{ R}}^{X}\big{(}\mathbf{g}^{R}(z)\big{)}\Big{)}\Big{]}, \tag{151}\] for all Hermitian test matrices \(A\) with bounded spectrum, or as written informally in the main text, \(\mathbf{G}^{R}(z)=\mathbf{G}^{Y}\Big{(}z-\mathbf{\mathcal{R}}^{X}\big{(}\mathbf{g}^{R}(z) \big{)}\Big{)}\). ### Proof the Annealed Free Energy Asymptotically Equals the Quenched Free Energy Suppose \(A,B\) are hermitian matrices with bounded spectrum. Define the function \[f\left(O\right):=\frac{1}{N}\log\det\left(z-\left(A+OBO^{T}\right)\right), \tag{152}\] for arbitrary orthogonal \(O\in\mathbb{S}\mathbb{O}\left(N\right)\). For sufficiently large \(z\), the matrix in the determinant is always positive, and this is a smooth function on \(\mathbb{S}\mathbb{O}\left(N\right)\) bounded above and below by constants \(c_{\pm}:=\log\Big{(}z\pm\left(\left\|A\right\|_{op}+\left\|B\right\|_{op} \right)\Big{)}\), where \(\left\|\cdot\right\|_{op}\) is the operator norm. For such \(z\), we prove the following Lipschitz bound below (see Section B.0.1.1 for proof): \[\left|f\left(O_{1}\right)-f\left(O_{2}\right)\right|\leq\frac{\mu}{\sqrt{N}} \left\|O_{1}-O_{2}\right\|_{2}, \tag{153}\] where \(\mu:=\pi\left\|B\right\|_{op}e^{-c_{-}}\) and \(\left\|\cdot\right\|_{2}\) is the Hilbert-Schmidt norm, \(\left\|X\right\|_{2}=\sqrt{\mathrm{Tr}[X^{T}X]}\). In particular, we will be interested in the case that the orthogonal matrix \(O\) is block diagonal with blocks \(O_{a}\in\mathbb{S}\mathbb{O}(N_{a})\), and thus \(O\) is a member of the product space \(\mathbb{S}\mathbb{O}(N_{1})\times\mathbb{S}\mathbb{O}(N_{2})\) with \(N_{1}+N_{2}=N\). 
The group \(\mathbb{S}\mathbb{O}\left(N_{a}\right)\) with Haar measure and Hilbert-Schmidt metric obeys a logarithmic Sobolev inequality with constant \(\frac{4}{N_{a}-2}\), so the product space has Sobolev constant \(\max_{a}\frac{4}{N_{a}-2}=\frac{4}{\gamma N-2}\), where \(\gamma:=\min(\frac{N_{1}}{N},\frac{N_{2}}{N})\) ([49], Thms. 5.9, 5.16), and we can apply Theorem 5.5 of [49], yielding \[\mathbb{P}\left[\left|f\left(O\right)-\mathbb{E}_{O}f\left(O\right)\right|\geq \frac{\mu}{\sqrt{N}}r\right]\leq 2\exp\!\left(-\left(\gamma N-2\right)\frac{r^{2}}{8} \right)\!, \tag{154}\] for all \(r\geq 0\). Writing \(H:=\mathbb{E}_{O}\left[f\left(O\right)\right]\) and \(M:=z-A-OBO^{T}\), this implies \[\mathbb{P}\left[\det\left(M\right)\geq e^{NH+\sqrt{N}\mu r}\right]\leq 2e^{- \left(\gamma N-2\right)\frac{r^{2}}{8}}. \tag{155}\] Since \(\det\left(M\right)\leq e^{Nc_{+}}\), we can upper bound the expectation: \[\mathbb{E}_{O}\left[\det\left(M\right)\right]\leq\left(1-2e^{- \left(\gamma N-2\right)\frac{r^{2}}{8}}\right)e^{NH+\sqrt{N}\frac{\mu r}{2}r} \\ +2e^{-\left(\gamma N-2\right)\frac{r^{2}}{8}}e^{Nc_{+}}. \tag{156}\] Choosing \(r=\sqrt{8c_{+}/\gamma}\), we find that \(\frac{1}{N}\log\mathbb{E}_{O}\left[\det\left(M\right)\right]\) is less than or equal to \[\frac{1}{N}\log\left(\left(1-2e^{-\left(N-2\right)c^{+}/\gamma} \right)e^{NH+\sqrt{N}\frac{\mu r}{2}r}+2e^{2c_{+}\gamma}\right)\] \[\xrightarrow{N\rightarrow\infty}H=\frac{1}{N}\mathbb{E}_{O}\left[ \log\det\left(M\right)\right],\] which shows that the limiting annealed average is less than or equal to the limiting quenched average. We could obtain a lower bound via a similar argument, but we have directly via Jensen's inequality that the quenched average is less than or equal to the annealed average, \(\frac{1}{N}\mathbb{E}_{O}\left[\log\det\left(M\right)\right]\leq\frac{1}{N} \log\mathbb{E}_{O}\left[\det\left(M\right)\right]\), so in the limit they are equal: \[\lim_{N\rightarrow\infty}\frac{1}{N}\mathbb{E}_{O}\left[\log\det \left(M\right)\right]=\lim_{N\rightarrow\infty}\frac{1}{N}\log\mathbb{E}_{O} \left[\det\left(M\right)\right]. \tag{157}\] #### b.0.1 Lipschitz Bound To prove 153, note that the gradient of \(f\) (152) is \[\nabla_{O}f\left(O\right)=-2\frac{1}{N}M^{-1}OB, \tag{158}\] where, as above, \(M=z-\left(A+OBO^{T}\right)\). Thus, the Euclidean norm of the gradient is \[\left\|\nabla_{O}f\left(O\right)\right\| =2\frac{1}{N}\left\|M^{-1}OB\right\| \tag{159}\] \[=\frac{2}{N}\sqrt{\mathrm{Tr}[M^{-2}OB^{2}O^{T}]}. \tag{160}\] \(M^{-2}\) and \(OB^{2}O^{T}\) are positive definite Hermitian matrices, so \(\mathrm{Tr}\big{[}M^{-2}OB^{2}O^{T}\big{]}\leq N\left\|B^{2}\right\|_{op} \left\|M^{-2}\right\|_{op}\). From \(M\)'s definition we have \(\left\|M^{-2}\right\|_{op}\leq\left(z-\left(\left\|A\right\|_{op}+\left\|B\right\| _{op}\right)\right)^{-2}=e^{-2c_{-}}\), and so \[\left\|\nabla_{O}f\left(O\right)\right\|\leq\frac{2\left\|B\right\|_{op}}{e^{c _{-}}\sqrt{N}}. \tag{101}\] This shows that \(f\) changes by at most \(\frac{2\left\|B\right\|_{op}}{e^{c_{-}}\sqrt{N}}\) times the geodesic distance on the group: \(\left|f\left(O_{1}\right)-f\left(O_{2}\right)\right|\leq\frac{2\left\|B\right\| _{op}}{e^{c_{-}}\sqrt{N}}d\left(O_{1},O_{2}\right)_{\mathbb{SO}\left(N\right)}\). The geodesic distance is upper bounded by \(\pi/2\) times the Hilbert-Schmidt distance (i.e. the Frobenius norm; [49], p159), so \[\left|f\left(O_{1}\right)-f\left(O_{2}\right)\right|\leq\frac{\pi\left\|B \right\|_{op}}{e^{c_{-}}\sqrt{N}}\left\|O_{1}-O_{2}\right\|_{2}. 
\tag{102}\] ## Appendix C Rank-\(1\) Block Spherical HCIZ Integral In this section we introduce the "block spherical integral", which extends the HCIZ integral to the setting of Hermitianizations of rectangular matrices. We consider an \(N_{1}\times N_{2}\) matrix, \(X\). We write \(N=N_{1}+N_{2}\) throughout this appendix, and consider the limit of large \(N\) with fixed \(c=\nicefrac{{N_{1}}}{{N_{2}}}\). For notational ease we will introduce \[\beta_{a}:=\frac{N_{a}}{N}, \tag{103}\] for both \(a=1,2\). In the general-rank setting we write \[I^{X}(\mathbf{T}):=\mathbb{E}_{\tilde{\mathbf{O}}}\bigg{[}\text{exp}\bigg{(}\frac{N}{ 2}\text{Tr}\mathbf{T}\mathbf{\hat{O}}\mathbf{X}\mathbf{\hat{O}}\bigg{)}\bigg{]}, \tag{104}\] where \(\mathbf{\hat{O}}=\left[\begin{array}{cc}O_{1}&0\\ 0&O_{2}\end{array}\right]\) is a block-orthogonal matrix, i.e. both \(O_{a}\) are Haar distributed \(N_{a}\times N_{a}\) matrices, and \(\mathbf{T}\) is an arbitrary \(N\times N\) matrix where we've written \(N=N_{1}+N_{2}\) for notational ease. We here solve the rank-\(1\) case, which arises in Appendix B in the calculation of the subordination relation. In order to match the normalization there, we write \(T=\frac{1}{N}\mathbf{v}\mathbf{v}^{T}\), where the individual elements, \(v_{i}\), are \(O(1)\). We have \[I^{X}(\mathbf{T}):=\mathbb{E}_{\mathbf{\hat{O}}}\bigg{[}\text{exp}\bigg{(}\frac{1}{2} \mathbf{v}^{T}\mathbf{\hat{O}}\mathbf{X}\mathbf{\hat{O}}\mathbf{v}\bigg{)}\bigg{]}. \tag{105}\] We write \(\mathbf{v}=\left(\begin{array}{c}\mathbf{v_{1}}\\ \mathbf{v_{2}}\end{array}\right)\) in block form, and observe that the block-orthogonal \(\mathbf{\tilde{O}}\) constrains the within-block norms of \(\mathbf{v}\). Therefore, we define \[\mathbf{w}_{a}:=O_{a}\mathbf{v}_{a}, \tag{106}\] for \(a=1,2\), and perform integrals over arbitrary \(\mathbf{w}=\left(\begin{array}{c}\mathbf{w_{1}}\\ \mathbf{w_{2}}\end{array}\right)\) while enforcing norm-constraints within blocks. We define the \(2\)-component vector \(\mathbf{t}\): \[\hat{t_{a}}=\frac{1}{N_{a}}\left\|\mathbf{v}_{a}\right\|^{2} \tag{107}\] Then we can write: \[I^{X}(\mathbf{T})=\frac{Z(\mathbf{t},X)}{Z(\mathbf{t},0)}, \tag{108}\] where we have defined \[Z(\mathbf{t},X):= \int\frac{\text{d}\mathbf{w}}{\left(2\pi\right)^{\nicefrac{{N}}{{2} }}}\text{exp}\bigg{(}\frac{1}{2}\mathbf{w}^{T}\mathbf{X}\mathbf{w}\bigg{)}\] \[\times\prod_{a=1,2}\delta\Big{(}\|\mathbf{w}_{a}\|^{2}-N_{a}t_{a} \Big{)}. \tag{109}\] We calculate \(Z(\mathbf{t},X)\) by using the Fourier representation of the delta function, over the imaginary axis: \(\delta(x)=\int_{-i\infty}^{+i\infty}\frac{\text{exp}\left(-\nicefrac{{\pi}}{{ 2}}\right)}{4\pi i}\text{d}q\). This gives \[Z(\mathbf{t},X):= \int\Biggl{(}\prod_{a=1,2}\frac{\text{d}q_{a}}{4\pi i}e^{\frac{1} {2}\left(N_{a}q_{a}t_{a}\right)}\Biggr{)} \tag{110}\] \[\int\frac{\text{d}\mathbf{w}}{\left(2\pi\right)^{\nicefrac{{N}}{{2} }}}e^{-\frac{1}{2}\mathbf{w}^{T}\left(\mathbf{\tilde{Q}}-\mathbf{X}\right)\mathbf{w}}\] where we have introduced the \(N\times N\) diagonal matrix, \(\mathbf{\tilde{Q}}\), which has \(q_{1}\) on its first \(N_{1}\) diagonal elements, and \(q_{2}\) on the remaining \(N_{2}\) elements. The Gaussian integral over \(\mathbf{w}\) now yields \(\det\bigl{(}\mathbf{\tilde{Q}}-\mathbf{X}\bigr{)}^{-\nicefrac{{1}}{{2}}}\). 
Writing \(N_{2}\) singular values of \(X\) as \(x_{m}\) (which includes \(N_{2}-N_{1}\) zeros in the case \(N_{2}>N_{1}\)), we can write: \[\det\bigl{(}\mathbf{\tilde{Q}}-\mathbf{X}\bigr{)}=q_{1}^{\left(N_{1}-N_{2}\right)} \prod_{m=1}^{N_{2}}\left(q_{1}q_{2}-x_{m}^{2}\right) \tag{111}\] Thus, at this stage we have \[Z(\mathbf{t},X):=\int_{-i\infty}^{i\infty}\frac{\text{d}q_{1}\text{d}q_{2}}{\left( 4\pi i\right)^{2}}\exp\biggl{[}\frac{N}{2}F^{X}(\mathbf{t},\mathbf{q})\biggr{]} \tag{112}\] with \[F^{X}(\mathbf{t},\mathbf{q})= \beta_{1}q_{1}t_{1}+\beta_{2}q_{2}t_{2}+\left(\beta_{2}-\beta_{1} \right)\log q_{1}\] \[-\frac{1}{N}\sum_{m=1}^{N_{2}}\log\bigl{(}q_{1}q_{2}-x_{m}^{2} \bigr{)} \tag{113}\] To find the saddle-point, we take partial derivatives with respect to \(q_{1}\) and \(q_{2}\), and find: \[t_{1} =q_{2}^{*}g_{XX^{T}}(q_{1}^{*}q_{2}^{*}) \tag{114}\] \[t_{2} =q_{1}^{*}g_{X^{T}X}(q_{1}^{*}q_{2}^{*}), \tag{115}\] For notational clarity, in this section we define the functional inverse of the block Stieltjes transform, \(\mathbf{\mathcal{B}}^{X}(\mathbf{g}):=\big{(}\mathbf{g}^{X}\big{)}^{-1}(\mathbf{g})\), which is a \(\mathbb{C}^{2}\to\mathbb{C}^{2}\) map satisfying \[\mathbf{\mathcal{B}}^{X}\big{(}\mathbf{g}^{X}(\mathbf{z})\big{)}=\mathbf{z}. \tag{116}\] Finally, we find the asymptotic value of \(I^{X}(\mathbf{v})\) (113) by also solving the saddle-point for \(Z(\mathbf{t},0)\). For \(X=0\) we have \(g_{XX^{T}\!\!\!z}=z^{-1}\), so that the saddle-point condition for \(Z(\mathbf{t},0)\) is simply \(q_{a}^{*}=t_{a}^{-1}\). This yields \(F^{0}(\mathbf{t},\mathbf{q}^{*})=\sum_{a}\beta_{a}(1+\log t_{a})\) We therefore arrive at our asymptotic approximation for the rank-1 block spherical integral: \[I^{X}(\mathbf{T})=\exp\frac{N}{2}H^{X}\left(\mathbf{t}\right), \tag{117}\] where we have: \[H^{X}(\mathbf{t}) =\sum_{a=1,2}\beta_{a}\big{(}t_{a}\mathcal{B}_{a}^{X}(\mathbf{t})- \log t_{a}-1\big{)}\] \[\quad-\frac{1}{N}\log\det\!\left(\mathbf{\bar{\mathcal{B}}}^{X}(\bm {t})-\mathbf{X}\right) \tag{118}\] where we have written \(\mathbf{\bar{\mathcal{B}}}^{X}(\mathbf{t})\) to indicate the \(N\)-by-\(N\) diagonal matrix with \(\mathcal{B}_{1}^{X}(\mathbf{t})\) along the top \(N_{1}\) diagonal elements, and \(\mathcal{B}_{2}^{X}(\mathbf{t})\) along the remaining \(N_{2}\). We observe an appealing relationship between the rank-1 block spherical integral and the block \(\mathcal{R}\)-transform. By the saddle-point conditions, the partial derivatives of \(F^{X}(\mathbf{t},\mathbf{q}^{*})\) with respect to \(q_{a}\) are zero. Therefore the gradient of \(H^{X}\) with respect to \(\mathbf{t}\) treats \(\mathbf{\mathcal{B}}^{X}\) as constant, and we have simply \[\frac{\mathrm{d}H^{X}(\mathbf{t})}{\mathrm{d}t_{a}}=\beta_{a}\bigg{(}\mathcal{B}_ {1}^{X}(\mathbf{t})-\frac{1}{t_{1}}\bigg{)}=\beta_{a}\mathcal{R}_{a}^{X}(\mathbf{t}). \tag{119}\] We therefore write \(H^{X}(\mathbf{t})\) as a contour integral in \(\mathbb{C}^{2}\): \[H^{X}(\mathbf{t})=\int_{0}^{\mathbf{t}}\mathrm{d}\mathbf{w}\cdot\Big{(}\mathbf{\beta}\odot\bm {\mathcal{R}}^{X}(\mathbf{t})\Big{)}, \tag{120}\] where \(\odot\) is the element-wise product and \(\mathbf{\beta}=\left(\begin{array}{c}\beta_{1}\\ \beta_{2}\end{array}\right)=\frac{1}{N}\left(\begin{array}{c}N_{1}\\ N_{2}\end{array}\right)=\frac{1}{1+c}\left(\begin{array}{c}c\\ 1\end{array}\right)\). 
## Appendix D The Block \(\mathcal{R}\)-Transform of Gaussian Noise In this section we calculate the block \(\mathcal{R}\)-transform for the \(N_{1}\times N_{2}\) (with \(c=\nicefrac{{N_{1}}}{{N_{2}}}\)) matrix \(X\) with i.i.d. Gaussian elements: \(X_{ij}\sim\mathcal{N}\!\left(0,\frac{\sigma^{2}}{N_{2}}\right)\). For notational clarity, in this section we write the functional inverse of the block Stieltjes transform as, which is a \(\mathbb{C}^{2}\to\mathbb{C}^{2}\) map satisfying \[\mathbf{\mathcal{B}}^{X}\big{(}\mathbf{g}^{X}(\mathbf{z})\big{)}=\mathbf{z} \tag{121}\] Then the block \(\mathcal{R}\)-transform is defined as \[\mathbf{\mathcal{R}}^{X}(\mathbf{g})=\mathbf{\mathcal{B}}^{X}(\mathbf{g})+\frac{1}{\mathbf{g}} \tag{122}\] where the multiplicative inverse, \(\nicefrac{{1}}{{g}}\), is element-wise. We note that as a result of this definition, there is a relation between the elements the block \(\mathcal{R}\)-transform and the elements of the block Stieltjes transform, for any rectangular matrix \(A\) with aspect ratio \(c\): \[\mathcal{R}_{2}^{A}(\mathbf{g})g_{2}=c\mathcal{R}_{1}^{A}(\mathbf{g})g_{1} \tag{123}\] To find \(\mathbf{\mathcal{B}}^{X}\), we observe that in general the product of the two elements of \(\mathbf{g}^{X}(\mathbf{z})\) is a scalar function that depends only on the product of the elements of \(\mathbf{z}\). That is, \(g_{1}^{X}(\mathbf{z})g_{2}^{X}(\mathbf{z})=z_{1}z_{2}g_{XX^{T}\!\!\!x}(z_{1}z_{2})g_{X^ {T}\!\!\!x}(z_{1}z_{2})\). Therefore, we define \[\Lambda_{X}(z):=zg_{XX^{T}\!\!\!x}(z)g_{X^{T}\!\!\!x}(z) \tag{124}\] We can find \(\mathbf{\mathcal{B}}^{X}(\mathbf{g})\) by first inverting \(\Lambda_{X}(z)\), and then \[\mathcal{B}_{1}^{X}(\mathbf{g})\mathcal{B}_{2}^{X}(\mathbf{g})=\Lambda_{X}^{-1}(g_{1} g_{2}) \tag{125}\] For the Gaussian matrix, \(X\), we have \[g_{XX^{T}\!\!\!x}(z)=\frac{z+\sigma^{2}(1+c)-\sqrt{\big{(}z-x_{+}^{2}\big{)} \big{(}z-x_{-}^{2}\big{)}}}{2z\sigma^{2}} \tag{126}\] with \[x_{\pm}=\sigma\big{(}1\pm\sqrt{c}\big{)} \tag{127}\] From there we find \[\Lambda_{X}(z)=\frac{z-\sigma^{2}(1+c)+\sqrt{\big{(}z-x_{+}^{2}\big{)}\big{(}z- x_{-}^{2}\big{)}}}{2c\sigma^{4}} \tag{128}\] Some further algebra yields \[\Lambda_{X}^{-1}(g)=c\sigma^{4}g+\frac{1}{g}+\sigma^{2}(1+c) \tag{129}\] Thus, we have \[\mathcal{B}_{1}^{X}(\mathbf{g})\mathcal{B}_{2}^{X}(\mathbf{g})=c\sigma^{4}g_{1}g_{2}+ \frac{1}{g_{1}g_{2}}+\sigma^{2}(1+c) \tag{104}\] In addition, from the general relationship between the two elements of the block Stieltjes transform, we have: \[\mathcal{B}_{2}^{X}g_{2}=c\mathcal{B}_{1}^{X}g_{1}+1-c \tag{105}\] We next multiply this by both \(\mathcal{B}_{1}^{X}\) and divide by \(g_{2}\) and equate to (D), which yields a quadratic equation for \(\mathcal{B}_{1}^{X}\). 
We choose the root that yields \(\mathcal{B}_{1}^{X}\equiv\nicefrac{{1}}{{g_{1}}}\) in the small \(\mathcal{B}g\) limit, and then use (D) to arrive at \[\mathcal{B}_{1}^{X}(\mathbf{g}) =\frac{1}{g_{1}}+\sigma^{2}g_{2} \tag{106}\] \[\mathcal{B}_{2}^{X}(\mathbf{g}) =\frac{1}{g_{2}}+c\sigma^{2}g_{1} \tag{107}\] Finally this yields for the block R-transform: \[\mathbf{\mathcal{R}}^{X}(\mathbf{g})=\sigma^{2}\left(\begin{array}{c}g_{2}\\ cg_{1}\end{array}\right) \tag{108}\] ## Appendix E Block Stieltjes Transform and Singular Value Density of the Extensive Spiked Model We report the quartic equation from Section IV.3 for the first element of the block Stieltjes transform of the \(N_{1}\times N_{2}\) extensive spiked model, \(R=sU_{1}U_{2}+X\), with rank-ratio \(b=\nicefrac{{K}}{{N_{1}}}\) and aspect ratio \(c=\nicefrac{{N_{1}}}{{N_{2}}}\). For notational simplicity, in this section we write \(g:=g_{1}^{R}(z)\). Multiplying out (44) yields a quartic: \[Ag^{4}+Bg^{3}+Cg^{2}+Dg+E=0 \tag{109}\] with \[A =-c^{3} \tag{110}\] \[B =c^{2}\bigg{(}3z-2\frac{1-c}{z}\bigg{)}\] (111) \[C =c\bigg{(}-3z^{2}-\bigg{(}\frac{1-c}{z}\bigg{)}^{2}-5c+4+s^{2} \bigg{)}\] (112) \[D =z^{3}+\big{(}4c-2-s^{2}\big{)}z+\big{(}1-2c+s^{2}\big{)}\frac{1- c}{z}\] (113) \[E =-z^{2}-c+1+(1-b)s^{2} \tag{114}\] In order to obtain the singular value density \(\rho_{1}^{R}(\hat{s})\) numerically, we use the roots method of the NumPy polynomial class in Python (3.9.7), to solve with \(z=\hat{s}-10^{-7}i\), and select the root with the largest imaginary part. To our knowledge there is no guarantee that the root with largest imaginary part is the correct root, but we find this works in practice. ## Appendix F Kernel Estimates of Empirical Spectral Densities In order to employ our optimal estimators in empirical settings (Section V), we need to be able to estimate the block Stieltjes transform, \(g_{1}^{R}(\hat{s})\), from data. Developing optimal algorithms to achieve this is left for future work, but here we use a technique inspired by [36] and [35] to demonstrate proof of principle. We use this kernel on the extensive spike model in Figure 8C. Given \(N_{1}\) empirical singular values, \(\{\hat{s}_{m}\}\) (assuming \(N_{1}<N_{2}\) without loss of generality), we define a smoothed block Stieltjes transform: \[\tilde{g}_{1}^{R}(z):=\frac{1}{N_{1}}\sum_{m=1}^{N_{1}}\frac{z}{z^{2}-\hat{s}_ {m}^{2}-i\eta_{m}} \tag{115}\] where \(\eta_{m}\) is a local bandwidth term given by: \[\eta_{m}=\frac{\hat{s}_{m}}{N^{\nicefrac{{1}}{{2}}}}. \tag{116}\]
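The two numerical recipes described above, the quartic root selection of Appendix E and the kernel-smoothed transform of Appendix F, are short enough to sketch directly. The coefficients follow the quartic stated above; the parameter values, the evaluation grid, and the use of the usual \(1/\pi\) Stieltjes-inversion factor for the plotted profile are illustrative assumptions (the exact normalization convention for the block transform is set in the main text).

```python
import numpy as np

def g1_spiked(shat, s, b, c, eps=1e-7):
    """Solve the quartic of Appendix E for g = g_1^R(z) at z = shat - i*eps and
    return the root with the largest imaginary part, as described in the text."""
    z = shat - 1j * eps
    A = -c ** 3
    B = c ** 2 * (3 * z - 2 * (1 - c) / z)
    C = c * (-3 * z ** 2 - ((1 - c) / z) ** 2 - 5 * c + 4 + s ** 2)
    D = z ** 3 + (4 * c - 2 - s ** 2) * z + (1 - 2 * c + s ** 2) * (1 - c) / z
    E = -z ** 2 - c + 1 + (1 - b) * s ** 2
    roots = np.polynomial.Polynomial([E, D, C, B, A]).roots()  # lowest degree first
    return roots[np.argmax(roots.imag)]

def smoothed_g1(z, svals, N):
    """Kernel estimate of g_1^R from empirical singular values, Appendix F."""
    eta = svals / np.sqrt(N)                 # local bandwidth of Eq. (116)
    return np.mean(z / (z ** 2 - svals ** 2 - 1j * eta))

# singular-value density profile of the extensive spiked model (up to the overall
# normalization convention); s = spike strength, b = K/N1, c = N1/N2
s, b, c = 2.0, 0.2, 0.5
grid = np.linspace(0.05, 4.0, 400)
density = np.array([g1_spiked(x, s, b, c).imag for x in grid]) / np.pi
print(grid[np.argmax(density)])              # location of the bulk maximum
```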
2308.07183
On Thompson Problem
In 1987, the second author of this paper reported his conjecture, that all finite simple groups $S$ can be characterized uniformly using the order of $S$ and the set of element orders in $S$, to Prof. J. G. Thompson. In their communications, Thompson posed his problem on judging the solvability of a finite group $G$. In this paper we give a positive answer to Thompson's problem if the prime graph of $G$ is not connected.
Rulin Shen, Wujie Shi, Feng Tang
2023-08-14T14:42:28Z
http://arxiv.org/abs/2308.07183v2
# On Thompson problem ###### Abstract In 1987, the second author of this paper reported his conjecture, that all finite simple groups \(S\) can be characterized uniformly using the order of \(S\) and the set of element orders in \(S\), to Prof. J. G. Thompson. In their communications, Thompson posed his problem on judging the solvability of a finite group \(G\). In this paper we give a positive answer to Thompson's problem if the prime graph of \(G\) is not connected. Key words and phrases: Thompson problem, Same order elements, Disconnected prime graph Project supported by the NNSF of China (Grant No.10571128 and 10871032) and the foundation of Educational Department of Hubei Province in China (Grant No. Q20092905 and Q20111901) ## 1. Introduction Let \(G\) be a finite group, let \(\pi(G)\) be the set of all prime divisors of \(|G|\), and let \(\pi_{e}(G)\) be the set of orders of elements of \(G\). We have \(\frac{m}{(n,m)}\mid|N|\), hence \(\frac{m}{(n,m)}=1\), and it follows that \(m|n\). Therefore, \(m=n\). On the other hand, if \(G\) has an element \(y\) with \(o(y)\neq m\) such that \(o(yN)=o(xN)\), then \(o(xN)|o(y)\), namely \(m|o(y)\), and we may assume \(o(y)=mq\) where \(q\neq 1\) and there exists a prime divisor \(r\) of \(q\) such that \(r\in\pi_{j}(G)\) with \(i\neq j\) (otherwise \(q\) is a divisor of some element of \(\mu_{i}(G)\), so \(o(yN)=o(y)=mq\) by the above argument, a contradiction). This shows that \(\pi_{i}(G)\) and \(\pi_{j}(G)\) are connected in the prime graph of \(G\), a contradiction. Hence, \(o(y)=m\) and \(|M_{\overline{G}}(m)|\leqslant|M_{F}(m)|=|M_{G}(m)|\). Moreover, all elements of the coset \(xN\) have order \(m\), so \(|M_{G}(m)|=|M_{\overline{G}}(m)||N|\). For the rest of the proof, since \(\overline{G}\) is an almost simple group, \(\overline{G}\) is an extension of \(S\) by \(\overline{G}/S\). We choose a set of coset representatives \(\{y_{1},y_{2},\cdots,y_{l}\}\) of \(\overline{G}/S\), where \(y_{i}\in\overline{G}\). Obviously, all elements of \(\overline{G}\) are of the form \(y_{i}s\) where \(s\in S\) and \(1\leqslant i\leqslant l\). Since \(o(y_{i}S)=o(y_{i}sS)\), we have \(o(y_{i}S)|o(y_{i}s)\). If \(y_{i}S\neq S\), then by the above condition that \(\overline{G}/S\) is a \(\pi_{1}(G)\)-subgroup, we have \(\pi(o(y_{i}S))\subseteq\pi_{1}(\overline{G})\), so there is a prime divisor of \(o(y_{i}s)\) which belongs to \(\pi_{1}(G)\). We assume \(\pi(o(s))\subseteq\pi_{i}(\overline{G})\) with \(i\geq 2\); then any element \(x\) of \(\overline{G}-S\) has the property that \(o(x)\) is not a divisor of \(o(s)\), since \(\pi_{1}(\overline{G})\) and \(\pi_{i}(\overline{G})\) are not connected in the prime graph of \(\overline{G}\). Therefore, \(|M_{\overline{G}}(m)|=|M_{S}(m)|\). In order to complete the proof of Theorem 1.3, we need to introduce some useful results about Frobenius groups and 2-Frobenius groups. **Lemma 2.3** (see [10] and [11]).: _Suppose that \(F\) is a Frobenius group with kernel \(K\) and complement \(H\), then the following statements hold._ 1. \(K\) _is nilpotent and the Sylow \(p\)-subgroup of \(K\) is cyclic where \(p\) is an odd prime number._ 2. _If \(2\in\pi(H)\), then \(K\) is abelian. 
Moreover, if_ \(H\) _is solvable then the Sylow_ \(2\)_-subgroup of_ \(H\) _is either a cyclic group or a (generalized) quaternion group_ \(Q_{2}\)_- and if_ \(H\) _is non-solvable then there exists a subgroup_ \(H_{0}\) _with_ \(|H\cdot H_{0}|\leqslant 2\) _such that_ \(H_{0}\cong Z\times\mathrm{SL}_{2}(5)\)_, where_ \(Z\) _has properties that all Sylow subgroups of_ \(Z\) _are cyclic and_ \((|Z|,30)=1\)_._ **Lemma 2.4** (see [11]).: _Suppose that \(E=ABC\) is a \(2\)-Frobenius group, where \(A\) and \(AB\) are normal subgroups of \(E\), \(AB\) and \(BC\) are Frobenius groups with kernel \(A\), \(B\) and complement \(B\), \(C\), respectively. Then the following statements hold._ 1. \(s(E)=t(E)=2\)_,_ \(\pi_{1}(E)=\pi(A)\cup\pi(C)\) _and_ \(\pi_{2}(E)=\pi(B)\)_._ 2. \(E\) _is solvable,_ \(B\) _is a Hall cyclic subgroup of_ \(E\) _with odd order and_ \(C\) _is a cyclic group._ **Lemma 2.5**.: _Let \(G\) be a finite group, \(E\) a \(2\)-Frobenius group and \(Ord(G)=Ord(E)\). If there is a nonabelian simple group \(S\) such that \(S\leqslant\overline{G}=G/N\leqslant\mathrm{Aut}(S)\), where \(N\) is the maximal normal soluble subgroup of \(G\), \(N\) and \(\overline{G}/N\) are \(\pi_{1}(G)\)-groups, then \(|N|\) divides \(|A|\) and \(\pi(\frac{|A|}{|N|})\subseteq\pi(|C|)\)._ Proof.: Let \(m>1\) be a divisor of any element of \(\mu_{i}(E)\) where \(i\geqslant 2\), from Theorem 2.2(2)(b) we have \(|M_{E}(m)|=|M_{\overline{G}}(m)|\cdot|N|\). It is easy to see that \(\mu_{2}(E)=\{|B|\}\) and \(|M_{E}(|B|)|=|A|\phi(|B|)\). Since \(Ord(G)=Ord(E)\), we have \[|M_{G}(|B|)|=|M_{E}(|B|)|=|M_{\overline{G}}(|B|)||N|=v_{|B|}(\overline{G})\phi (|B|)|N|.\] Therefore \(|A|=v_{|B|}(\overline{G})\cdot|N|\), this implies that \(|N|\) divides \(A\). If there is a prime number \(p\) with \(p\in\pi(\frac{|A|}{|N|})\) such that \(p\not\in\pi(|C|)\), then the number of \(p\)-elements of \(E\) is \(|A|_{p}\). Since \(Ord(G)=Ord(E)\), the number of \(p\)-elements of \(G\) is \(|G|_{p}\) and hence the Sylow \(p\)-subgroup \(P\) of \(G\) is normal. Note that \(N\) is the maximal normal soluble subgroup of \(G\), so \(P\leqslant N\), thus \((p,\frac{|G|}{|N|})=1\). As \(|G|=|A||B||C|\), we can obtain \((p,\frac{|A|}{|N|})=1\), which contradicts our hypothesis. **Lemma 2.6**.: _If \(m\mid n\) and \(\pi(\frac{n}{m})\subseteq\pi(m)\), then \(\pi(m)=\pi(n)\)._ **Lemma 2.7** (Zsigmondy Theorem see [13]).: _Let \(q>1\) be a natural number, and let \(p\) be a primitive prime divisor of \(q^{n}-1\), namely \(p\mid q^{n}-1\) but for any \(1\leqslant i\leqslant n-1\), \(p\) does not divide \(q^{i}-1\). If \(n>1\) and \(n,q\) do not satisfy the following two cases, then \(q^{n}-1\) has a primitive prime divisor._ 1. \(n=2\) _and_ \(q=2^{k}-1\)_, where_ \(k\) _is a natural number._ 2. \(n=6,q=2\)_._ Now we will use Theorem 2.1 and the lemmas introduced above to prove our main theorem. In fact, we prove it with the help of the classification theorem for finite simple groups. First of all, let us consider the case of Frobenius group \(F\), which has kernel \(K\) and complement \(H\). Recall that if \(G\) is a finite group with \(Ord(G)=Ord(F)\) then \(|G|=|F|\), \(\pi_{e}(G)=\pi_{e}(F)\) and \(v_{n}(G)=v_{n}(F)\) where \(n\in\pi_{e}(F)\). **Claim 3.1**.: _Let \(F\) be a frobenius group and \(G\) a finite group with \(Ord(G)=Ord(F)\), then \(G\) is a Frobenius group. Moreover, if \(F\) is solvable, so is \(G\)._ Proof.: We will use Gruenberg-Kegel theorem to finish the proof of Claim 3.1. **Case 1.**\(G\) is a \(2\)-Frobenius group. 
Let \(G=ABC\), where \(A\), \(AB\) are normal subgroups of \(G\); \(AB\), \(BC\) are Frobenius group with kernels \(A\), \(B\) and complements \(B\), \(C\) respectively. By Lemmas 2.3 and 2.4, we have \(\pi(B)=\pi_{2}(F)\), that is, \(\pi(B)=\pi(K)\) or \(\pi(H)\). Obviously, \(B\) is a cyclic group of odd order by Lemma 2.4. 1. We first assume that \(\pi(B)=\pi(K)\). Since \(K\) is a Hall subgroup of \(F\) and \(|B|=|K|\), there is an element of \(F\) with order \(|B|\), and then \(K\) is cyclic. Note that \(K\) is normal in \(F\), so \(v_{|B|}(F)=1\). On the other hand, \(AB\) is a Frobenius group with complement \(B\), this implies that \(v_{|B|}(G)\ni v_{|B|}(AB)=|A|>1\), hence \(v_{|B|}(G)>v_{|B|}(F)\), a contradiction. 2. We next assume that \(\pi(B)=\pi(H)\). Similarly as above, we can obtain that \(H\) is a cyclic group, hence \(v_{|B|}(F)=|K|\). Since \(A\) and \(AB\) are normal subgroups of \(G\), \(G=ABC=C(AB)=C(BA)=CBA\), therefore any element \(g\in G\) can be written as \(cba\), where \(a\in A,b\in B\) and \(e\in C\), and hence \(B^{g}=B^{cba}=((B^{c})^{b})^{a}=B^{a}\), this implies that the number of conjugate subgroups of \(B\) in \(G\) is equal to the number of \(B\) in \(A\). Note that \(G\) is solvable, so all subgroups of order \(|B|\) are conjugate, and then \(v_{|B|}(G)=v_{|B|}(AB)=|A|\). But \(|H||K|=|F|=|G|=|A||B||C|\) and \(|B|=|H|\), thus \(|A|=|K|/|C|\). Therefore, \[v_{|B|}(F)=|K|<|K|/|C|=|A|=v_{|B|}(G),\] we get a contradiction. **Case 2.** There exists a non-abelian simple group \(S\) such that \(S\leqslant\overline{G}=G/N\leqslant\mathrm{Aut}(S)\). If \(2\in\pi(H)\), then \(K\) is abelian. Let \(r\in\pi(K)\), we have \(|K|_{r}=|G|_{r}=|S|_{r}\) with the help of Theorem 2.2. As \(K\) is a normal abelian subgroup of \(F\), the number of \(r\)-elements of \(K\) equals \(|K|_{r}=|S|_{r}\). But the Sylow \(r\)-subgroup of \(S\) is nontrivial, so the number of \(r\)-element in \(S\) is \(|S|_{r}\), hence \(S\) has only one Sylow \(r\)-subgroup, that is, \(S\) is not simple, it is impossible. Next if \(2\in\pi(K)\), then the Sylow \(p\)-subgroup of \(K\) is normal in \(F\) where \(p\in\pi(K)\). From \(Ord(F)=Ord(G)\) we can see that the Sylow \(p\)-subgroup of \(G\) is also normal in \(G\). \(N=O_{p}(G)\), so \(p=2\) and \(N\) is the normal Sylow \(2\)-subgroup of \(G\). Thus \(|G/N|\) is odd. Using the well-known Feit-Thompson Theorem, \(G/N\) is soluble, this contradicts our assumption that \(G/N\) is non-solvable. We conclude that \(G\) is a Frobenius group by Theorem 2.2. We now assume that \(F\) is non-solvable then there exists a subgroup \(H_{0}\) with \(|H:H_{0}|\leqslant 2\) such that \(H_{0}\cong Z\times\mathrm{SL}_{2}(5)\), all Sylow subgroups of \(Z\) are cyclic and \((|Z|,30)=1\) by Lemma 2.3. This shows that \(|H|=2^{3}\cdot 3\cdot 5\cdot|Z|\) or \(2^{4}\cdot 3\cdot 5\cdot|Z|\). It is not difficult to see that \(H\) has no element of order \(15\) since \(15\not\in\pi_{e}(\mathrm{SL}_{2}(5))\). Now let \(V\) be a complement of \(G\), then \(\pi_{e}(H)=\pi_{e}(V)\) and \(|H|=|V|\) follow from \(Ord(F)=Ord(G)\). If \(G\) is solvable, then \(V\) has a Hall subgroup of order \(15\), this follows that \(15\in\pi_{e}(V)\), a contradiction. Therefore \(G\) is also non-solvable. We now consider \(2\)-Frobenius group. Suppose that \(E\) is a \(2\)-Frobenius group, that is, \(E=ABC\), where \(A\), \(AB\) are normal subgroups of \(E\); \(AB\), \(BC\) are Frobenius groups with kernels \(A\), \(B\) and complements \(B\), \(C\) respectively. 
At the following claim we say \(p\) is a prime number. **Claim 3.2**.: _Let \(E\) be a \(2\)-Frobenius group and \(G\) a finite group with \(Ord(G)=Ord(E)\), then \(G\) is a \(2\)-Frobenius group._ Proof.: Let \(E=ABC\) be a \(2\)-Frobenius group, as assumed. In the light of Claim 3.1 we assume that there is a nonabelian simple group \(S\) such that \(S\leqslant\overline{G}=G/N\leqslant\mathrm{Aut}(S)\). Since \(B\) is a cyclic Hall subgroup of \(E\), all Hall subgroups of \(E\) with order \(|B|\) are conjugate. Set \(1\neq e=bca\in E\), where \(a\in A,b\in B,c\in C\) we have \(B^{e}=B^{bca}=B^{a}\). Since \(AB\) is a Frobenius group with kernel \(A\) and complement \(B\), so the intersection of any two Hall subgroups of \(E\) of order \(|B|\) is trivial and \(v_{|B|}(E)=|A|\). Obviously, \(v_{|B|}(G/N)=\frac{|A|}{|N|}\) and \(v_{|B|}(S)=\frac{|A|}{|N|}\). If \(S=L_{2}(r^{f})\) where \(r\) is an odd prime number and \(\pi(B)=\{r\}\), then \(|B|=rf\). Therefore, there exists an element of order \(r^{f}\) in \(S\). Observe that the Sylow \(r\)-subgroup of \(L_{2}(r^{f})\) is isomorphic to \(Z^{f}_{f}\), so \(f=1\). Applying Theorem 8.2(b) and (c) of Chapter II in [14], we have the number of Sylow \(r\)-subgroups of \(L_{2}(r)\) is \(r+1\). It implies that \(\frac{|A|}{|N|}=r+1\). Referring to Table I of [15] we have \(G/N=S\), and so \(|A||B||C|=|S||N|\), therefore \(|C|=\frac{r(r^{2}-1)}{2r(r+1)}=\frac{r-1}{2}\). Using Lemma 2.5, we have \(\pi(r+1)\subseteq\pi(\frac{r-1}{2})\). From \((\frac{r+1}{2},\frac{r-1}{2})=1\) we deduce that \(r=3\), i.e., \(S=L_{2}(3)\), this contradicts the fact that \(S\) is simple. If \(S=L_{3}(4)\) and \(\pi(B)=\{3\}\), then \(S\) does not have an element of order \(9\), which is a contradiction since \(E\) has a cyclic subgroup of order \(9\). When dealing with the remaining cases of simple groups, we will use Lemma 1.1 of [10]. If \(S\) is not \(L_{3}(4)\) with \(\pi(B)=\{3\}\) or \(L_{2}(r^{f})\) with \(\pi(B)=\{r\}\), then there exists a cyclic Hall \(\pi_{j}\)-subgroup \(U\) for each connected component \(\pi_{j}\) of \(S\) where \(j>1\). Obviously, \(|U|=|B|\). Since \(v_{d}(S)=v_{d}(E)/|N|=|A|/|N|\) where \(1<d\in\pi_{e}(B)\), the intersection of any two distinct Hall \(\pi_{j}(S)\)-subgroups is trivial in our situation, and hence they are conjugate. From the fact that Sylow subgroups are conjugate we can obtain \(v_{|B|}(S)=|S:N_{S}(U)|\), that is \[\frac{|A|}{|N|}=|S:N_{S}(U)|. \tag{3.1}\] Let \(\theta=|\overline{G}/S|\). By applying \(|G|=|E|\) and (3.1), we have \[\frac{|A|}{|N|}=\frac{|S|}{|N_{S}(U)/U||U|} \tag{3.2}\] and \[|C|=\theta|N_{S}(U)/U|. \tag{3.3}\] In the light of Lemma 2.5, we can give an additional constraint as follows: \[\pi(\frac{|A|}{|N|})\subseteq\pi(|C|). \tag{3.4}\] Assume \(S\) is an alternating group \(A_{n}\) where \(n\geqslant 5\). If the prime graph of \(S\) has \(2\) components, then \(n=p\), \(p+1\), \(p+2\), and \(n\), \(n-2\) are not both prime numbers. Therefore, \(|U|=p\) and \(|N_{S}(U)/U|=\frac{p-1}{2}\). With the help of the automorphism structure of alternating groups we have \(\theta=1\), \(2\) or \(4\). From (3.4) we deduce that \(\pi(\frac{n!}{p(p-1)})\subseteq\pi(p-1)\). By Lemma 2.6, we have \(\pi((p-1)!)=\pi(p-1)\), so \(p=2\) or \(3\), contradicting \(n\geqslant 5\). If the prime graph of \(S\) has \(3\) components, then \(n=p\) and \(p-2\) is also a prime number. It follows that \(|U|=p\) or \(p-2\). We first assume that \(|U|=p\), it is impossible as in the case when the prime graph has \(2\) components. 
We next assume that \(|U|=p-2\), we have \(|N_{S}(U)/U|=p-3\). From equation (3.4), we can obtain \[\pi(\frac{p!}{2(p-2)(p-3)})\subseteq\pi(p-3).\] Using Lemma 2.6 again \(\pi(\frac{p!}{2(p-2)})=\pi(p-3)\), which is impossible. We now consider \(S\) is a simple group of Lie type. It is clear that \(U\) is a maximal torus of \(S\), and thus \(N_{S}(U)/U\) is isomorphic to a subgroup of the Weyl group \(W\) of \(S\). Since \(|W|\) divides \(|S|\), combining equations (3.2)-(3.4), we have \[\pi(\frac{|S|}{|U||W|})\subseteq\pi(\theta_{0}|W|), \tag{3.5}\] where \(\theta_{0}\) is a multiple of \(\theta\). Note that \(S\) has more than one prime graph components, then by [3] and [7], we can give the following Tables 1-3, where \(|W|\) and the values \(\theta_{0}\) are given in Section 3.6 of [16] and listed in Tables 1-4 of [15], respectively. Observing Tables 1-3 and the equation (3.5) we deduce that \(\pi(\frac{|S|}{|U||W|})\subseteq\pi(\theta_{0}|W|)\), so that if \(S\) is not isomorphic to \(A_{1}(q)\) with \(\theta_{0}=r\), which is listed in Table 2, then \(\pi(\theta_{0})\subseteq\pi(|W|)\), and hence \(\pi(\frac{|S|}{|U||W|})\subseteq\pi(|W|)\). In the light of Lemma 2.6, we have \[\pi(\frac{|S|}{|U|})=\pi(|W|). \tag{3.6}\] Note that \(\pi(\frac{|S|}{|U|})\) is the set of the characteristic and some primitive prime divisors of simple group \(S\) of Lie type. By considering the existence of primitive prime divisors as stated in Lemma 2.7, we can check equation (3.6) for Tables 1-3 one by one to get that \(S\) is isomorphic to \(A_{2}(3)\), \({}^{2}A_{2}(3)\) or \({}^{2}A_{3}(2)\). If \(S={}^{2}A_{2}(3)\) then \(\theta_{0}|N_{S}(U)/U|=3\), \(\frac{|S|}{|N_{S}(U)/U||U|}=2^{5}\cdot 3^{2}\) and if \(S={}^{2}A_{3}(2)\) then \(\theta_{0}|N_{S}(U)/U|=8\), \(\frac{|S|}{|N_{S}(U)/U||U|}=2^{4}\cdot 3^{4}\), these two cases contradict (3.4). Finally if \(S=A_{2}(3)\), then \(\theta_{0}=2\), \(|N_{S}(U)/U|=3\) and \(\frac{|S|}{|N_{S}(U)/U|U|}=2^{4}\cdot 3^{2}\), therefore \(G/N\cong\mathrm{Aut}(A_{2}(3))\) and \(|A|=2^{4}\cdot 3^{2}\cdot|N|\). \(|C|=6\) since \(|G|=|E|\). It is easy to get that the number of elements of \(G\) with order \(13\) is \(2^{6}\cdot 3^{3}\cdot|N|\) by some calculations with the help of Theorem 2.2(2)(b) and [12]. Observe that \(E\) has \(\phi(13)|A|=2^{5}\cdot 3^{3}\cdot|N|\) elements of order \(13\), this implies that \(E\) and \(G\) can not be the same order type. We now suppose that \(S=A_{1}(q)\) where \(q=2^{r}\) or \(3^{r}\) and \(r\) is an odd prime number, then \(\frac{|A|}{|N|}=\frac{9(q-1)}{2}\) and \(|C|=2r\). If \(q=3^{r}\), then by equation (3.4) we get that \(3^{r}-1=2^{e}\) where \(e\) is an integer, this follows that \(r=2\), which contradicts the fact that \(r\) is an odd prime number. If \(q=2^{r}\), then by equation (3.4) again \(2^{r}-1=r^{e}\) where \(e\) is an integer, contracting Fermat's little theorem, which shows that \(2^{r}-1\equiv 1(\mathrm{mod}r)\). Finally we suppose \(S\) is a sporadic simple group or \({}^{2}F_{4}(2)^{\prime}\), using [12] and equations (3.2) and (3.3), we can obtain the following Table 4. 
[Table 2: Simple groups of Lie type with disconnected prime graph and \(s(S)=3\); columns: \(S\), Conditions, \(|U|\), \(|W|\), \(\theta_{0}\), \(|S|/(|U||W|)\).]
[Table: simple groups of Lie type with disconnected prime graph; columns: \(S\), Conditions, \(|U|\), \(|W|\), \(\theta_{0}\), \(|S|/(|U||W|)\).]

[Table 4: values of \(|A|/|N|\) and \(|C|\) for the sporadic simple groups and \({}^{2}F_{4}(2)^{\prime}\).]

Observing Table 4, it can be seen that none of the sporadic simple groups, nor \({}^{2}F_{4}(2)^{\prime}\), satisfies equation (3.4), i.e., \[\pi(\frac{|A|}{|N|})\subseteq\pi(|C|).\] Therefore, \(G\) is a 2-Frobenius group.
2307.08435
Extinction time distributions of populations and genotypes
In the long run, the eventual extinction of any biological population is an inevitable outcome. While extensive research has focused on the average time it takes for a population to go extinct under various circumstances, there has been limited exploration of the distributions of extinction times and the likelihood of significant fluctuations. Recently, Hathcock and Strogatz [PRL 128, 218301 (2022)] identified Gumbel statistics as a universal asymptotic distribution for extinction-prone dynamics in a stable environment. In this study, we aim to provide a comprehensive survey of this problem by examining a range of plausible scenarios, including extinction-prone, marginal (neutral), and stable dynamics. We consider the influence of demographic stochasticity, which arises from the inherent randomness of the birth-death process, as well as cases where stochasticity originates from the more pronounced effect of random environmental variations. Our work proposes several generic criteria that can be used for the classification of experimental and empirical systems, thereby enhancing our ability to discern the mechanisms governing extinction dynamics. By employing these criteria, we can improve our understanding of the underlying mechanisms driving extinction processes.
David Kessler, Nadav M. Shnerb
2023-07-17T12:28:47Z
http://arxiv.org/abs/2307.08435v1
# Extinction time distributions of populations and genotypes ###### Abstract In the long run, the eventual extinction of any biological population is an inevitable outcome. While extensive research has focused on the average time it takes for a population to go extinct under various circumstances, there has been limited exploration of the distributions of extinction times and the likelihood of significant fluctuations. Recently, Hathcock and Strogatz [1] identified Gumbel statistics as a universal asymptotic distribution for extinction-prone dynamics in a stable environment. In this study, we aim to provide a comprehensive survey of this problem by examining a range of plausible scenarios, including extinction-prone, marginal (neutral), and stable dynamics. We consider the influence of demographic stochasticity, which arises from the inherent randomness of the birth-death process, as well as cases where stochasticity originates from the more pronounced effect of random environmental variations. Our work proposes several generic criteria that can be used for the classification of experimental and empirical systems, thereby enhancing our ability to discern the mechanisms governing extinction dynamics. By employing these criteria, we can improve our understanding of the underlying mechanisms driving extinction processes. Introduction Biological populations are inevitably destined for extinction. Over 99% of all known species that have ever existed on Earth have already become extinct, and the others are awaiting their inevitable turn. The concern over the anthropogenic acceleration of extinction rates has sparked heated debates in the past decade regarding whether such acceleration is indeed observed in local populations [2; 3] and, if so, what are the global implications of this change. Understanding the likelihood of extinction under specific conditions and the distribution of extinction times is crucial for predicting future extinction events and assessing the threat to biodiversity. The same questions also arise when the objective is to eliminate a particular biological entity, such as in the case of pest control, pathogen eradication, or combating genetic diseases. The dynamics of biological populations is influenced by deterministic and stochastic factors. At the deterministic level, the dynamics can be classified into two main types: those attracted to a manifold (such as a fixed point) with finite population, and those attracted to an extinction point. In the latter case the population decays over time towards zero. Persistent populations of the first type would be expected to survive indefinitely, while populations of the second type disappear. In the common case of exponential decline, the extinction time is logarithmic in the size of the original population. Stochasticity makes this picture much more subtle. Since the state of zero population is an absorbing state, the ultimate fate of _any_ stochastic dynamics is extinction. The sharp distinction between extinction-prone and stable populations thus blurs, and the focus must switch to the characteristics of the extinction process, and in particular to the statistical properties of extinction times. Typically, stochasticity in biological systems is quite strong, even under extremely stable experimental conditions [4]. Stochastic fluctuations are usually classified into two categories, _demographic stochasticity_ (or genetic drift or internal noise) and _temporal environmental stochasticity_ (extrinsic noise) [5]. 
Demographic noise reflects the inherent randomness of the birth-death process caused by small-scale random events that affect the reproductive success of individuals in an uncorrelated manner. Temporal environmental stochasticity (TES) is associated with large-scale events that affect entire populations. Mathematically speaking, this implies that the parameters of a given model (usually, the growth rates) vary in time, where the amplitude and correlation times of the fluctuations characterize the environment. Abundance variations induced by TES are usually proportional to the population size, whereas those induced by demographic stochasticity scale with the square root of population size. Therefore, demographic stochasticity is typically negligible when population size is large [6; 7; 8; 9; 10; 11], but it becomes important at the brink of extinction, or during invasion [12; 13; 14; 15]. This brings us to a third type of systems: those in which deterministic dynamics is weak or negligible, and stochasticity is the main, or only, driver of fluctuations. In this case, we are talking about neutral dynamics, a topic of great importance in population genetics and community ecology [16; 17; 18; 10]. In sum, our classification contains six types of systems: persistent, extinction-prone, and neutral, each of which can be analyzed under pure demographic noise or under a combination of demographic and environmental stochasticity. In recent works, Strogatz and Hathcock [19; 1] analyzed the distribution of extinction times for an extinction prone (negative growth rate, exponentially decaying) population with pure demographic stochasticity. These authors found a universal asymptotic behavior, i.e., that the fluctuations around the expected extinction time obey a Gumbel distribution. Furthermore, the width of this distribution is extremely narrow: while the deterministic mean time to extinction scales with the logarithm of the initial population size \(N_{0}\), the width is \(N_{0}\)-independent. Therefore, relative fluctuations around the mean vanish as \(N\to\infty\). Here we would like to extend the work of Hathcock and Strogatz [1] and to consider statistics of extinction times in all the six typical scenarios mentioned above. Some of these cases have already been discussed in the literature (see details below), but we believe that there would be great benefit in presenting them side by side so that a researcher interested in this topic can see the different alternatives. Furthermore, to the best of our knowledge, the results regarding neutral systems are original. In what follows we devote a single section to each of the six scenarios. In the last section we will discuss the results and provide a general outlook. ## II Extinction-prone dynamics in a fixed environment In this section we first revisit the class of systems considered by Hathcock and Strogatz [1], for which the Gumbel distribution is a universal limit. In the next subsection we provide an example of a "non-Gumbel" scenario and analyze some of its features, from which a few aspects of the general picture emerge. Special attention is directed to the relationship between the average lifespan of a population and the width of the distribution (variance-mean ratio, also known as the index of dispersion or Fano factor). In the "Gumbel cases" the mean time to extinction diverges (albeit logarithmically) at the thermodynamic limit, while the width of the distribution remains constant. Therefore, fluctuations become negligible in large systems. 
This characteristic reflects the negligible effect of demographic noise when the abundance is large. In certain systems, as we will explore, the average time to extinction is unaffected by the initial population size. In such cases, even in the "thermodynamic" limit (large initial size), the mean-variance ratio is \(\mathcal{O}(1)\), indicating significant fluctuations. Specifically, we examine a population-genetic model for diploid with dominance and offer insights into the broader scenario. ### Density-independent dynamics and the Gumbel statistics We begin with a simple example in which the general answer is attainable and suggest an argument for the general case. Let us consider a system with no density-dependent effects. In that case, for any single individual the birth and death rates, per unit time, are fixed, i.e., are independent of the state of other individuals. The death rate is taken to be \(\mu\) and the birth rate is \(\lambda\). If the population is extinction prone, \(\mu>\lambda\). The chance \(P_{n}(t)\) of having \(n\) individuals at time \(t\) satisfies the following differential equation, \[\frac{dP_{n}(t)}{dt}=\mu(n+1)P_{n+1}+\lambda(n-1)P_{n-1}-(\mu+\lambda)nP_{n}. \tag{1}\] We would like to solve this equation and to find \(P_{0}\), the chance of extinction, given that \(P_{n}(t=0)=\delta_{n,N_{0}}\). To do that we introduce the generating function, \[G(x,t)=\sum_{n=0}^{\infty}P_{n}x^{n}, \tag{2}\] obeying, \[\dot{G}=\mu\sum_{n}x^{n}(n+1)P_{n+1}+\lambda\sum_{n}x^{n}(n-1)P_{n-1}-(\mu+ \lambda)\sum_{n}x^{n}nP_{n}. \tag{3}\] Redefinition of indices yields a first order differential equation for \(G\), \[\dot{G}=\mu G^{\prime}+\lambda x^{2}G^{\prime}-(\mu+\lambda)xG^{\prime}=[\mu+ \lambda x^{2}-(\mu+\lambda)x]G^{\prime}=Q(x)G^{\prime}. \tag{4}\] Eq. (4) is a first-order equation that may be solver using characteristics [20]. Every function of the form \(G[F(x)+t]\) will solve Eq. (4) if \(dF/dx=1/Q(x)\). For Eq. (4) the desired \(F\) is, \[F(x)=\frac{\ln\left(\frac{x-1}{\lambda x-\mu}\right)}{\lambda-\mu}. \tag{5}\] What's left is to determine the functional form of \(G[F(x)+t]\), and this has to do with the initial condition. Suppose at \(t=0\) we have only one individual. In that case by definition \(G(x,t=0)=x\) and, \[G^{-1}(F(x))=x. \tag{6}\] The solution for Eq. (6) is \[G(F,t=0)=\frac{\mu e^{(\lambda-\mu)F}-1}{\lambda e^{(\lambda-\mu)F}-1}. \tag{7}\] And therefore the generating function at any time \(t\) is, \[G(F,t)=\frac{\mu e^{(\lambda-\mu)(F+t)}-1}{\lambda e^{(\lambda-\mu)(F+t)}-1}= \frac{\mu e^{(\lambda-\mu)t}\left(\frac{x-1}{\lambda x-\mu}\right)-1}{\lambda e ^{(\lambda-\mu)t}\left(\frac{x-1}{\lambda x-\mu}\right)-1}. \tag{8}\] Hence, the chance that at time \(t\) the lineage of a given individual has already gone extinct is \[Q_{1\to 0}(t)=1-\frac{\mu-\lambda}{\mu e^{t(\mu-\lambda)}-\lambda}. \tag{9}\] Since the dynamics of the lineages of all individuals are statistically identical (no density-dependent effects), if the population at \(t=0\) has \(N_{0}\) individuals, \[Q_{N_{0}\to 0}(t)=\left(1-\frac{\mu-\lambda}{\mu e^{(\mu-\lambda)t}-\lambda} \right)^{N_{0}}. \tag{10}\] To see the connection between the distribution (10) and the Gumbel distribution, let us measure time in units of \(\mu\), and define a decline parameter \(\kappa=1-(\lambda/\mu)\). When \(N_{0}\rightarrow\infty\), the time \(t\) in which all individuals went extinct is large, and therefore \[Q_{N_{0}\to 0}(t)\approx e^{-\kappa N_{0}e^{-\kappa t}}. 
\tag{11}\] The chance of extinction at \(t\) is \(P(t)=dQ/dt\). Now let us define \(t=(s+\nu)/\kappa\), where \(\nu=\ln[\kappa N_{0}]\), so that the large-\(N_{0}\) distribution of extinction times, \(P(t)\), is peaked at \(t=\nu/\kappa\), the point at which the second derivative of the cumulative distribution \(Q(t)\) vanishes. With that definition, \[P(s)=e^{-(s+e^{-s})}, \tag{12}\] which is the PDF of the Gumbel distribution whose scale parameter is \(\beta=1/\kappa\) and whose mode is \(\mu=\nu/\kappa\). The standard deviation of this distribution is \(\pi/\sqrt{6\kappa^{2}}\), an \(\mathcal{O}(1)\), \(N_{0}\)-independent number. The variance-mean ratio is then \[\text{VMR}=\frac{\pi^{2}}{6\kappa(\nu+\gamma_{\text{E}})}=\frac{\pi^{2}}{6\kappa(\ln N_{0}+\gamma_{\text{E}}+\ln\kappa)}, \tag{13}\] where \(\gamma_{E}\) is Euler's constant. Importantly, this ratio decays like \(1/\ln N_{0}\) in the thermodynamic limit. As explained in Appendix A, the effect of demographic stochasticity is negligible outside the "extinction zone" in which \(n<n_{c}\). In the region dominated by demographic noise the dynamics is more or less neutral (see section IV), hence the variance of the extinction-time distribution is proportional to \(n_{c}\). For extinction-prone systems with no density dependence, \(n_{c}\) is \(N_{0}\)-independent (see Appendix A). This feature may change in other scenarios, as demonstrated in the next subsection. The general result of Hathcock and Strogatz [1] may be interpreted as follows. Once the population is in decline, the intraspecific interactions are usually negligible. The extinction time of \(N_{0}\) individuals is thus governed by the chance of the last lineage to go extinct. In the large-\(N_{0}\) limit this becomes the classical extreme-event problem, so as long as the chance of a single lineage to persist decays exponentially at long times, the limit distribution is Gumbel [21]. The same answer holds for any other single-lineage distribution which is neither compact nor fat-tailed. The Gumbel statistics is demonstrated in Figure 1 for logistic dynamics with a negative growth rate.

Figure 1: The distribution of extinction times, \(P(t)\), for extinction-prone populations when stochasticity is purely demographic (Panel A). The dynamics is logistic, with finite carrying capacity \(N_{0}\). In the presence of \(N\) individuals the total death rate is \(N\) and the total birth rate is \(0.5N(1-N/N_{0})\); the initial population was taken to be \(N_{0}\). The Gumbel distribution parameters \(\beta=\sqrt{6\text{Var}[t]/\pi^{2}}\) and \(\mu=\mathbb{E}[t]-\gamma_{E}\beta\) (\(\gamma_{E}\) is Euler's constant) were extracted for each \(N_{0}\). When a histogram of the adjusted variable \(z=(t-\mu)/\beta\) is plotted (Panel B), all data collapse and fit the Gumbel distribution \(\exp(-[z+\exp(-z)])\) (full black line). Small deviations are observed for \(N_{0}=10\) and \(N_{0}=100\), but above these numbers there is perfect agreement between the predicted and the observed distributions. The mean and the variance for each \(N_{0}\) are shown in the insets of Panel B. While the mean grows linearly with \(\ln N_{0}\) (upper inset), the variance saturates (lower inset) to its predicted value for \(\kappa=1/2\), namely \(2\pi^{2}/3\approx 6.58\).
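The density-independent case is easy to reproduce numerically: Eq. (10) is exact for the linear birth-death process, so a direct Gillespie simulation should match it up to sampling error. The sketch below is a minimal version; the rate values \(\lambda=0.5\), \(\mu=1\), the initial size \(N_{0}=100\) and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_time(n0, lam, mu, rng):
    """One Gillespie realization of the density-independent birth-death process
    with per-capita birth rate lam and per-capita death rate mu (lam < mu)."""
    n, t = n0, 0.0
    while n > 0:
        rate = (lam + mu) * n
        t += rng.exponential(1.0 / rate)
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return t

lam, mu, n0 = 0.5, 1.0, 100
times = np.sort([extinction_time(n0, lam, mu, rng) for _ in range(5000)])

def Q_exact(t, n0, lam, mu):
    """Eq. (10): probability that the whole population is extinct by time t."""
    return (1.0 - (mu - lam) / (mu * np.exp((mu - lam) * t) - lam)) ** n0

emp_cdf = np.arange(1, times.size + 1) / times.size
print(np.max(np.abs(emp_cdf - Q_exact(times, n0, lam, mu))))   # sampling error only
```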
### Density-dependent dynamics: Non-Gumbel scenarios As pointed out by Hathcock and Strogatz [1], the Gumbel distribution is a universal asymptotic limit of many extinction-time statistics, provided that the rates of demographic events (transition rates) decrease linearly towards zero in the vicinity of the extinction point. This characteristic reflects the weakening of the interactions between individuals in the extinction zone, so the rate of events is linearly proportional to the number of individuals. When this condition is not fulfilled, the distribution is not Gumbel. In this subsection, we consider a specific example and provide some insights into the more general cases. As a realistic example, let us consider a population-genetics model for diploids with dominance [22; 23; 24]. This model describes the dynamics of two alleles, \(A\) and \(a\), in a randomly mating diploid population. The allele \(A\) is always dominant to \(a\), so that the phenotype of an \(aA\) heterozygote is the same as the phenotype of \(AA\). If the fraction of \(a\) alleles in the gamete pool is \(x\) and the fraction of \(A\) is \((1-x)\), then, after random mating, the zygote genotypes follow classic Hardy-Weinberg proportions, with \(AA:Aa:aa\) as \((1-x)^{2}:2x(1-x):x^{2}\). Setting the fitness of the \(AA\) and \(Aa\) phenotypes to unity and the fitness of \(aa\) to \(f<1\), one expects the \(a\) allele to disappear from a well-mixed, fixed-size population. This purifying selection process is, however, very slow, because an individual will only suffer from low fitness when both of its alleles are of type \(a\). Since the number of \(a\) homozygotes is proportional to \(x^{2}\), the process is always density-dependent and one expects a non-Gumbel skewed distribution. Figure 2 shows results from a simulation of this process. In each timestep one individual is chosen to die, so two \(a\) alleles are lost with probability \(x^{2}\), one with probability \(2x(1-x)\), and the chance of zero \(a\) loss is \((1-x)^{2}\). Then a new individual is introduced, whose two alleles are chosen at random from the gamete pool in which the fraction of \(a\) is \[\frac{fx^{2}+x(1-x)}{fx^{2}+2x(1-x)+(1-x)^{2}}. \tag{14}\] Although the distribution of the standardized variables is again narrow, and appears to be \(N_{0}\)-independent, it does not satisfy Gumbel statistics, as demonstrated in Figure 2. More importantly, as demonstrated in the inset of Figure 2, both the mean and the standard deviation scale with the square root of \(N_{0}\), and therefore the width of the distribution is proportional to its mean even in the thermodynamic limit.
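The death-birth scheme described above can be reproduced with a few lines of code. The sketch below tracks the number of \(a\) alleles; since the description does not specify whether the gamete-pool fraction (14) is evaluated before or after the death step, the pre-death frequency is used here, and the population size and number of runs are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def a_extinction_time(N0, f, rng):
    """One realization of the diploid-with-dominance scheme described in the text:
    N0 diploid individuals (2*N0 alleles), the a allele starting at frequency 1/2."""
    na, t = N0, 0
    while 0 < na < 2 * N0:
        x = na / (2 * N0)
        # gamete-pool frequency of a after selection against aa, Eq. (14)
        xg = (f * x**2 + x * (1 - x)) / (f * x**2 + 2 * x * (1 - x) + (1 - x)**2)
        # one individual dies: 2, 1 or 0 copies of a are removed (Hardy-Weinberg proportions)
        u = rng.random()
        na -= 2 if u < x**2 else (1 if u < x**2 + 2 * x * (1 - x) else 0)
        # one individual is born, each allele drawn independently from the gamete pool
        na += rng.binomial(2, xg)
        t += 1
    return t if na <= 0 else None      # None if a happened to reach fixation instead

N0, f = 100, 0.5
runs = [a_extinction_time(N0, f, rng) for _ in range(500)]
times = np.array([t for t in runs if t is not None], dtype=float)
print(times.mean(), times.std() / times.mean())   # the relative width stays O(1), cf. Fig. 2
```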
These examples suggest a general insight as to the \(N_{0}\) scaling of the width of the distribution and its mean. As explained in Appendix A, the width reflects the effect of demographic stochasticity, which is relatively weak and becomes prominent only when the deterministic forces are tiny. A population undergoing demographic stochasticity and decline can be described by the Langevin equation: \[dn=-\kappa n^{p}dt-\sigma_{d}\sqrt{n}dW, \tag{15}\] where \(\kappa\) is the decay coefficient (related to \(\kappa\) and \(f\) in the above examples), \(p\) is the power that characterizes the interaction between individuals in the dilute limit (\(p=1\) for exponential decay with no interactions, \(p=2\) for diploid with dominance) and \(\sigma_{d}\) is the amplitude of demographic variations. The last term in Eq. (15) becomes important only when \(n<n_{c}\). In Appendix A we show that \(n_{c}\sim N^{(p-1)/p}\) as long as \(p>1\), and \(n_{c}\) is \(\mathcal{O}(1)\) for \(p\leq 1\). For \(n<n_{c}\) the dynamics is neutral (see section V), so the contribution of this "extinction zone" (both to the mean time to extinction and to its standard deviation) is proportional to \(n_{c}\). The regime \(n<n_{c}\) is the only place in which demographic fluctuations are important, so the variance of the extinction-time distribution is \(n_{c}^{2}\). The mean time to extinction, on the other hand, is the sum of the deterministic timescale, i.e., the time required to decline from \(N_{0}\) to \(n_{c}\), and the stochastic period that scales with \(n_{c}\). The deterministic timescale for the dynamics described by Eq. (15) is \(N_{0}^{1-p}\) for \(p<1\), \(\log N_{0}\) for \(p=1\), and is \(\mathcal{O}(1)\) for \(p>1\). Accordingly, the variance-mean ratio goes to zero if \(p\leq 1\) (assuming both \(N_{0}\) and \(N\) diverge). For \(p>1\) the mean and the standard deviation both have the same scaling with \(n_{c}\sim N^{(p-1)/p}\), so the variance-mean ratio diverges as \(N\) and \(N_{0}\) go to infinity.

Figure 2: **Diploid with dominance**: Main panel: the distribution of normalized extinction times, \(P(z)\) vs. \(z\) (using the adjusted variable \(z=(t-\mu)/\beta\), where \(\beta=\sqrt{6\mathrm{Var}[t]/\pi^{2}}\) and \(\mu=\mathbb{E}[t]-\gamma_{E}\beta\)), where \(t\) is the time to extinction of the \(a\) allele whose fitness is \(f=1/2\). \(N\) is the initial number of \(a\) alleles, out of a total population of \(2N_{0}\) alleles (\(N_{0}\) diploid individuals). Results are shown for \(N_{0}=200\) and for \(N_{0}=400\) (each statistic reflects \(10^{5}\) numerical experiments). Both distributions are almost identical and differ substantially from the Gumbel curve (black line). Inset: the mean (black) and the standard deviation (red) for the same system, plotted vs. \(\sqrt{N_{0}}\) for \(N_{0}=100,200,400,800,1600\). Both quantities scale linearly with \(\sqrt{N_{0}}\), so the width of the distribution remains proportional to its mean even in the thermodynamic limit.

## III Extinction-prone dynamics in stochastic environment

In this section we consider the scenario of a population influenced by environmental stochasticity. In what follows, the term "environment" encompasses any external factor that impacts the demographic rates of an _entire_ population, including factors such as competition and/or predation pressure from other species. When the environment undergoes stochastic variations, the birth and death rates of the population also fluctuate. Consequently, the overall growth rate (birth rate minus death rate) experiences corresponding variations, leading the population to exhibit either growth or decay. The population is prone to extinction if its mean growth rate is negative [25; 26]. Let us reemphasize the distinction between demographic and environmental stochasticity. The origin of demographic noise (the stochastic characteristics of the birth-death process, as described in the last section) is also the effect of environmental variations on individuals. The distinction between these two forms of stochasticity has to do with their range. When the mean demographic rates remain constant over time and the fluctuations affect individuals in an uncorrelated manner, it is considered demographic noise. On the other hand, if an entire population is affected by the stochasticity, it is classified as environmental stochasticity. Demographic noise is commonly characterized as "white" noise, where different birth or death events are uncorrelated in time.
In contrast, the correlation time becomes a significant characteristic of environmental variations. To wit, let us consider a simple, purely environmental, two-state system (telegraphic noise). We assume that the environment may be in either of two states, say state 1 and state 2. The environment remains in a particular state for a certain duration (referred to as the dwell time, which is considered the unit time of the process) before switching to the alternative state with a probability of \(1/2\). In each of these states of the environment, the population either grows exponentially or decreases exponentially, so if the number of individuals is \(n\), \(\ln n\) increases or decreases linearly with time. When the number of individuals is large, demographic stochasticity is negligible with respect to environmental stochasticity [5]. Therefore, in many studies the effect of demographic stochasticity is taken into account only by introducing a threshold at a given density, below which the population is considered extinct. Recent analyses suggest that this threshold has to be taken at the abundance at which the strength of demographic stochasticity is equal to the strength of environmental stochasticity [27; 15]. Once demographic stochasticity is neglected, the dynamics of \(n\) is simply \(n(t+\tau)=n(t)\exp(\zeta\tau)\), where \(\tau\) is the dwell time and \(\zeta\) is the (time dependent) growth exponent (if the environment admits two states, \(\zeta\) is either \(\zeta_{1}>0\) or \(\zeta_{2}<0\)). Taking \(\tau\) as the unit time, one arrives at \[x_{t+1}=x_{t}+\zeta_{t}, \tag{16}\] where \(x=\ln n\). The random walk in \(x\)-space is characterized by the mean and the variance of \(\zeta\), namely \(\kappa=-\overline{\zeta}=-(\zeta_{1}+\zeta_{2})/2\) and \(\sigma^{2}=\text{Var}[\zeta]\), where \(\kappa\), the decline rate, is positive since the mean growth rate \(\overline{\zeta}\) is assumed to be negative. When the initial population \(N_{0}\) is large, the problem is mapped to the classical first-passage-time problem for a biased random walker, as noted a while ago [28; 29]. Accordingly, if \(x_{0}=\ln N_{0}\) is the initial location of the random walker and \(x_{1}=\ln[N_{th}]\) is the threshold density below which the population is considered extinct, the probability distribution function for the time required to cross the log-space distance \(\Delta x=x_{0}-x_{1}=\ln(N_{0}/N_{th})\) is given by the inverse Gaussian distribution, \[P(t)=\frac{\Delta x}{\sigma\sqrt{2\pi t^{3}}}e^{-\frac{(\Delta x-\kappa t)^{2} }{2\sigma^{2}t}}. \tag{17}\] The mean of this distribution is \(\mathbb{E}[t]=\Delta x/\kappa\) and its variance \(\text{Var}[t]=\mathbb{E}[t]\sigma^{2}/\kappa^{2}\). Therefore, the variance-mean ratio in that case is \(N_{0}\) independent, \(\text{VMR}=\sigma^{2}/\kappa^{2}\). The chance of the system to survive until \(t\) (i.e., the cumulative distribution function) is given by \[Q(t)=\frac{1}{2}\left(1-\text{Erf}\left[\frac{\kappa t-\Delta x}{\sqrt{2t \sigma^{2}}}\right]-e^{2\kappa\Delta x/\sigma^{2}}\text{Erfc}\left[\frac{ \kappa t+\Delta x}{\sqrt{2t\sigma^{2}}}\right]\right). \tag{18}\] In the case of exponential decay (\(p=1\)) with pure demographic noise considered in section II, the mean time to extinction is also logarithmic in the initial population size, but the variance and the higher cumulants are \(\mathcal{O}(1)\). Here both mean and variance are linear in \(\ln N_{0}\), so the distribution is much wider than the one that characterizes the purely demographic case.
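A minimal numerical sketch of the telegraphic-noise dynamics of Eq. (16) is given below; the growth exponents \(\zeta_{1},\zeta_{2}\), the population sizes, and the sample size are illustrative assumptions. The estimated mean and variance-mean ratio of the first-passage times can be compared with the inverse Gaussian predictions quoted after Eq. (17).

```python
import numpy as np

rng = np.random.default_rng(1)

def passage_time(n0, n_th, z1, z2, max_steps=10**6):
    """Log-abundance random walk of Eq. (16) under two-state (telegraphic)
    noise; returns the first time the extinction threshold is crossed."""
    x, x_th = np.log(n0), np.log(n_th)
    for t in range(1, max_steps + 1):
        x += z1 if rng.random() < 0.5 else z2
        if x <= x_th:
            return t
    return max_steps

if __name__ == "__main__":
    z1, z2 = 0.4, -0.6                   # illustrative growth exponents
    kappa = -(z1 + z2) / 2.0             # decline rate (positive)
    sigma2 = ((z1 - z2) / 2.0) ** 2      # Var[zeta] for the symmetric two-state noise
    n0, n_th = 10**6, 10
    dx = np.log(n0 / n_th)
    times = np.array([passage_time(n0, n_th, z1, z2) for _ in range(5000)])
    print("mean:", times.mean(), "predicted:", dx / kappa)
    print("VMR :", times.var() / times.mean(), "predicted:", sigma2 / kappa ** 2)
```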
When the noise is demographic, its effect becomes non-negligible only when the number of individuals \(n\) is \(\mathcal{O}(1)\) (smaller than \(N_{th}\)), while for systems with environmental stochasticity the noise affects the system all the way down from \(N_{0}\) to extinction, no matter how large \(N_{0}\) is. The discussion above focuses on a specific example, namely telegraphic noise. However, at its core, the analysis considers the dynamics of a random walker (in the log-abundance space) with a bias. It can be shown (see, e.g., [26], Appendix A) that, as long as the log-abundance steps are not excessively large, the diffusion approximation is applicable and the long-term characteristics of the dynamics are solely influenced by the mean and the variance of the \(\zeta(t)\) process. Therefore, the results presented above remain valid.

## IV Marginal dynamics with pure demographic stochasticity: the Kimura-Hubbell neutral model

In sections II and III, we focused on the persistence time statistics of populations prone to extinction. In the upcoming two sections, our aim is to examine the same question but with a focus on marginal populations. These marginal populations are characterized by deterministic dynamics that support a marginally stable manifold, which includes the extinction state. A classic example is the case of competition between two populations or two types that possess identical fitness. For instance, consider two genotypes that differ only by a synonymous mutation, resulting in the same phenotype. In such cases, the system's dynamics become purely stochastic. The famous neutral models proposed by Kimura [16; 30] in population genetics and Hubbell [17; 18] in community ecology address such systems, where the dynamics are solely driven by demographic noise. Under neutral dynamics, species identity is irrelevant. One can consider a single species as a focal species and pool the effect of all other species together into a single entity (an effective "rival species"). Therefore, in what follows we examine a single species within a community of \(N\) individuals, whose dynamics ends at one of the two absorbing states, i.e., the zero abundance state (extinction) or at abundance \(N\) (fixation). The systems considered in previous sections admit deterministic decline dynamics, so in the long run the overall population never grows beyond its initial value \(N_{0}\), and hence extinction times and extinction statistics are governed by \(N_{0}\), the initial abundance, and not by \(N\), the maximum carrying capacity. Under neutral dynamics, a population may either decline to extinction or grow to fixation, and therefore \(N\) sets the relevant timescales. \(N_{0}\) affects the statistics only through its relationship with \(N\), as explained below.

### Case I: a macroscopic population

In the Kimura-Hubbell version of the neutral model, with pure demographic stochasticity, one considers the dynamics of \(x=n/N\), where \(n\) is the number of individuals of a given focal species and \(N\) is the total number of individuals. In this subsection we assume that the initial frequency \(N_{0}/N\) is \(\mathcal{O}(1)\). We would like to obtain the statistics of absorption (either fixation or extinction) times, where the dynamics of \(P(x,\tau)\) is given by, \[\frac{\partial P(x,\tau)}{\partial\tau}=\frac{\partial^{2}\left[x(1-x)P(x,\tau)\right]}{\partial x^{2}};\qquad P(0,\tau)=P(1,\tau)=0,\qquad P(x,\tau=0)=\delta(x-1/2). \tag{19}\] Here \(\tau\) is the dimensionless timescale \(t/N\).
Defining \[W(x,\tau)=x(1-x)P(x,\tau), \tag{20}\] \(W\) satisfies, \[\frac{\partial W(x,\tau)}{\partial\tau}=x(1-x)\frac{\partial^{2}W(x,\tau)}{\partial x^{2}}. \tag{21}\] Taking \(W(x,\tau)=W_{m}(x)e^{\lambda_{m}\tau}\), the equation for the eigenfunctions \(W_{m}(x)\) and the eigenvalues \(\lambda_{m}\) is, \[\frac{\partial^{2}W_{m}(x)}{\partial x^{2}}-\lambda_{m}\frac{W_{m}(x)}{x(1-x)}=0. \tag{22}\] The general solution of (22) is a linear combination of two independent functions. One is a Meijer G-function that diverges at the origin, so its contribution must vanish (since \(P(x)\) vanishes at \(x=0\) and at \(x=1\), so does \(W\)). Thus the solution, up to a constant, is given by the other solution, which vanishes at \(x=0\), \[W_{m}(x)=x\,{}_{2}F_{1}\left(\frac{1}{2}(1-\sqrt{1-4\lambda_{m}}),\frac{1}{2}(1+\sqrt{1-4\lambda_{m}});2;x\right), \tag{23}\] where \({}_{2}F_{1}\) is the hypergeometric function. The \(\lambda_{m}\)s are determined by the condition \(W(x=1)=0\), that yields, \[W_{m}(1)=\frac{\cos\left(\frac{1}{2}\pi\sqrt{1-4\lambda_{m}}\right)}{\pi\lambda_{m}}=0. \tag{24}\] Therefore, \[\lambda_{m}=-m(m+1). \tag{25}\] Since \(m\) is an integer, the corresponding eigenfunction simplifies to, \[W_{m}(x)=x\,{}_{2}F_{1}(-m,m+1;2;x)=xP_{m}^{(1,-1)}(1-2x)/(m+1), \tag{26}\] where the \(P_{m}^{(\alpha,\beta)}(x)\) are Jacobi polynomials. Accordingly, the general solution to Eq. (21) takes the form, \[W(x,\tau)=\sum_{m=1}^{\infty}A_{m}W_{m}(x)e^{-m(m+1)\tau}. \tag{27}\] The \(m=0\) (time-independent) term yields a non-normalizable probability function and therefore it has been discarded. The constants \(A_{m}\) are determined by the initial condition. The orthogonality relationships of the Jacobi polynomials, when translated to functions of \(1-2x\), are \[\int_{0}^{1}\frac{x}{1-x}P_{m}^{(1,-1)}(1-2x)P_{n}^{(1,-1)}(1-2x)\,dx=\delta_{n,m}\frac{(m+1)}{m(2m+1)}. \tag{28}\] To find \(A_{m}\) from \(W(x,0)=x(1-x)\delta(x-1/2)\) one multiplies both the left and the right side of this equation by \(P_{n}^{(1,-1)}(1-2x)\), integrates over \(x\) from zero to one and applies the relationship (28). That yields \[A_{m}\frac{1}{m(2m+1)}=\begin{cases}0&\text{if m even}\\ \frac{P_{m}^{(1,-1)}(0)}{m+1}=\frac{(-1)^{m_{1}}C_{m_{1}}}{2^{m}}&\text{if m odd} \end{cases} \tag{29}\] where \(m=2m_{1}+1\) and \(C_{m_{1}}=(2m_{1})!/(m_{1}!(m_{1}+1)!)\) are the Catalan numbers. The chance to survive until \(t\), \(Q(t)\), is given by the integral of \(P(x)\) over \(x\) from zero to one. Using Eq. (27), the definition (20), the relationships between Jacobi polynomials and \(W\) and the integral \[\int_{0}^{1}\ dx\ \frac{xP_{m}^{(1,-1)}(1-2x)}{x(1-x)}=\frac{2}{m}. \tag{30}\] one finds \[Q(t)=\int_{0}^{1}dx\ \sum_{m=0}^{\infty}A_{2m+1}\frac{W_{2m+1}(x)}{x(1-x)}e^{-(2m+1)(2m+2)\tau}=\sum_{m=0}^{\infty}\frac{(-1)^{m+1}C_{m}}{2^{2m+1}}(4m+3)e^{-(2m+1)(2m+2)\tau}. \tag{31}\] Accordingly, the chance of extinction at \(t\), \(\mathcal{P}(t)\), is \[\mathcal{P}(t)=-\frac{dQ(t)}{dt}=\frac{1}{N}\sum_{m=0}^{\infty}\frac{(-1)^{m}C_{m}}{2^{2m+1}}(2m+1)(2m+2)(4m+3)e^{-(2m+1)(2m+2)t/N}; \tag{32}\] Figure 3 shows the correspondence between the predicted and the measured \(\mathcal{P}(t)\). Following the calculation that leads to Eq. (49) below, one obtains an expression for the asymptotic behavior of the \(n\)-th moment of the extinction time distribution \[\overline{t^{n}}=n!\,B_{n}\left(N\ln 2\right)^{n}. \tag{33}\]
The number \(B_{n}\) is given by a complex set of hypergeometric functions; however, \(B_{1}=1\) and in general \(B_{n}\approx\exp(-0.68[n-1])\) provides an excellent approximation for the first 10 moments, as demonstrated in Figure 3. The mean and the variance are \[\overline{t}=N\ln 2\qquad\text{Var}[t]=(2B_{2}-1)(N\ln 2)^{2}, \tag{34}\] so the VMR scales like \(N\).

### Case II: a single, neutral mutant

Let us consider, now, the case of other initial conditions, \(W(x,0)=x(1-x)\delta(x-x_{0})\), and in particular the survival time distribution of a single mutant, \(x_{0}=1/N\). Now the general solution for \(W(x,\tau)\) takes the form, \[W(x,\tau)=\sum_{m=1}^{\infty}\frac{m(2m+1)}{m+1}x_{0}P_{m}^{(1,-1)}(1-2x_{0})xP_{m}^{(1,-1)}(1-2x)e^{-m(m+1)\tau}. \tag{35}\] Dividing by \(x(1-x)\) and integrating over \(x\), \[Q(x,\tau)=\sum_{m=1}^{\infty}\frac{(2m+1)}{m+1}x_{0}P_{m}^{(1,-1)}(1-2x_{0})e^{-m(m+1)\tau}. \tag{36}\] If \(N\) is large, for the dynamics of a single mutant (\(x_{0}=1/N\)) one may use the Mehler-Heine formula for the Jacobi polynomials, \[P_{m}^{(1,-1)}(1-2/N)\approx\sqrt{N}J_{1}\left(\frac{2m}{\sqrt{N}}\right), \tag{37}\] where \(J_{1}\) is the Bessel function of the first kind.

Figure 3: Panel (A): The extinction probability at \(t\), \(\mathcal{P}(t)\), is plotted against \(t\) for a population of \(N=20,50\) and \(100\). The initial condition is \(n(t=0)=N/2\), namely \(x=1/2\). The results of Eq. (32) (full curves) are compared with the normalized distribution obtained numerically (open circles). In the numerical experiment, the chance of the focal population to increase, or to decrease, by one unit in each elementary step is \(1/2\), and in each elementary step time is incremented by \(1/[2x(1-x)]\). In panel (B) the theoretical predictions for the moments (Eq. 33, dashed lines) are compared with the moments of these distributions (full circles).

Accordingly, \[Q(x,\tau)\approx\frac{1}{N}\sum_{m=1}^{\infty}\frac{(2m+1)}{m+1}J_{1}\left(\frac{2m}{\sqrt{N}}\right)e^{-m(m+1)\tau}. \tag{38}\] Since \(J_{1}\) vanishes at zero, the small-\(m\) behavior yields a negligible contribution to the sum. This facilitates the approximation, \[Q(x,\tau)\approx\frac{2}{N}\int_{x=1}^{\infty}J_{1}\left(\frac{2x}{\sqrt{N}}\right)e^{-x^{2}t/N}\,dx=1-e^{-1/t}-\frac{1-e^{-t/N}}{t}, \tag{39}\] so the chance of the lineage of a single mutant to reach extinction at \(t\) is, \[\mathcal{P}(t)=-\frac{dQ(t)}{dt}=\frac{e^{-1/t}}{t^{2}}-\frac{1-e^{-t/N}(1+t/N)}{t^{2}}. \tag{40}\] The first moment may be obtained from this expression, and one gets \(\overline{t}=\ln N+1-2\gamma_{E}\), where \(\gamma_{E}\) is Euler's constant. To get the higher moments we implement the procedure described above, \(\overline{t^{n}}=-\int t^{n}[dQ/dt]\ dt\), \[\overline{t^{n}}=n!N^{n-1}\sum_{m=1}^{\infty}\frac{2m+1}{m^{n}(m+1)^{n+1}}P_{m}^{(1,-1)}(1-2/N). \tag{41}\] Since the main contribution comes from the small-\(m\) region, we can approximate \(P_{m}^{(1,-1)}(1-2/N)\approx(m+1)\), and therefore \[\overline{t^{n}}=n!N^{n-1}\sum_{m=1}^{\infty}\frac{2m+1}{m^{n}(m+1)^{n}}. \tag{42}\] Figure 4 demonstrates the validity of these results. Note that the time required for a single mutant to be absorbed follows a logarithmic scaling of \(\ln N\), whereas the time for a macroscopic population scales linearly with \(N\). Additionally, the variance of extinction times for a single mutant is \(\mathcal{O}(N)\), while for a macroscopic population, it scales with \(N^{2}\).
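The following Python sketch mimics the elementary-step experiment described in the caption of Figure 3; aggregating the expected waiting time between jumps and measuring time in units of \(N\) elementary events is our normalization choice, and the population size and sample size are illustrative. The macroscopic mean should approach \(N\ln 2\), while for a single mutant only the \(\ln N\) scaling should be expected, since the \(\mathcal{O}(1)\) constant is sensitive to the microscopic implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_e = 0.5772156649015329

def absorption_time(n_start, N):
    """Neutral dynamics as in the Figure 3 experiment: the focal abundance n
    performs an unbiased +/-1 walk; we add the expected waiting time between
    jumps and measure time in units of N elementary events (our convention)."""
    n, t = n_start, 0.0
    while 0 < n < N:
        x = n / N
        t += 1.0 / (2.0 * N * x * (1.0 - x))
        n += 1 if rng.random() < 0.5 else -1
    return t

if __name__ == "__main__":
    N = 100
    macro = np.array([absorption_time(N // 2, N) for _ in range(1000)])
    single = np.array([absorption_time(1, N) for _ in range(1000)])
    print("macroscopic mean  :", macro.mean(), " vs N*ln2 =", N * np.log(2))
    # for a single mutant only the ln N scaling is expected to match; the O(1)
    # constant depends on the microscopic implementation
    print("single-mutant mean:", single.mean(), " vs ln N + 1 - 2*gamma_E =",
          np.log(N) + 1 - 2 * gamma_e)
```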
In general, the ratio between the moments described in Eq. (42) and the corresponding moments in Eq. (33) is a factor of \(N\). This characteristic highlights the fact that an individual either goes extinct within a timescale of \(\mathcal{O}(1)\) or, with a probability that scales like \(1/N\), avoids extinction and achieves macroscopic population sizes.

## V Marginal dynamics with environmental stochasticity: the time-averaged neutral model

The neutral model, which we presented in the previous section, was initially introduced by Kimura as a model describing competition between two alleles with equal fitness, and later (with certain modifications) was implemented by Hubbell to describe the dynamics of an ecological community in which all species have equal fitness. Both variations of the model gained immense popularity. In particular, its community ecology version successfully explained the distribution of species abundance in high diversity assemblages using a small number of parameters [16; 17; 31; 18]. However, it seems that the neutral model fails to capture the _dynamics_ of ecological communities. According to the neutral model, which contains only demographic stochasticity that generates binomial noise, one expects the per-generation changes in abundance to be proportional to the square root of population size. In practice, changes in abundance are usually much larger [32], and usually scale with population size as expected in systems where stochasticity is environmental [6; 7], not demographic. Similarly, the times to the most recent common ancestor proposed by the neutral model are way too long [33; 34]; this phenomenon also reflects the unrealistic "slowness" of neutral dynamics. To address these issues, the time-averaged neutral model of biodiversity was proposed [35; 10; 36]. This is essentially a neutral model with temporal environmental stochasticity. Like the original neutral model, the dynamics is purely stochastic, but in this model, the stochasticity has two sources - both demographic and environmental. All species have the same time-averaged fitness, but at any given moment, there are lower-fitness and higher-fitness species. This immediately leads to abundance variations that scale with population size, as expected, and the theory accounts for both static and dynamic patterns of community assembly [10].

Figure 4: Panel (A): The extinction probability of a single mutant (\(n(t=0)=1\)) at \(t\), \(\mathcal{P}(t)\), is plotted against \(t\) for a population of \(N=100000\). The result of Eq. (40) (full curve) is compared with the normalized distribution obtained numerically (open circles). In panel (B) the theoretical predictions for the moments (Eq. 42, dashed lines) are compared with the moments of these distributions (full circles).

In the following treatment, we consider a focal species representing a fraction \(x\) of the community, competing with another species representing a fraction \(1-x\) of the same community. Once again, we address the question of the distribution of times until the focal species reaches either extinction or fixation, this time under environmental noise. If we allow ourselves to neglect the demographic noise, by replacing it with an absorbing boundary condition for populations below a certain threshold, what we obtain is an unbiased random walk in logit (\(z=\ln[x/(1-x)]\)) space. Therefore, the problem reduces to the distribution of times for a one-dimensional simple random walk with absorbing boundary conditions.
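Before turning to the calculation, a minimal sketch of the corresponding numerical experiment (an unbiased \(\pm 1\) random walk between two absorbing boundaries, as in Figure 5) is given below; it assumes SciPy is available for the Hurwitz zeta function and compares the measured moments with Eq. (49) derived below. The value of \(L\) and the number of repetitions are illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.special import zeta          # zeta(s, q) is the Hurwitz zeta function

rng = np.random.default_rng(3)

def exit_time(L, z0):
    """Unbiased +/-1 random walk on {0,...,L} started at z0 (the Figure 5
    experiment); returns the number of steps until absorption."""
    z, t = z0, 0
    while 0 < z < L:
        z += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

def moment_prediction(n, L, D):
    """n-th moment of the absorption time according to Eq. (49) below."""
    pref = factorial(n) / (4 ** (2 * n) * np.pi) * (L ** 2 / (np.pi ** 2 * D)) ** n
    return pref * (zeta(2 * n + 1, 0.25) - zeta(2 * n + 1, 0.75))

if __name__ == "__main__":
    L, D = 60, 0.5                      # D = 1/2 for a unit-step, unit-time walk
    times = np.array([exit_time(L, L // 2) for _ in range(3000)])
    print("mean      :", times.mean(), "predicted:", moment_prediction(1, L, D))
    print("2nd moment:", np.mean(times ** 2.0), "predicted:", moment_prediction(2, L, D))
```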
Mathematically speaking, we consider the dynamics of a population whose fraction \(x=n/N\) satisfies \(\dot{x}=\zeta(t)x(1-x)\), where \(\zeta(t)\) is a zero mean stochastic process whose variance is \(\sigma^{2}\). Therefore, the logit variable \(z\equiv\ln[x/(1-x)]\) is an unbiased random walk, \(z(t)=z_{0}+\int_{0}^{t}\zeta(t^{\prime})dt^{\prime}\). If the threshold fraction \(x_{th}=N_{th}/N\ll 1\), the boundary conditions are, to the left \(z_{th,L}\approx\ln(N_{th}/N)\) and to the right \(z_{th,R}\approx\ln(N/N_{th})\). Since there is no bias the absolute value is not important, so we focus on the corresponding diffusion equation \[\frac{\partial P(z,t)}{\partial t}=D\frac{\partial^{2}P(z,t)}{\partial z^{2}};\qquad P(0,t)=P(L,t)=0,\qquad P(z,0)=\delta(z-L/2), \tag{43}\] where \(L=z_{th,R}-z_{th,L}\). The problem is thus equivalent to the heat equation on a \(1d\) slab. The general form of the solution is, \[P(z,t)=\sum_{m=1}^{\infty}A_{m}\sin\left(\frac{m\pi z}{L}\right)e^{-\lambda_{m}t} \tag{44}\] where \[\lambda_{m}=\frac{Dm^{2}\pi^{2}}{L^{2}}. \tag{45}\] Thus, the solution that satisfies both boundary and initial condition is, \[P(z,t)=\frac{2}{L}\sum_{m}(-1)^{m}\sin\left(\frac{(2m+1)\pi z}{L}\right)e^{-(2m+1)^{2}\tau}, \tag{46}\] where \(\tau\equiv\pi^{2}Dt/L^{2}\). The chance of the random walker to survive to time \(t\), \(Q(t)\), is, \[Q(t)=\int_{0}^{L}P(z,t)dz=\frac{4}{\pi}\sum_{m}\frac{(-1)^{m}}{2m+1}e^{-(2m+1)^{2}\tau}. \tag{47}\] The chance of extinction at a given time \(t\) is \(-dQ/dt\), and therefore the \(n\)-th moment of \(t\) is given by \[\overline{t^{n}}=-\int_{0}^{\infty}dt\ t^{n}\frac{dQ}{dt}=n\int_{0}^{\infty}dt\,t^{n-1}Q(t)=n\left(\frac{L^{2}}{\pi^{2}D}\right)^{n}\int_{0}^{\infty}d\tau\,\tau^{n-1}Q(\tau). \tag{48}\] Evaluating the integral one finds \[\overline{t^{n}}=\frac{n!}{4^{2n}\pi}\left(\frac{L^{2}}{\pi^{2}D}\right)^{n}\left[\zeta(2n+1,1/4)-\zeta(2n+1,3/4)\right], \tag{49}\] where \(\zeta(s,a)\) is the Hurwitz zeta function. The agreement between these theoretical predictions and the outcomes of a standard Monte-Carlo simulation is demonstrated in Figure 5.

Figure 5: The \(n\)-th moment of the extinction time, for a random walker that started at \(z=L/2\). In each step the random walker jumps to the left or to the right with probability \(1/2\), and time is incremented by one unit. Moments were calculated for extinction times evaluated in \(10^{5}\) numerical experiments for \(L=200\) (circles) and \(L=20\) (diamonds). Dashed lines are the corresponding predictions from Eq. (49) with \(D=1/2\).

Here the general scaling of the \(n\)-th cumulant is \(L^{2n}\), so the mean time to absorption scales like \(\ln^{2}N\) and the variance like \(\ln^{4}N\). As in the case of neutral dynamics with pure demographic stochasticity, the VMR diverges as \(N\rightarrow\infty\). For generic initial conditions, \(P(z,0)=\delta(z-z_{0})\), Eq. (47) is replaced by \[Q(t)=\int_{0}^{L}P(z,t)dz=\frac{4}{\pi}\sum_{m=0}^{\infty}\frac{\sin[(2m+1)\pi z_{0}/L]}{2m+1}e^{-(2m+1)^{2}\tau}, \tag{50}\] so, \[\overline{t^{n}}=\frac{4\,n!}{\pi}\left(\frac{L^{2}}{\pi^{2}D}\right)^{n}\sum_{m=0}^{\infty}\frac{\sin[(2m+1)\pi z_{0}/L]}{(2m+1)^{2n+1}}. \tag{51}\] The main contribution to this sum, even for \(n=1\), comes from the small-\(m\) regime. When \(z_{0}\to 0\) (close to the absorbing boundaries) the argument of the sine function is negligibly small.
Therefore, when \(z_{0}=\epsilon\) the moments are \[\overline{t^{n}}\approx\frac{4n!\epsilon}{L}\left(\frac{L^{2}}{\pi^{2}D}\right)^{n}(1-4^{-n})\zeta(2n). \tag{52}\] Again there is a factor of \(1/L\) between the "single mutant" case and the macroscopic population case, because the chance of a single mutant to avoid extinction on timescales that are \(\mathcal{O}(1)\) and reach macroscopic abundances is proportional to \(1/L\).

Figure 6: The \(n\)-th moment of the extinction time, for a random walker that started at \(z_{0}=1\). In each step the random walker jumps to the left or to the right with probability \(1/2\), and time is incremented by one unit. Moments were calculated for extinction times evaluated in \(10^{5}\) numerical experiments for \(L=100\) (circles) and \(L=1000\) (diamonds). Dashed lines are the corresponding predictions from Eq. (51) with \(D=1/2\).

## VI Stable populations

Now, let's discuss systems that exhibit deterministic dynamics with an attractive fixed point capable of supporting a large population. One example is the logistic system described by the equation \(dn/dt=rn(1-n/K)\), where \(r>0\) (throughout this section, we refer to \(K\) as the number of individuals in the equilibrium state). In such cases, the occurrence of extinctions, even in the presence of stochastic fluctuations, is relatively rare. We can think of the stochastic process as a random walk biased towards the equilibrium state. For extinction (or approaching the zero population point) to happen, the random walker would need to take numerous steps "against the current," an event with an extremely low probability. The stochastic dynamics of a stable system is essentially a random walk biased away from the extinction point. The path to extinction thus consists of a series of implausible steps, where any plausible step leads to an increase in the population size. Therefore, the most probable decline path is composed of a consecutive sequence of these implausible steps. Under pure demographic stochasticity, this series requires \(K\) consecutive death events without any birth event, and the likelihood of this decreases exponentially as \(\exp(-c_{1}K)\), where \(c_{1}\) is some coefficient. In cases where environmental variations allow for periods of negative growth rate, the most probable path to extinction involves a long period \(T\) of adverse weather conditions. The duration \(T\) scales logarithmically with \(K\), resulting in the frequency of extinctions, which is exponentially rare in \(T\), decaying as a power-law function of \(K\). These arguments were extensively discussed and presented in detail in [25; 26]. However, beyond the differences in the scaling of the _average_ extinction time with \(K\), stable systems have a common characteristic that determines the _distribution_ around that mean. As mentioned, the extinction event is a rare fluctuation, and the typical timescale associated with the decline, \(T_{d}\sim\ln K\), is much shorter than the persistence time of a system in the asymptotic limit of large \(K\). This separation of timescales, between the decline time and the persistence time, allows us to treat this stochastic process as a binomial process in which, during each increment \(T_{d}\), an extinction event occurs with a tiny probability. If extinction doesn't happen, even if it "almost" happens (the population declines to small abundance), the system recovers and returns to its equilibrium state.
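A minimal Gillespie-type sketch of a stable logistic population with pure demographic stochasticity is given below; the per-capita rates, the values of \(K\), and the sample sizes are illustrative assumptions (larger \(K\) quickly becomes prohibitively slow, as expected from the exponential scaling). The printed ratio of standard deviation to mean should be close to one, reflecting the approximately exponential lifetime distribution discussed above.

```python
import numpy as np

rng = np.random.default_rng(4)

def extinction_time_logistic(K, R=2.0, max_time=1e7):
    """Stable logistic population: per-capita birth rate R and per-capita death
    rate 1 + (R - 1) * n / K, so n* = K is an attracting fixed point.  The
    rates and K values are illustrative; extinction is exponentially rare in K."""
    n, t = K, 0.0
    while n > 0 and t < max_time:
        birth = R * n
        death = n * (1.0 + (R - 1.0) * n / K)
        total = birth + death
        t += rng.exponential(1.0 / total)       # Gillespie waiting time
        n += 1 if rng.random() < birth / total else -1
    return t

if __name__ == "__main__":
    for K in (4, 6, 8):
        times = np.array([extinction_time_logistic(K) for _ in range(500)])
        print(K, "mean:", times.mean(), "std/mean:", times.std() / times.mean())
```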
Therefore, the lifetime distribution of stable systems is simply an exponential distribution with an average equal to the average persistence time, as shown in [26]. Mathematically, extensive efforts have been made to calculate the mean time to extinction and determine its numerical value, including the coefficient \(c_{1}\) mentioned earlier or the prefactor of the exponential term [37; 38; 39; 40]. These studies have revealed that the spectrum of the Markov matrix governing such a stochastic process exhibits several interesting properties. Firstly, it supports an extinction state whose decay rate (log of its eigenvalue) is zero, indicating the absorbing nature of the extinction state. Secondly, there exists a _single_ quasi-stationary state whose decay rate decreases to zero as \(K\) increases. Finally, the decay rates of all other eigenstates are \(\mathcal{O}(1)\), independent of \(K\). These results are in agreement with the qualitative picture illustrated above: starting from an arbitrary initial state, which is a linear combination of many eigenstates of the corresponding Markov matrix, the system converges to the quasi-stationary state on timescales that are \(\mathcal{O}(1)\), and then the survival probability decays exponentially.

## VII Summary and Discussion

Throughout this paper we discussed the extinction time statistics in various generic scenarios. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Scenario & Mean & Variance & Distribution \\ \hline \hline Extinction prone, demographic, density independent (\(p=1\)) & \(\ln N_{0}\) & \(\mathcal{O}(1)\) & Gumbel (Eq. 12) \\ \hline Extinction prone, demographic, \(p>1\) & \(N^{(p-1)/p}\) & \(N^{2(p-1)/p}\) & Skewed (Fig 2) \\ \hline Extinction prone, environmental & \(\ln N_{0}\) & \(\ln N_{0}\) & Inverse Gaussian (Eq. 17) \\ \hline Neutral, demographic, macroscopic population & \(N\) & \(N^{2}\) & (Eq. 32) \\ \hline Neutral, demographic, single mutant & \(\ln N\) & \(N\) & (Eq. 40) \\ \hline Neutral + environmental stochasticity, macroscopic population & \(\ln^{2}N\) & \(\ln^{4}N\) & (Eq. 46) \\ \hline Neutral + environmental stochasticity, single mutant & \(\ln N\) & \(\ln^{3}N\) & (Eq. 51) \\ \hline Stable population, demographic stochasticity & \(\exp(K)\) & \(\exp(2K)\) & Exponential [37; 38; 39] \\ \hline Stable population, environmental stochasticity & \(K^{\alpha}\) (power-law) & \(K^{2\alpha}\) & Exponential [25; 26] \\ \hline \end{tabular} \end{table} Table 1: A summary of the main results.

The main results we derived or quoted are summarized in Table 1. In the mean and in the variance columns of this table we provided only the dependencies of the times on the relevant large parameter, be it the initial population size \(N_{0}\), the total population \(N\) or the population at the attractive fixed point \(K\). Perhaps it is worth starting with this point: what is the determining factor, \(K\), \(N_{0}\), or \(N\)? In a stable system, this factor is \(K\), the number of focal-species individuals in the stable state. It is independent of the initial population size \(N_{0}\), because the system usually flows towards the stable state. Similarly, it has nothing to do with the total carrying capacity \(N\) (how many total individuals, regardless of species, are allowed in the system). In marginal and neutral systems there is no specific abundance for a particular species.
Accordingly, the determining factor is the total carrying capacity of the system, \(N\), because every species has a non-negligible chance of reaching it regardless of its initial size. In contrast, in an exponentially decaying system, the initial condition \(N_{0}\) is the only important factor since the population does not generally increase in size. An exceptional case is when a population undergoes density-dependent extinction dynamics, as demonstrated by the diploid-with-dominance dynamics. In this case, the deterministic extinction time depends only weakly on the initial population size. Therefore, the factor that governs extinction times is the width of the fluctuations-dominated region, where the system exhibits neutral behavior. Consequently, in these cases (\(p>1\)) the important quantity is again \(N\), since it determines the width of the stochastically-dominated zone. The width of the distribution, and the variance-mean ratio, are governed by the stochastic part of the dynamics. When the origin of these fluctuations is demographic and the deterministic forces take the system to extinction, these fluctuations are important only in a narrow region around zero (\(n<n_{c}\)). In the Gumbel case, or by and large when \(p\leq 1\), this implies that the variance-mean ratio goes to zero in the thermodynamic limit. When \(p>1\) two things happen. First, \(n_{c}\) grows as a power of \(N\) (\(n_{c}\sim N^{(p-1)/p}\)), and second, the time required to reach \(n_{c}\), starting from \(N_{0}\), is \(\mathcal{O}(1)\). Therefore, the properties of the distribution of extinction times when \(p>1\) are more or less identical to the corresponding properties of a neutral system with \(N\sim n_{c}\). A significant number of experimental [41; 42; 43] and empirical [44; 45; 46; 47] studies have been dedicated to investigating the distribution of extinction times. However, in order to interpret these results in the context of the archetypal models discussed in this paper, further analysis is required. Nevertheless, we believe that this review article can serve as a point of reference for future analyses of extinction statistics. The key characteristics observed in each study of extinction times, such as their dependence on initial conditions or carrying capacity, first moments, variance-mean ratio etc., can provide valuable insights for classifying the basic dynamics of the system. This classification can then facilitate more detailed examinations, revealing other, system-specific features. Together, these valuable insights possess the potential to significantly enhance our comprehension of the underlying mechanisms that drive extinctions. Such knowledge can play a pivotal role in bolstering conservation efforts and guiding strategic approaches aimed at safeguarding biodiversity and promoting ecosystem stability. **Acknowledgments** We would like to express our gratitude to Stephen P. Ellner for bringing to our attention the experimental works of Drake and his collaborators.
2305.06261
Pseudo-reversing and its application for multiscaling of manifold-valued data
The well-known Wiener's lemma is a valuable statement in harmonic analysis; in the Banach space of functions with absolutely convergent Fourier series, the lemma proposes a sufficient condition for the existence of a pointwise multiplicative inverse. We call the functions that admit an inverse as \emph{reversible}. In this paper, we introduce a simple and efficient method for approximating the inverse of functions, which are not necessarily reversible, with elements from the space. We term this process \emph{pseudo-reversing}. In addition, we define a condition number to measure the reversibility of functions and study the reversibility under pseudo-reversing. Then, we exploit pseudo-reversing to construct a multiscale pyramid transform based on a refinement operator and its pseudo-reverse for analyzing real and manifold-valued data. Finally, we present the properties of the resulting multiscale methods and numerically illustrate different aspects of pseudo-reversing, including the applications of its resulting multiscale transform to data compression and contrast enhancement of manifold-valued sequence.
Wael Mattar, Nir Sharon
2023-05-10T15:47:43Z
http://arxiv.org/abs/2305.06261v1
# Pseudo-reversing and its application for multiscaling of manifold-valued data Wael Mattar and Nir Sharon **Abstract.** The well-known Wiener's lemma is a valuable statement in harmonic analysis; in the Banach space of functions with absolutely convergent Fourier series, the lemma proposes a sufficient condition for the existence of a pointwise multiplicative inverse. We call the functions that admit an inverse as _reversible_. In this paper, we introduce a simple and efficient method for approximating the inverse of functions, which are not necessarily reversible, with elements from the space. We term this process _pseudo-reversing_. In addition, we define a condition number to measure the reversibility of functions and study the reversibility under pseudo-reversing. Then, we exploit pseudo-reversing to construct a multiscale pyramid transform based on a refinement operator and its pseudo-reverse for analyzing real and manifold-valued data. Finally, we present the properties of the resulting multiscale methods and numerically illustrate different aspects of pseudo-reversing, including the applications of its resulting multiscale transform to data compression and contrast enhancement of manifold-valued sequence. **Keywords:** Wiener's lemma; pseudo-reversing; multiscale transform; manifold-valued sequences; compressing manifold data; manifold data enhancement. **Mathematics subject classification:** 42C40; 65G99; 43A99; 65D15.

## 1 Introduction

Manifolds have become ubiquitous in modeling nonlinearities throughout many fields, ranging from science to engineering. In addition, the unceasing increase of sophisticated, modern data sets has brought many challenges in the processing of manifold data, including essential tasks like principal component analysis [22], interpolation [27], integration [9], and even the adaptation of proper neural networks to manifold values, e.g., [2, 10]. Here, we focus on multiscale transforms as a critical component in many applications over manifold data, see [3, 23, 26]. In particular, we aim to construct and analyze a new multiscale transform for manifold data. One foundational statement in Banach algebra theory and harmonic analysis is Wiener's Lemma, e.g., [11, Chapter 5]. The lemma deals with the invertibility and spectrum of operators. In particular, in the Banach space of periodic functions with absolutely convergent Fourier series, the lemma suggests that if a function does not vanish, then there exists a pointwise multiplicative inverse with absolutely convergent Fourier series. These functions, having an inverse in this sense, are essential for multiscale construction, and we term them _reversible_. Their inverse also plays a key role, so the need for reversing arises. Unfortunately, in some cases, reversing is numerically ill-conditioned or even impossible by definition. In such cases, we aspire to suggest an alternative. In this paper, we introduce the notion of pseudo-reversing and describe it in detail. As a natural implication of the terminology, and as one may mathematically expect, the pseudo-reverse of a reversible function coincides with its unique inverse. Conversely, applying this method to a non-reversible function produces a family of functions, depending on a continuous regularization parameter, with an absolutely convergent Fourier series. Each function approximates the corresponding inverse according to the selected regularization.
Then, we study the algebraic properties of the method and introduce a condition number to determine "how reversible" functions are. Once pseudo-reversing is established, we show its application for analyzing real-valued sequences in a multiscale fashion. In the context of multiscale transforms, the importance of Wiener's Lemma is evoked when associating a refinement operator with a sequence in \(\ell_{1}(\mathbb{Z})\), that is, the space of absolutely convergent real-valued bi-infinite sequences. Specifically, given a refinement that meets the condition of reversibility, the lemma guarantees the existence of a corresponding decimation operator. Moreover, a direct result from [29] implies that the calculation of the decimation involves an infinitely supported sequence which in turn can be truncated while maintaining accuracy [23]. The two operators, refinement and decimation, define a pyramid multiscale transform and its inverse transform. In this study, we further generalize pyramid multiscale transforms based on a broader class of refinement operators that do not admit matching decimation operators in an executable form. In particular, we use pseudo-reversing to define the pseudo-reverse of a refinement operator. Prime examples of non-reversible operators appear in the least squares refinements introduced in [7]. Nevertheless, even with reversible operators, if their reverse conditioning is poor, we show that it is preferable to construct their associated pyramid with a pseudo-reverse operator. As one may expect, since our generalization is based on pseudo-reversing, it comes with a cost. We present the analytical properties of the transform and show that the cost emerges in the synthesis algorithm, that is, the inverse transform, and carries undesired inaccuracies. However, we show that under mild conditions, the error is tolerable. With the new linear multiscale transforms in hand, we show how to adapt them to Riemannian manifold-valued data. First, we demonstrate how the manifold-valued transform enjoys results analogous to the linear case. Specifically, we observe how the magnitude of the detail coefficients in the new multiscale representation decays with the scale. Moreover, we estimate the synthesis error from pseudo-reversing analytically for manifolds with non-negative sectional curvature. We conclude the paper with numerical illustrations of pseudo-reversing. First, we show how to use it for constructing a decimation operator for a non-reversible subdivision scheme and the resulting multiscale transform. Then, we move to manifold-valued data and introduce two applications of our transform: contrast enhancement and data compression. The applications are made by systematically manipulating the detail coefficients of manifold-valued sequences. Indeed, the numerical results confirm the theoretical findings. All figures and examples were generated using a Python code package that complements the paper and is available online for reproducibility. The paper is organized as follows. Section 2 lays out the notation and definitions regarding pseudo-reversing and related terms. Section 3 introduces the pyramid transform in its linear setting, while in Section 4 we present the multiscale transform for manifold values. Finally, in Section 5, we describe the numerical examples.

## 2 Pseudo-reversing and polynomials

In this section we briefly revisit Wiener's Lemma and present its classical formulation.
In the Banach space of functions with absolutely convergent Fourier series, the lemma proposes a sufficient condition for the existence of a pointwise multiplicative inverse within the space. We term the functions enjoying an inverse in this sense reversible. Next, we introduce the notion of pseudo-reversing as a method to circumvent the potential non-reversibility of polynomials and describe it in detail. Finally, we present a condition number to measure the reversibility of functions and study how pseudo-reversing improves the reversibility of polynomials. We follow similar notations presented in [11, Chapter 5]. Let \(\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}\) be the unit circle of the complex plane, and denote by \(\mathcal{A}(\mathbb{T})\) the Banach space consisting of all periodic functions \(f(t)=\sum_{k\in\mathbb{Z}}a_{k}e^{2\pi ikt}\) with coefficients \(\boldsymbol{a}\in\ell_{1}(\mathbb{Z})\). We endow \(\mathcal{A}(\mathbb{T})\) with the norm \[\|f\|_{\mathcal{A}}=\|\boldsymbol{a}\|_{1}=\sum_{k\in\mathbb{Z}}|a_{k}|.\] The space \(\mathcal{A}(\mathbb{T})\) becomes a Banach algebra under pointwise multiplication. In particular, \(\|fg\|_{\mathcal{A}}\leq\|f\|_{\mathcal{A}}\|g\|_{\mathcal{A}}\) for any \(f,g\in\mathcal{A}(\mathbb{T})\). Given a function \(f\in\mathcal{A}(\mathbb{T})\), Wiener's Lemma proposes a sufficient condition for the existence of the inverse \(1/f\) in the space \(\mathcal{A}(\mathbb{T})\). The classical formulation of the lemma is as follows. **Lemma 2.1**.: (Wiener's Lemma). _If \(f\in\mathcal{A}(\mathbb{T})\) and \(f(t)\neq 0\) for all \(t\in\mathbb{T}\), then also \(1/f\in\mathcal{A}(\mathbb{T})\). That is, \(1/f(t)=\sum_{k\in\mathbb{Z}}b_{k}e^{2\pi ikt}\) for some \(\boldsymbol{b}\in\ell_{1}(\mathbb{Z})\)._ Wiener's original proof [31] uses a localization property and a partition of unity argument. An abstract proof of Lemma 2.1 is given by Gel'fand theory, see e.g. [18]. A simple and elementary proof can be found in [25]. **Definition 2.1**.: _A function \(f\in\mathcal{A}(\mathbb{T})\) is called reversible if \(1/f\in\mathcal{A}(\mathbb{T})\). Moreover, \(1/f\) is termed the reverse of \(f\)._ Lemma 2.1 guarantees that functions which do not vanish on the unit circle are reversible. One primary class of functions that is advantageous to reverse is polynomials. Indeed, in various applications, many approximating operators are uniquely characterized by polynomials, e.g., refinement operators [6]. We hence focus on the reversibility of polynomials in \(\mathcal{A}(\mathbb{T})\). Let \(p(z)=a_{n}z^{n}+a_{n-1}z^{n-1}+\cdots+a_{1}z+a_{0}\) be a polynomial of degree \(n\in\mathbb{N}\) with complex-valued coefficients. Without loss of generality, from now on we assume that the coefficients of \(p\) sum to \(1\), that is \(p(1)=\sum_{k=0}^{n}a_{k}=1\). This requirement is compatible with (7), as we will see next in the context of refinement operators, and is frequent in approximation theory, e.g., in interpolation techniques and partition of unity, see [1] for instance. Denote by \(\Lambda\) the set of all zeros of \(p\) including multiplicities. That is, if \(r\) is a root with multiplicity \(m\in\mathbb{N}\) then \(r\) appears \(m\) times in \(\Lambda\). By the complete factorization theorem, we can write \[p(z)=C(p)\prod_{r\in\Lambda}(z-r)\] where \(C(p)\) is the leading coefficient of \(p\). This algebraic expression provides a flexible framework for manipulating the zeros of \(p\). The following definition introduces the pseudo-reverse of \(p\).
**Definition 2.2**.: _For some \(\xi>0\), the pseudo-reverse of a polynomial \(p\) is defined as_ \[p_{\xi}^{\dagger}(z)=\left(C(p_{\xi}^{\dagger})\prod_{r\in\Lambda\setminus\mathbb{ T}}(z-r)\prod_{r\in\Lambda\cap\mathbb{T}}(z-(1+\xi)r)\right)^{-1} \tag{1}\] _where \(C(p_{\xi}^{\dagger})\) is a constant depending on \(\xi\) determined by \(p_{\xi}^{\dagger}(1)=1\)._ Note that the pseudo-reverse \(p_{\xi}^{\dagger}\) of a polynomial \(p\) is uniquely determined by the constant \(\xi\), and is not a polynomial unless \(p(z)\equiv 1\). Moreover, one can easily see that if a polynomial \(p\) does not vanish on the unit circle \(\mathbb{T}\), then its pseudo-reverse coincides with its reverse. That is, \(p_{\xi}^{\dagger}=1/p\) for any \(\xi\). The requirement that \(p_{\xi}^{\dagger}(1)=1\) is proposed to ensure the equality \(p_{\xi}^{\dagger}(z)p(z)=1\) at least for \(z=1\). We denote the inverted term on the right hand side of (1), that is the term within the parenthesis by \(p_{\xi}^{-\dagger}\). Geometrically speaking, \(p_{\xi}^{-\dagger}\) approximates \(p\) by displacing its zeros \(\Lambda\cap\mathbb{T}\) along the rays connecting the origin with the zeros, outwards, with a displacement parameter \(\xi\). Hence, by Wiener's Lemma 2.1 the polynomial \(p_{\xi}^{-\dagger}\) is reversible and \(p_{\xi}^{\dagger}\in\mathcal{A}(\mathbb{T})\). The perturbations between the polynomial coefficients of \(p\) and \(p_{\xi}^{-\dagger}\) can be expressed in a closed form with respect to \(\xi\), but the analytical evaluations are not essential to our work. However, it is worth mentioning that the perturbations are monotonic with respect to the power of the argument \(z\). In particular, the perturbation between the coefficient of \(z^{j}\) in \(p\) and the coefficient of \(z^{j}\) in \(p_{\xi}^{-\dagger}\) increases when \(j\) decreases. Figure 1 illustrates the computations behind pseudo-reversing (1). We proceed with two examples. **Example 2.1**.: _Consider the polynomial \(p(z)=(z^{2}+1)/2\) which vanishes for \(z=\pm i\in\mathbb{T}\). The pseudo reverse \(p_{\xi}^{\dagger}\) of \(p\) calculated via (1) is then_ \[p_{\xi}^{\dagger}(z)=\frac{2+2\xi+\xi^{2}}{z^{2}+(1+\xi)^{2}},\quad\xi>0.\] **Example 2.2**.: _Consider the polynomial \(p(z)=(z^{2}+z+1)/3\) which vanishes for \(z=-1/2\pm i\sqrt{3}/2\in\mathbb{T}\). The pseudo reverse \(p_{\xi}^{\dagger}\) of \(p\) calculated via (1) is then_ \[p_{\xi}^{\dagger}(z)=\frac{3+3\xi+\xi^{2}}{z^{2}+z(1+\xi)+(1+\xi)^{2}},\quad \xi>0.\] We now present some properties of pseudo-reversing through a series of useful propositions. **Proposition 2.2**.: _If the polynomial \(p\) has real coefficients, then so does the polynomial \(p_{\xi}^{-\dagger}\)._ Figure 1: The computations behind pseudo-reversing polynomials (1). To pseudo-reverse a polynomial \(p(z)\), we first approximate it with \(p_{\xi}^{-\dagger}(z)\) by pushing its zeros with modulus \(1\), away from the unit circle, and then reversing to get \(p_{\xi}^{\dagger}(z)\). Proof.: The assumption implies that if \(r\in\Lambda\) is a zero of \(p\), then so is the conjugate \(\bar{r}\in\Lambda\). Moreover, the nature of pseudo-reversing (1) preserves the conjugacy property of the zeros of \(p_{\xi}^{-\dagger}\). **Proposition 2.3**.: _For any polynomial \(p\), the product \(p_{\xi}^{\dagger}p\) converges in norm to \(1\) as \(\xi\) approaches \(0^{+}\). 
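A minimal numpy sketch of the computation behind Definition 2.2 is given below; it is not part of the accompanying code package, and the function and variable names are ours. For the polynomial of Example 2.1 it reproduces \(p_{\xi}^{-\dagger}(z)=(z^{2}+(1+\xi)^{2})/(2+2\xi+\xi^{2})\), whose reciprocal is the pseudo-reverse \(p_{\xi}^{\dagger}\).

```python
import numpy as np

def pseudo_reverse_denominator(coeffs, xi, tol=1e-9):
    """Compute the polynomial p_xi^{-dagger} of Definition 2.2: push the roots
    of p lying on the unit circle outward by a factor (1 + xi) and renormalize
    so that the result equals 1 at z = 1.  `coeffs` are ordered from the
    highest power to the constant term (numpy convention)."""
    roots = np.roots(coeffs)
    moved = np.where(np.abs(np.abs(roots) - 1.0) < tol, (1.0 + xi) * roots, roots)
    q = np.poly(moved)                      # monic polynomial with shifted roots
    q = q / np.polyval(q, 1.0)              # enforce p_xi^{-dagger}(1) = 1
    return q

if __name__ == "__main__":
    # Example 2.1: p(z) = (z^2 + 1)/2, zeros at +/- i on the unit circle
    p = np.array([0.5, 0.0, 0.5])
    xi = 0.1
    q = pseudo_reverse_denominator(p, xi)
    # expected coefficients: (z^2 + (1+xi)^2) / (2 + 2*xi + xi^2)
    expected = np.array([1.0, 0.0, (1 + xi) ** 2]) / (2 + 2 * xi + xi ** 2)
    print(q, expected)
```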
Namely,_ \[\lim_{\xi\to 0^{+}}\|p_{\xi}^{\dagger}p-1\|_{\mathcal{A}}=0.\] Proof.: Note that the polynomial coefficients of the difference \(p-p_{\xi}^{-\dagger}\) are continuous with respect to the parameter \(\xi\) in the positive vicinity of \(0\). The uniqueness of Fourier series implies that the Fourier coefficients of \(p-p_{\xi}^{-\dagger}\) vanish as \(\xi\to 0^{+}\). Therefore, for any \(\varepsilon>0\) there exists \(\delta(\varepsilon)\) such that \(\|p-p_{\xi}^{-\dagger}\|_{\mathcal{A}}<\varepsilon/\|p_{\xi}^{\dagger}\|_{ \mathcal{A}}\) for \(0<\xi<\delta(\varepsilon)\). Now, since \(\mathcal{A}(\mathbb{T})\) is a Banach algebra, we overall get \[\|p_{\xi}^{\dagger}p-1\|_{\mathcal{A}}=\|p_{\xi}^{\dagger}p-p_{\xi}^{\dagger} p_{\xi}^{-\dagger}\|_{\mathcal{A}}=\|p_{\xi}^{\dagger}(p-p_{\xi}^{-\dagger})\|_{ \mathcal{A}}\leq\|p_{\xi}^{\dagger}\|_{\mathcal{A}}\|p-p_{\xi}^{-\dagger}\|_{ \mathcal{A}}<\varepsilon\] as required. **Proposition 2.4**.: _If all zeros of the polynomial \(p\) are on the unit circle, then \(p_{\xi}^{\dagger}(z)\) converges uniformly to \(1\) as \(\xi\) approaches \(\infty\) on every compact subset of \(\mathbb{C}\)._ Proof.: The assumption implies that \(\Lambda\subset\mathbb{T}\). Therefore, as \(\xi\) approaches infinity we have \[\lim_{\xi\to\infty}p_{\xi}^{\dagger}(z)=\lim_{\xi\to\infty}\frac{\prod_{r\in \Lambda}(1-(1+\xi)r)}{\prod_{r\in\Lambda}(z-(1+\xi)r)}=\lim_{\xi\to\infty} \prod_{r\in\Lambda}\frac{1-r-\xi r}{z-r-\xi r}=1\] for all \(z\) in any compact subset of \(\mathbb{C}\). The notion of pseudo-reversing induces the necessity of proposing a condition number to quantify the reversibility of functions in \(\mathcal{A}(\mathbb{T})\). Conventionally, the condition number of a non-reversible function should take the value \(\infty\), whereas the "best" reversible function should take the value \(1\). Inspired by the results of [29], the next definition introduces such a condition. **Definition 2.3**.: _The reversibility condition \(\kappa:\mathcal{A}(\mathbb{T})\to[1,\infty]\) acting on a function \(f\in\mathcal{A}(\mathbb{T})\) is given by_ \[\kappa(f)=\frac{\sup_{z\in\mathbb{T}}|f(z)|}{\inf_{z\in\mathbb{T}}|f(z)|}, \tag{2}\] _with the convention \(\kappa(f)=\infty\) for functions with \(\inf_{z\in\mathbb{T}}|f(z)|=0\)._ The nature of this definition implies that \(\kappa(f)\) is well defined for any \(f\in\mathcal{A}(\mathbb{T})\) and returns values in \([1,\infty]\). Moreover, the condition number \(\kappa(f)\) is invariant under operations that preserve the ratio between the sup and inf of \(f\) over unit circle \(\mathbb{T}\), e.g., under rotations, \(\kappa(e^{i\theta}f)=\kappa(f)\) for any \(\theta\in[0,2\pi)\). Furthermore, \(\kappa\) is submultiplicative; for any two functions \(f\) and \(g\) we have \(\kappa(fg)\leq\kappa(f)\kappa(g)\). Meaning that, reversing a product of functions would not be worse than reversing each factor solely. The reversibility condition \(\kappa(f)\) in (2) of a function \(f\) is proportional to \(\|1/f\|_{\mathcal{A}}\). In particular, better reversibility implies faster convergence in Fourier coefficients of the inverse. Indeed, the consistency between \(\kappa(f)\) and \(\|1/f\|_{\mathcal{A}}\) for any function \(f\) can be evidently seen in [29]. We formulate the relation by the following corollary. **Corollary 2.5**.: _Let \(f(t)=\sum_{k\in\mathbb{Z}}a_{k}e^{2\pi\mathrm{i}kt}\in\mathcal{A}(\mathbb{T})\) be a positive function on the unit circle \(\mathbb{T}\). 
Assume that \(f\) is \(s\)-banded, that is, \(a_{k}=0\) for all \(|k|>s\). Fix_ \[\lambda=\left(\frac{\sqrt{\kappa(f)}-1}{\sqrt{\kappa(f)}+1}\right)^{1/s}\quad \text{and}\quad C=\frac{1}{\inf_{t\in\mathbb{T}}|f(t)|}\max\left\{1,\frac{(1+ \sqrt{\kappa(f)})^{2}}{2\kappa(f)}\right\}\] _where \(\kappa(f)\) is the condition number (2) of \(f\). Denote by \(1/f(t)=\sum_{k\in\mathbb{Z}}b_{k}e^{2\pi\mathrm{i}kt}\). Then,_ \[|b_{k}|\leq C\lambda^{|k|},\quad k\in\mathbb{Z}.\] _Moreover, the coefficients \(b_{k}\) are in general not banded._ The following corollary shows that pseudo-reversing (1) improves the reversibility condition. **Corollary 2.6**.: _Let \(p\) be a polynomial with \(n\in\mathbb{N}\) zeros all on the unit circle. Then,_ \[\kappa(p_{\xi}^{-\dagger})=\frac{\sup_{z\in\mathbb{T}}|\prod_{r\in\Lambda}(z- (1+\xi)r)|}{\inf_{z\in\mathbb{T}}|\prod_{r\in\Lambda}(z-(1+\xi)r)|}\leq\prod_ {r\in\Lambda}\frac{\sup_{z\in\mathbb{T}}|z-(1+\xi)r|}{\inf_{z\in\mathbb{T}}|z -(1+\xi)r|}=(1+2/\xi)^{n}\] _where \(\xi\) is the pseudo-reversing parameter. Hence, the more we push the zeros of \(p\) away from the unit circle, the more reversible the polynomial \(p_{\xi}^{-\dagger}\) becomes. Consequently, using Corollary 2.5 we can get an estimation of \(\|p_{\xi}^{\dagger}\|_{\mathcal{A}}\) with respect to \(\xi\). In particular, the norm \(\|p_{\xi}^{\dagger}\|_{\mathcal{A}}\) is bounded by a monotonically decreasing expression depending on \(\xi\)._ A natural implication of Corollary 2.6 and Proposition 2.3 is that there exists a trade-off between how well we approximate \(p\) with \(p_{\xi}^{-\dagger}\), and how reversible \(p_{\xi}^{-\dagger}\) becomes. The question of finding the optimal parameter \(\xi\), which simultaneously minimizes the perturbation \(p-p_{\xi}^{-\dagger}\) and \(\kappa(p_{\xi}^{-\dagger})\), can be answered by numerical tests, as we will see in Section 5. We conclude the section by remarking that pseudo-reversing can be applied to analytic functions. In the following sections, we will see a use for pseudo-reversing in the context of multiscaling. **Remark 2.1**.: _Let \(f\in\mathcal{A}(\mathbb{T})\) be an analytic function on the unit circle. We describe how to pseudo-reverse \(f\) with some arbitrary error. First, we represent \(f\) by its Laurent series, which converges uniformly to \(f\) in some compact annulus containing the unit circle. Thanks to the analyticity of \(f\), its zeros are guaranteed to be isolated. Then, we truncate the power series to get a polynomial approximating \(f\) to any desirable degree. Finally, the polynomial can be reversed via (1). The result can be considered as the pseudo-reverse of \(f\)._

## 3 Linear pyramid transform based on pseudo-reversing

In this section we present the quintessential ideas and notations needed to realize our novel pyramid transform. Then, we use the notion of pseudo-reversing from Section 2 to introduce the new pyramid transform. Finally, we study its analytical properties.
### Background

Multiscale transforms decompose a real-valued sequence \(\mathbf{c}^{(J)}=\{c_{k}^{(J)}\in\mathbb{R}\ \big{|}\ k\in 2^{-J}\mathbb{Z}\}\) given over the dyadic grid of scale \(J\in\mathbb{N}\), to a pyramid of the form \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) where \(\mathbf{c}^{(0)}\) is a coarse approximation of \(\mathbf{c}^{(J)}\) given on the integers \(\mathbb{Z}\), and \(\mathbf{d}^{(\ell)}\), \(\ell=1,\ldots,J\) are the _detail coefficients_ associated with the dyadic grids \(2^{-\ell}\mathbb{Z}\), respectively. We focus on transforms that involve refinement operators \(\mathcal{S}\) as upsampling operators, and decimation operators \(\mathcal{D}\) as their downsampling counterparts. Namely, the multiscale analysis is defined recursively by \[\mathbf{c}^{(\ell-1)}=\mathcal{D}\mathbf{c}^{(\ell)},\quad\mathbf{d}^{(\ell)}=\mathbf{c}^{(\ell)}-\mathcal{S}\mathbf{c}^{(\ell-1)},\quad\ell=1,\ldots,J, \tag{3}\] while the inverse transform, i.e., the multiscale synthesis, is given by \[\mathbf{c}^{(\ell)}=\mathcal{S}\mathbf{c}^{(\ell-1)}+\mathbf{d}^{(\ell)},\quad\ell=1,\ldots,J. \tag{4}\] Practically, the role of the detail coefficients \(\mathbf{d}^{(\ell)}\), \(1\leq\ell\leq J\) in (3) is to store the data needed to reconstruct \(\mathbf{c}^{(\ell)}\), that is, the approximant of \(\mathbf{c}^{(J)}\) at scale \(\ell\), using the coarser approximant \(\mathbf{c}^{(\ell-1)}\) of the predecessor scale \(\ell-1\). Figure 2 illustrates the iterative calculations of the multiscale transform (3) and its inverse (4).

Figure 2: The pyramid transform. On the left, the analysis (3). On the right, the synthesis (4).

Let \(\mathcal{S}_{\mathbf{\alpha}}\) be a linear, binary refinement rule of a univariate subdivision scheme \(\mathcal{S}\), associated with a finitely supported mask \(\mathbf{\alpha}\), and defined by \[\mathcal{S}_{\mathbf{\alpha}}(\mathbf{c})_{j}=\sum_{k\in\mathbb{Z}}\alpha_{j-2k}c_{k},\quad j\in\mathbb{Z}. \tag{5}\] Applying the refinement \(\mathcal{S}_{\mathbf{\alpha}}\) on a sequence \(\mathbf{c}\) associated with the integers yields a sequence \(\mathcal{S}_{\mathbf{\alpha}}(\mathbf{c})\) associated with the values over the refined grid \(2^{-1}\mathbb{Z}\). Depending on the parity of the index \(j\), the refinement rule (5) can be split into two rules. Namely, \[\mathcal{S}_{\mathbf{\alpha}}(\mathbf{c})_{2j}=\sum_{k\in\mathbb{Z}}\alpha_{2k}c_{j-k}\quad\text{and}\quad\mathcal{S}_{\mathbf{\alpha}}(\mathbf{c})_{2j+1}=\sum_{k\in\mathbb{Z}}\alpha_{2k+1}c_{j-k},\quad j\in\mathbb{Z}. \tag{6}\] The refinement rule (5) is termed interpolating if \(\mathcal{S}_{\mathbf{\alpha}}(\mathbf{c})_{2j}=c_{j}\) for all \(j\in\mathbb{Z}\). Moreover, a necessary condition for the convergence of a subdivision scheme with the refinement rule, see e.g. [6], is \[\sum_{k\in\mathbb{Z}}\alpha_{2k}=\sum_{k\in\mathbb{Z}}\alpha_{2k+1}=1 \tag{7}\] which is termed _shift invariance_. Indeed, refining shifted data points with a shift-invariant refinement gives precisely the shifted version of the original refined outcome. With the shift invariance property (7), the rules (6) can be interpreted as moving _center of masses_ of the elements of \(\mathbf{c}\). We assume that any refinement mentioned henceforth is the rule of a convergent subdivision scheme. Given a refinement rule \(\mathcal{S}_{\mathbf{\alpha}}\), we look for a decimation operator \(\mathcal{D}\) such that the detail coefficients \(\mathbf{d}^{(\ell)}\) generated by the multiscale transform (3) vanish at all even indices.
That is, \(d_{2j}^{(\ell)}=0\) for all \(j\in\mathbb{Z}\) and \(\ell=1,\ldots,J\). This property is beneficial for many tasks including data compression as we will see in later sections. If such a decimation operator \(\mathcal{D}\) exists and involves a sequence in \(\ell_{1}(\mathbb{Z})\), then \(\mathcal{S}_{\mathbf{\alpha}}\) is termed _reversible_, and \(\mathcal{D}\) is its _reverse_. This terminology will agree with Definition 2.1 as we will see next. Though, we note here that in [23, 24], such refinement is termed even-reversible. It turns out that the operator \(\mathcal{S}_{\mathbf{\alpha}}\) is reversible if and only if its corresponding reverse \(\mathcal{D}\) is associated with a real-valued sequence \(\mathbf{\gamma}\in\ell_{1}(\mathbb{Z})\) and takes the form \[\mathcal{D}_{\mathbf{\gamma}}(\mathbf{c})_{j}=\sum_{k\in\mathbb{Z}}\gamma_{j-k}c_{2k},\quad j\in\mathbb{Z}, \tag{8}\] for any real-valued sequence \(\mathbf{c}\), while \(\mathbf{\gamma}\) solves the convolutional equation \[(\mathbf{\alpha}\downarrow 2)\ast\mathbf{\gamma}=\mathbf{\delta}, \tag{9}\] where \(\mathbf{\alpha}\downarrow 2\) denotes the even elements of \(\mathbf{\alpha}\), i.e., \((\mathbf{\alpha}\downarrow 2)_{j}=\alpha_{2j}\) for \(j\in\mathbb{Z}\), and \(\mathbf{\delta}\) is the Kronecker delta sequence (\(\delta_{0}=1\) and \(\delta_{j}=0\) for \(j\neq 0\)). Contrary to the refinement rule (5), applying the decimation operator (8) on a sequence \(\mathbf{c}\) associated with the dyadic grid \(2^{-1}\mathbb{Z}\) produces a sequence \(\mathcal{D}_{\mathbf{\gamma}}(\mathbf{c})\) associated with the integers \(\mathbb{Z}\), and hence the term _decimation_. Put simply, the decimation operator convolves the sequence \(\mathbf{\gamma}\) with the even elements of \(\mathbf{c}\). If a solution \(\mathbf{\gamma}\) to (9) exists, then we call the coefficients of \(\mathbf{\gamma}\) the _decimation coefficients_. Moreover, if the refinement is interpolating, that is \(\mathbf{\alpha}\downarrow 2=\mathbf{\delta}\), then \(\mathcal{D}_{\mathbf{\delta}}\) becomes the simple downsampling operator \(\downarrow 2\), returning only the even elements of the input sequence. The following remark is essential to solving (9) and makes the key connection to pseudo-reversing (1) introduced in Section 2. **Remark 3.1**.: _We treat the entries of the sequences appearing in (9) as the Fourier coefficients of functions in \(\mathcal{A}(\mathbb{T})\), and rely on the convolution theorem to solve the equation. In particular, we transfer both sides with the transform \(\mathbf{c}\to\mathbf{c}(z)=\sum_{k\in\mathbb{Z}}c_{k}z^{k}\) to get_ \[\mathbf{\alpha}(z)\mathbf{\gamma}(z)=1. \tag{10}\] _Here we omitted the notation \(\downarrow 2\) from \((\mathbf{\alpha}\downarrow 2)(z)\) for convenience. The function \(\mathbf{c}(z)\) is termed the symbol of \(\mathbf{c}\). In other words, given a compactly supported refinement mask \(\mathbf{\alpha}\) defining the symbol \(\mathbf{\alpha}(z)\in\mathcal{A}(\mathbb{T})\), we look for its reverse \(\mathbf{\gamma}(z)\), as defined in Definition 2.1. The solution \(\mathbf{\gamma}\) of (9) is then the absolutely convergent Fourier coefficients of \(\mathbf{\gamma}(z)\). If \(\mathbf{\alpha}(z)\) is not reversible, then we turn to pseudo-reversing (1) with some parameter \(\xi\) - making a practical use of the notion._ Using Corollary 2.5, the solution \(\mathbf{\gamma}\) of (9) does not have a compact support. This elevates computational challenges. 
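As a concrete illustration of Remark 3.1, the sketch below approximates the decimation coefficients numerically by sampling the reverse of the even sub-symbol on the unit circle and applying an FFT; the even sub-mask \((3/4,1/4)\) is a hypothetical, reversible example (its symbol does not vanish on \(\mathbb{T}\)) and is not taken from [7] or [23]. For a non-reversible symbol one would first apply the pseudo-reversing step of Section 2 to push its unit-circle zeros outward.

```python
import numpy as np

def decimation_coefficients(even_mask, M=1024):
    """Approximate the decimation coefficients gamma of (8)-(9): sample the
    reverse 1/alpha(z) of the even sub-symbol on the unit circle and recover
    its Laurent coefficients with an FFT (coefficients of negative powers are
    aliased to the end of the returned array).  even_mask[k] multiplies z**k."""
    z = np.exp(2j * np.pi * np.arange(M) / M)
    symbol = sum(a * z ** k for k, a in enumerate(even_mask))
    coeffs = np.fft.fft(1.0 / symbol) / M
    return np.real_if_close(np.round(coeffs, 12))

if __name__ == "__main__":
    # hypothetical reversible even sub-mask (3/4, 1/4); its reverse has
    # coefficients gamma_k = (4/3) * (-1/3)**k for k >= 0
    gamma = decimation_coefficients([0.75, 0.25])
    print(gamma[:5])
```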
However, recent study [23] has approximated the decimation operator (8) with operators involving compactly supported coefficients via proper truncation. The study was concluded with decimation operators that are concretely executable, with negligible errors. Only when the solution \(\mathbf{\gamma}\) of (10) is obtained, we are able to employ the refinement operator \(\mathcal{S}_{\mathbf{\alpha}}\) of (5) together with its reverse \(\mathcal{D}_{\mathbf{\gamma}}\) of (8) into the multiscale transform (3) as we will see next. By the nature of this construction, we will indeed have \(d_{2j}^{(\ell)}=0\) for all \(j\in\mathbb{Z}\) and \(\ell=1,\ldots,J\). Inspired by the necessity of using sequences \(\boldsymbol{\alpha}\) and \(\boldsymbol{\gamma}\) that do not particularly satisfy (9), we define the linear operator \(\pi_{\boldsymbol{\gamma}}^{\boldsymbol{\alpha}}:\ell_{\infty}(\mathbb{Z})\to \ell_{\infty}(\mathbb{Z})\), mapping real-valued bounded sequences as follows \[\pi_{\boldsymbol{\gamma}}^{\boldsymbol{\alpha}}(\boldsymbol{c})=[(\mathcal{I}- \mathcal{S}_{\boldsymbol{\alpha}}\mathcal{D}_{\boldsymbol{\gamma}})\boldsymbol {c}]\downarrow 2, \tag{11}\] where \(\mathcal{I}\) is the identity operator. The operator \(\pi_{\boldsymbol{\gamma}}^{\boldsymbol{\alpha}}\) measures the significance of the detail coefficients on the even indices, with one iteration of decomposition (3) when applied to a sequence \(\boldsymbol{c}\). Moreover, if \(\boldsymbol{\alpha}\) and \(\boldsymbol{\gamma}\) satisfy (10), then \(\pi_{\boldsymbol{\gamma}}^{\boldsymbol{\alpha}}\) becomes the trivial zero operator. ### Linear multiscaling Here we introduce a novel family of multiscale transforms similar to (3). What distinguishes our transforms from the ones studied in [23, 24] is that they are based on non-reversible refinement operators. One interesting family of non-reversible refinement operators is the _least squares_ introduced in [7]. This branch of schemes was derived by fitting local least squares polynomials. We first exploit the idea of pseudo-reversing (1) to define the pseudo-reverse of a refinement operator. **Definition 3.1**.: _Let \(\mathcal{S}_{\boldsymbol{\alpha}}\) be a refinement operator as in (5). For \(\xi>0\), the decimation operator \(\mathcal{D}_{\boldsymbol{\gamma}}\) of (8) is the pseudo-reverse of \(\mathcal{S}_{\boldsymbol{\alpha}}\) if_ \[\boldsymbol{\gamma}(z)=\boldsymbol{\alpha}_{\xi}^{\dagger}(z)\] _where \(\boldsymbol{\alpha}(z)\) and \(\boldsymbol{\gamma}(z)\) are defined as in Remark 3.1. i.e., if \(\boldsymbol{\gamma}\) is the pseudo-reverse of \(\boldsymbol{\alpha}\)._ In this definition, \(\boldsymbol{\gamma}\) depends on the parameter \(\xi\) but we omit the latter for convenience. Moreover, we encode the Fourier coefficients of \(\boldsymbol{\alpha}_{\xi}^{-\dagger}(z)\) as the even coefficients of the approximating refinement \(\widetilde{\boldsymbol{\alpha}}\), while the odd values of \(\widetilde{\boldsymbol{\alpha}}\) agree with the odd values of \(\boldsymbol{\alpha}\). Similar to pseudo-reversing functions in \(\mathcal{A}(\mathbb{T})\), the pseudo-reverse of a reversible refinement coincides with its reverse. Figure 3 illustrates the notion of pseudo-reversing refinements, and it is an analogue to Figure 1 with operators replacing functions. 
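The construction of Definition 3.1 can be carried out numerically. The sketch below is only an illustration of the idea and makes two assumptions that are not spelled out in this section: the displacement of (1) is taken to push every zero of the even-part symbol lying on the unit circle radially outward by the factor \(1+\xi\), and the perturbed even coefficients are renormalized so that they sum to \(1\) (playing the role of the normalizing constant in (1)). The decimation coefficients are then recovered as Fourier coefficients of \(1/\widetilde{\boldsymbol{\alpha}}(z)\) sampled on the unit circle.

```python
import numpy as np

def pseudo_reverse(alpha_even, k_min, xi, half_support=20, n_fft=4096, tol=1e-8):
    """Sketch of pseudo-reversing: `alpha_even` are the even mask values
    alpha_{2k}, viewed as the Laurent symbol sum_j alpha_even[j] z**(k_min+j).
    Returns the perturbed even coefficients and a truncated window of the
    decimation coefficients gamma (indices -half_support..half_support)."""
    # zeros of the polynomial part of the symbol
    roots = np.roots(np.asarray(alpha_even, dtype=float)[::-1])
    # push zeros lying on the unit circle radially outward (assumed rule)
    moved = [r * (1 + xi) if abs(abs(r) - 1.0) < tol else r for r in roots]
    tilde = np.real(np.poly(moved))[::-1]
    tilde = tilde / tilde.sum()                      # keep shift invariance (7)
    # sample a~(z) on the unit circle and read off the Fourier coefficients of 1/a~
    z = np.exp(2j * np.pi * np.arange(n_fft) / n_fft)
    a_vals = z**k_min * np.polyval(tilde[::-1], z)
    gamma_full = np.fft.fft(1.0 / a_vals) / n_fft    # gamma_k sits at index k (mod n_fft)
    gamma = np.concatenate([gamma_full[-half_support:],
                            gamma_full[:half_support + 1]]).real
    return tilde, gamma
```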
**Proposition 3.1**.: _If \(\mathcal{S}_{\boldsymbol{\alpha}}\) is shift invariant and convergent, then so is \(\mathcal{S}_{\widetilde{\boldsymbol{\alpha}}}\) for small values of \(\xi\)._ Proof.: The odd coefficients of the mask \(\widetilde{\boldsymbol{\alpha}}\) are similar to the odd coefficients of \(\boldsymbol{\alpha}\), while the even coefficients of \(\widetilde{\boldsymbol{\alpha}}\) sum to \(1\) due to the constant \(C(\boldsymbol{\alpha}^{\dagger})\) appearing in (1). This implies the shift invariance (7) of \(\mathcal{S}_{\widetilde{\boldsymbol{\alpha}}}\). As for convergence, we refer to [6] for the analysis and present here a proof sketch. Since \(\mathcal{S}_{\boldsymbol{\alpha}}\) is convergent, then the refinement rule \(\mathcal{S}_{\boldsymbol{\beta}}\), where the mask \(\boldsymbol{\beta}\) is determined by \(\boldsymbol{\alpha}(z)=(1+z)\boldsymbol{\beta}(z)\), is contractive. The contractivity of the refinement operator \(\mathcal{S}_{\widetilde{\boldsymbol{\beta}}}\) corresponding to \(\mathcal{S}_{\widetilde{\boldsymbol{\alpha}}}\), where \(\widetilde{\boldsymbol{\alpha}}(z)=(1+z)\widetilde{\boldsymbol{\beta}}(z)\), is then naturally inherited by the continuity of \(\widetilde{\boldsymbol{\alpha}}(z)\) around \(\xi=0^{+}\), see Proposition 2.3. Figure 3: Illustration of pseudo-reversing refinement operators. We are now in position to introduce our new multiscale transforms based on non-reversible refinement operators. **Definition 3.2**.: _Let \(\mathbf{c}^{(J)}\) be a real-valued sequence of scale \(J\in\mathbb{N}\), associated on the dyadic grid \(2^{-J}\mathbb{Z}\), and let \(\mathcal{S}_{\mathbf{\alpha}}\) be a refinement rule (5). The multiscale transform based on \(\mathcal{S}_{\mathbf{\alpha}}\) is defined by_ \[\mathbf{c}^{(\ell-1)}=\mathcal{D}_{\mathbf{\gamma}}\mathbf{c}^{(\ell)},\quad\mathbf{d}^{(\ell) }=\mathbf{c}^{(\ell)}-\mathcal{S}_{\mathbf{\alpha}}\mathbf{c}^{(\ell-1)},\quad\ell=1, \ldots,J, \tag{12}\] _where \(\mathcal{D}_{\mathbf{\gamma}}\) is the pseudo-reverse of \(\mathcal{S}_{\mathbf{\alpha}}\) for some \(\xi>0\). The inverse transform of (12) is defined by iterating (4) for \(\ell=1\ldots,J\)._ The major difference between the multiscale transforms (3) and (12) is that the latter based on pseudo-reversing. Therefore, it enables the employment of non-reversible refinements as upsampling operators. Next, we analyze the analytical properties of the pyramid transform (12). ### Analytical properties In multiscale analysis, and in time-frequency analysis in general, one usually wants to have a stable and perfect reconstruction. This is useful for many numerical tasks, since, we typically manipulate the detail coefficients and then reconstruct using the inverse transform. Perfect reconstruction and stability guarantee the validity of such algorithms. In the context of our multiscale transform (12) and its inverse, perfect reconstruction means the ability to set half of the detail coefficients of each layer to zero, without losing any information after the synthesis. This property is beneficial for _data compression_ since we can avoid storing half of the information. Therefore, half of the detail coefficients of each layer has to exhibit statistical redundancy. The cost of using \(\mathcal{S}_{\mathbf{\alpha}}\) with its pseudo-reverse \(\mathcal{D}_{\mathbf{\gamma}}\) in (12) arises as a violation in the property of having zero detail coefficients on the even indices. 
Namely, \(\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}\) of (11) is not the zero operator. Consequently, requiring small detail coefficients on the even indices raises the necessity to study the operator norm \(\|\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}\|_{\infty}\), where \(\mathbf{\alpha}\) and \(\mathbf{\gamma}\) do not satisfy (10). The following lemma provides a global upper bound on the detail coefficients on the even indices. **Lemma 3.2**.: _Let \(\mathbf{c}^{(J)}\) be a real-valued sequence, and let \(\mathcal{S}_{\mathbf{\alpha}}\) be a refinement operator (5). Denote by \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) the multiscale representation (12) of the sequence. Then,_ \[|d_{2j}^{(\ell)}|\leq\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}\|\mathbf{ \gamma}\|_{1}, \tag{13}\] _for all \(j\in\mathbb{Z}\) and \(\ell=1,\ldots,J\), where \(\widetilde{\mathbf{\alpha}}\) is the approximating mask, and \(\|\mathbf{\psi}\|_{\infty}=\sup_{j\in\mathbb{Z}}|\psi_{j}|\)._ Proof.: Since \(\mathbf{d}^{(\ell)}\downarrow 2\) is calculated by \(\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}(\mathbf{c}^{(\ell)})\) for \(\ell=1,\ldots,J\), it is sufficient to bound the operator norm \(\|\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}\|_{\infty}\). For any real-valued bounded sequence \(\mathbf{c}\), observe that \[\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}(\mathbf{c}) =[(\mathcal{I}-\mathcal{S}_{\mathbf{\alpha}}\mathcal{D}_{\mathbf{\gamma} })\mathbf{c}]\downarrow 2\] \[=[(\mathcal{I}-\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\mathcal{D}_{ \mathbf{\gamma}}+\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\mathcal{D}_{\mathbf{\gamma}}- \mathcal{S}_{\mathbf{\alpha}}\mathcal{D}_{\mathbf{\gamma}})\mathbf{c}]\downarrow 2\] \[=\pi_{\mathbf{\gamma}}^{\widetilde{\mathbf{\alpha}}}(\mathbf{c})+[(\mathcal{ S}_{\widetilde{\mathbf{\alpha}}}\mathcal{D}_{\mathbf{\gamma}}-\mathcal{S}_{\mathbf{\alpha}} \mathcal{D}_{\mathbf{\gamma}})\mathbf{c}]\downarrow 2\] \[=[(\mathcal{S}_{\widetilde{\mathbf{\alpha}}-\mathbf{\alpha}}\mathcal{D}_{ \mathbf{\gamma}})\mathbf{c}]\downarrow 2.\] The last equality is obtained by the fact that \(\mathbf{\gamma}\) is the reverse of \(\widetilde{\mathbf{\alpha}}\), and by the linearity of the refinement (5). Now, by taking the \(\ell_{\infty}\) norm we get \[\|\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}(\mathbf{c})\|_{\infty}\leq\|\mathcal{S}_{ \widetilde{\mathbf{\alpha}}-\mathbf{\alpha}}(\mathcal{D}_{\mathbf{\gamma}}\mathbf{c})\|_{ \infty}\leq\sup_{j\in\mathbb{Z}}\sum_{k}|\widetilde{\alpha}_{j-2k}-\alpha_{j-2 k}|\sum_{n}|\gamma_{k-n}|\cdot|c_{2n}|\leq\|\widetilde{\mathbf{\alpha}}-\mathbf{ \alpha}\|_{\infty}\|\mathbf{\gamma}\|_{1}\|\mathbf{c}\|_{\infty}.\] Eventually, the operator norm of \(\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}\) is then \[\|\pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}\|_{\infty}=\sup_{\mathbf{c}\neq\mathbf{0}}\frac{\| \pi_{\mathbf{\gamma}}^{\mathbf{\alpha}}(\mathbf{c})\|_{\infty}}{\|\mathbf{c}\|_{\infty}}\leq \|\widetilde{\mathbf{\alpha}}-\mathbf{\alpha}\|_{\infty}\|\mathbf{\gamma}\|_{1},\] as required. Lemma (3.2) offers the universal upper bound \(\|\widetilde{\mathbf{\alpha}}-\mathbf{\alpha}\|_{\infty}\|\mathbf{\gamma}\|_{1}\) for the even detail coefficients of (12). Indeed, there is a trade-off between the quantities \(\|\widetilde{\mathbf{\alpha}}-\mathbf{\alpha}\|_{\infty}\) and \(\|\mathbf{\gamma}\|_{1}\). 
In particular, if \(\xi\) grows, then so does the perturbation \(\|\widetilde{\mathbf{\alpha}}-\mathbf{\alpha}\|_{\infty}\), and according to Corollary 2.6 the norm \(\|\mathbf{\gamma}\|_{1}\) gets smaller. To provide a more precise bound on the detail coefficients on the even indices, recall that, since \(\widetilde{\mathbf{\alpha}}\) and \(\mathbf{\gamma}\) satisfy (10), then using the pair \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\) in the multiscale transform (12) result in zero detail coefficients on the even indices. The next lemma compares the detail coefficients of transform (12) when the pairs \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\) and \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\) are separately used. To this purpose, we introduce the operator \(\Delta\) which acts on sequences and computes the maximal consecutive difference. Namely, \(\Delta\mathbf{c}=\sup_{j\in\mathbb{Z}}|c_{j+1}-c_{j}|\) for any real sequence \(\mathbf{c}\). **Lemma 3.3**.: _Let \(\mathbf{c}^{(J)}\) be a real-valued sequence, and let \(\mathcal{S}_{\mathbf{\alpha}}\) be a refinement operator (5). Denote by \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) and \(\{\mathbf{c}^{(0)};\mathbf{q}^{(1)},\ldots,\mathbf{q}^{(J)}\}\) the multiscale representations (12) of \(\mathbf{c}^{(J)}\) using the pairs \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\) and \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\), respectively. Assume that \(K_{\mathbf{\gamma}}=2\sum_{k}|\gamma_{k}||k|<\infty\). Then,_ \[\|\mathbf{d}^{(\ell)}-\mathbf{q}^{(\ell)}\|_{\infty}\leq\big{(}K_{\mathbf{\alpha},\mathbf{ \gamma}}+K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma}}\big{)}\Delta\mathbf{c}^{(\ell)} \tag{14}\] _for \(\ell=1,\ldots,J\), and the positive constants_ \[K_{\mathbf{\alpha},\mathbf{\gamma}}=K_{\mathbf{\gamma}}\|\mathbf{\alpha}\|_{1}+K_{\mathbf{\alpha} }\|\mathbf{\gamma}\|_{1}\quad\text{and}\quad K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma} }=K_{\mathbf{\gamma}}\|\widetilde{\mathbf{\alpha}}\|_{1}+K_{\widetilde{\mathbf{\alpha}}} \|\mathbf{\gamma}\|_{1}, \tag{15}\] _where \(K_{\mathbf{\alpha}}=\sum_{k}|\alpha_{k}||k|\) and \(K_{\widetilde{\mathbf{\alpha}}}=\sum_{k}|\widetilde{\alpha}_{k}||k|\)._ Proof.: We explicitly calculate a general term of \(\mathbf{q}^{(\ell)}\). 
For \(\ell=1,\ldots,J\) and \(j\in\mathbb{Z}\) we have \[q_{j}^{(\ell)} =c_{j}^{(\ell)}-\sum_{k}\widetilde{\alpha}_{j-2k}c_{k}^{(\ell-1)}= \sum_{k}\widetilde{\alpha}_{j-2k}\big{(}c_{j}^{(\ell)}-(\mathcal{D}_{\mathbf{ \gamma}}\mathbf{c}^{(\ell)})_{k}\big{)}\] \[=\sum_{k}\widetilde{\alpha}_{j-2k}\big{(}c_{j}^{(\ell)}-\sum_{n} \gamma_{k-n}c_{2n}^{(\ell)}\big{)}=\sum_{k}\widetilde{\alpha}_{j-2k}\sum_{n} \gamma_{k-n}(c_{j}^{(\ell)}-c_{2n}^{(\ell)}).\] Consequently, \[|q_{j}^{(\ell)}| \leq\sum_{k}|\widetilde{\alpha}_{j-2k}|\sum_{n}|\gamma_{k-n}||c_{ j}^{(\ell)}-c_{2n}^{(\ell)}|\] \[\leq\sum_{k}|\widetilde{\alpha}_{j-2k}|\bigg{(}\sum_{n}|\gamma_{k- n}||2n-j|\bigg{)}\Delta\mathbf{c}^{(\ell)}\] \[\leq\sum_{k}|\widetilde{\alpha}_{j-2k}|\bigg{(}\sum_{n}|\gamma_{k -n}|\big{(}|2n-2k|+|2k-j|\big{)}\bigg{)}\Delta\mathbf{c}^{(\ell)}\] \[\leq\sum_{k}|\widetilde{\alpha}_{j-2k}|\bigg{(}K_{\mathbf{\gamma}}+|2k -j|\|\mathbf{\gamma}\|_{1}\bigg{)}\Delta\mathbf{c}^{(\ell)}\] \[\leq\big{(}K_{\mathbf{\gamma}}\|\widetilde{\mathbf{\alpha}}\|_{1}+K_{ \widetilde{\mathbf{\alpha}}}\|\mathbf{\gamma}\|_{1}\big{)}\|\Delta\mathbf{c}^{(\ell)}\|_{ \infty}=K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma}}\Delta\mathbf{c}^{(\ell)}.\] Therefore, \(\|\mathbf{q}^{(\ell)}\|_{\infty}\leq K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma}}\Delta \mathbf{c}^{(\ell)}\). Similar arguments (where \(\widetilde{\mathbf{\alpha}}\) is replaced with \(\mathbf{\alpha}\)) give \(\|\mathbf{d}^{(\ell)}\|_{\infty}\leq K_{\mathbf{\alpha},\mathbf{\gamma}}\Delta\mathbf{c}^{( \ell)}\). Note that the constants \(K_{\mathbf{\alpha}}\) and \(K_{\widetilde{\mathbf{\alpha}}}\) are finite since the corresponding masks share the same finite support. By combining both estimates into the triangle inequality we obtain the required. Lemma 3.3 induces the following theorem. **Theorem 3.4.** _Let \(\mathbf{c}^{(J)}\) be a real-valued sequence sampled from a differentiable function \(f:\mathbb{R}\to\mathbb{R}\), with a bounded derivative, over the equispaced grid \(2^{-J}\mathbb{Z}\). Let \(\mathcal{S}_{\mathbf{\alpha}}\) be a refinement operator (5). Denote by \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) and \(\{\mathbf{c}^{(0)};\mathbf{q}^{(1)},\ldots,\mathbf{q}^{(J)}\}\) the multiscale representations (12) of \(\mathbf{c}^{(J)}\) using the pairs \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\) and \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\), respectively. Then,_ \[\|\mathbf{d}^{(\ell)}-\mathbf{q}^{(\ell)}\|_{\infty}\leq\|\mathbf{\gamma}\|_{1}^{J}\|f^{ \prime}\|_{\infty}\big{(}K_{\mathbf{\alpha},\mathbf{\gamma}}+K_{\widetilde{\mathbf{\alpha }},\mathbf{\gamma}}\big{)}(2\|\mathbf{\gamma}\|_{1})^{-\ell} \tag{16}\] _for \(\ell=1,\ldots,J\), where \(\|f^{\prime}\|_{\infty}=\sup_{x\in\mathbb{R}}|f^{\prime}(x)|\) and the constants \(K_{\mathbf{\alpha},\mathbf{\gamma}}\) and \(K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma}}\) of (15)._ Proof.: Since \(f\) is differentiable and bounded, then by the mean value theorem, for all \(j\in\mathbb{Z}\) and a fixed \(J\in\mathbb{N}\), there exists \(x_{j}\) in the open segment connecting the parametrizations of \(c_{j}^{(J)}\) and \(c_{j+1}^{(J)}\), such that \[|c_{j+1}^{(J)}-c_{j}^{(J)}|=2^{-J}|f^{\prime}(x_{j})|.\] Taking the \(\sup_{j\in\mathbb{Z}}\) over both sides gives the estimation \(\Delta\mathbf{c}^{(J)}\leq 2^{-J}\|f^{\prime}\|_{\infty}\). 
Now, note that the decimation operator (8) can be written as \(\mathcal{D}_{\mathbf{\gamma}}\mathbf{c}=\mathbf{\gamma}*(\mathbf{c}\downarrow 2)\) for any real-valued sequence \(\mathbf{c}\). Moreover, since the convolution \(*\) commutes with \(\Delta\) we get \[\Delta\mathbf{c}^{(\ell-1)}=\Delta(\mathbf{\gamma}*(\mathbf{c}^{(\ell)}\downarrow 2))\leq\| \mathbf{\gamma}\|_{1}\Delta(\mathbf{c}^{(\ell)}\downarrow 2)\leq 2\|\mathbf{\gamma}\|_{1} \Delta\mathbf{c}^{(\ell)}.\] Iterating the latter inequality \(J-\ell\) many times we get \[\Delta\mathbf{c}^{(\ell)}\leq(2\|\mathbf{\gamma}\|_{1})^{J-\ell}\Delta\mathbf{c}^{(J)} \leq\|\mathbf{\gamma}\|_{1}^{J}(2\|\mathbf{\gamma}\|_{1})^{-\ell}\|f^{\prime}\|_{\infty}.\] This estimation together with (14) yield the required. Theorem 3.4 implies that the effect of using \(\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\) instead of \(\mathcal{S}_{\mathbf{\alpha}}\) in (12) is more pronounced when comparing the corresponding details on coarse scales. Consequently, the phenomenon of having small detail coefficients on the even indices has more room to be violated on coarse scales. In the following theorem we analyze the reconstruction error, that is, the difference between the synthesized pyramids when using a non-reversible refinement \(\mathcal{S}_{\mathbf{\alpha}}\) and its reversible approximant \(\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\) in parallel. Recall that a real-valued sequence \(\mathbf{c}^{(J)}\) is perfectly synthesized via (4) after its analysis (12) when the pair \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\) is used. To avoid abuse of notation we denote the synthesized sequence for the pair \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\) by \(\mathbf{\zeta}^{(J)}\). **Theorem 3.5.** _Let \(\mathbf{c}^{(J)}\) be a real-valued sequence, and let \(\mathcal{S}_{\mathbf{\alpha}}\) be a refinement operator (5). Denote by \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) its multiscale representation (12) using the pair \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\). Assume \(\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}\leq 1/\sigma\) where \(\sigma\) is the support size of the even mask of \(\mathbf{\alpha}\). Then, the synthesis sequence \(\mathbf{\zeta}^{(J)}\) obeys_ \[\|\mathbf{c}^{(J)}-\mathbf{\zeta}^{(J)}\|_{\infty}\leq C\sum_{k=1}^{J}\|\mathbf{d}^{(k)}- \mathbf{q}^{(k)}\|_{\infty}+M\sum_{k=0}^{J}\sigma^{k}\|\mathbf{\alpha}-\widetilde{\bm {\alpha}}\|_{\infty}^{k} \tag{17}\] _for some constants \(C>0\) and \(M>0\), where \(\mathbf{q}^{(k)}\), \(k=1,\ldots,J\) are the detail coefficients of \(\mathbf{c}^{(J)}\) generated by (12) with the pair \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\)._ Proof.: The requirement \(\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}\leq 1/\sigma\) implies that the linear operator \(\mathcal{S}_{\mathbf{\alpha}}-\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\) which acts on sequences, is contractive. 
Namely, its operator norm is less than one since \[\|\mathcal{S}_{\mathbf{\alpha}}-\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\|_{\infty}=\| \mathcal{S}_{\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}}\|_{\infty}=\sum_{k}|\alpha_{2 k}-\widetilde{\alpha}_{2k}|\leq\sigma\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{ \infty}\leq 1.\] Consequently, the operators \(\{\mathcal{S}^{j}_{\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}}\}_{j\in\mathbb{N}}\) are contractive as well, with the geometrically decreasing bound \(\|\mathcal{S}^{j}_{\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}}\|_{\infty}\leq\sigma ^{j}\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}^{j}\) for all \(j\in\mathbb{N}\). Now, iterating (4) with the pair \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\) reconstructs \(\mathbf{c}^{(J)}\) as follows. We have \[\mathbf{c}^{(J)}=\mathcal{S}^{J}_{\widetilde{\mathbf{\alpha}}}\mathbf{c}^{(0)}+\sum_{k=1} ^{J}\mathcal{S}^{J-k}_{\widetilde{\mathbf{\alpha}}}\mathbf{q}^{(k)},\] where \(\mathcal{S}^{0}_{\widetilde{\mathbf{\alpha}}}=\mathcal{I}\) is the identity operator, and \(\mathbf{q}^{(k)}\) are the detail coefficients obtained via (12) where \(\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\) is used with its reverse \(\mathcal{D}_{\mathbf{\gamma}}\). Moreover, by iterating equation (4) starting from \(\ell=J\), the reconstructed sequence \(\mathbf{\zeta}^{(J)}\) can be expressed as \[\mathbf{\zeta}^{(J)}=\mathcal{S}^{J}_{\mathbf{\alpha}}\mathbf{c}^{(0)}+\sum_{k=1}^{J} \mathcal{S}^{J-k}_{\mathbf{\alpha}}\mathbf{d}^{(k)}.\] Therefore, by the linearity of the refinement operators we get \[\|\mathbf{c}^{(J)}-\mathbf{\zeta}^{(J)}\|_{\infty} \leq\|\mathcal{S}^{J}_{\mathbf{\alpha}}\mathbf{c}^{(0)}-\mathcal{S}^{J}_{ \widetilde{\mathbf{\alpha}}}\mathbf{c}^{(0)}\|_{\infty}+\sum_{k=1}^{J}\|\mathcal{S}^{ J-k}_{\mathbf{\alpha}}\mathbf{d}^{(k)}-\mathcal{S}^{J-k}_{\widetilde{\mathbf{\alpha}}}\mathbf{q}^{(k )}\|_{\infty}\] \[\leq\|\mathcal{S}^{J}_{\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}}\mathbf{c} ^{(0)}\|_{\infty}+\sum_{k=1}^{J}\|\mathcal{S}^{J-k}_{\mathbf{\alpha}}\mathbf{d}^{(k)}- \mathcal{S}^{J-k}_{\mathbf{\alpha}}\mathbf{q}^{(k)}\|_{\infty}+\|\mathcal{S}^{J-k}_{ \mathbf{\alpha}}\mathbf{q}^{(k)}-\mathcal{S}^{J-k}_{\widetilde{\mathbf{\alpha}}}\mathbf{q}^{(k )}\|_{\infty}\] \[\leq\sigma^{J}\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}^{ J}\|\mathbf{c}^{(0)}\|_{\infty}+\sum_{k=1}^{J}\|\mathcal{S}^{J-k}_{\mathbf{\alpha}}\|_{ \infty}\|\mathbf{d}^{(k)}-\mathbf{q}^{(k)}\|_{\infty}+\sigma^{J-k}\|\mathbf{\alpha}- \widetilde{\mathbf{\alpha}}\|_{\infty}^{J-k}\|\mathbf{q}^{(k)}\|_{\infty}.\] The uniform boundedness principle guarantees the existence of a constant \(C>0\) such that \(\sup_{j\in\mathbb{N}}\|\mathcal{S}^{j}_{\mathbf{\alpha}}\|_{\infty}\leq C\). And, by taking \(M=\max\{\|\mathbf{c}^{(0)}\|_{\infty},\ \|\mathbf{q}^{(k)}\|_{\infty},\ k=1,\ldots,J\}\) the uniform bound of the norms of the pyramid \(\{\mathbf{c}^{(0)};\mathbf{q}^{(1)},\ldots,\mathbf{q}^{(J)}\}\) we eventually obtain \[\|\mathbf{c}^{(J)}-\mathbf{\zeta}^{(J)}\|_{\infty}\leq C\sum_{k=1}^{J}\|\mathbf{d}^{(k)}- \mathbf{q}^{(k)}\|_{\infty}+M\sum_{k=0}^{J}\sigma^{J-k}\|\mathbf{\alpha}-\widetilde{ \mathbf{\alpha}}\|_{\infty}^{J-k},\] as required. The assumption \(\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}\leq 1/\sigma\) appearing in Theorem 3.5 is mild because the maximal perturbation can be bounded as a direct result of Proposition 2.4. 
Moreover, since the upper bound in (17) grows with respect to the scale \(J\), it is possible to reduce its value by considering only few iterations in multiscaling (12), rather than \(J\) times. Specifically, for a fixed number of iterations \(1\leq m<J\), we can decompose \(\mathbf{c}^{(J)}\) via (12) \(m\) many times into the pyramid \(\{\mathbf{c}^{(J-m)};\mathbf{d}^{(J-m+1)},\ldots,\mathbf{d}^{(J)}\}\) and get a better synthesis. In order to have a good synthesis algorithm, that is, small-enough upper bound (17), we impose a priori on the analyzed sequence \(\mathbf{c}^{(J)}\) in Theorem 3.5. Particularly, similar to Theorem 3.4, we assume \(\mathbf{c}^{(J)}\) to be sampled from a differentiable function as the following corollary argues. **Corollary 3.6**.: _Under the conditions of Theorem 3.5, assume that \(\mathbf{c}^{(J)}\) is sampled from a differentiable function \(f:\mathbb{R}\to\mathbb{R}\) with a bounded derivative, over the equispaced grid \(2^{-J}\mathbb{Z}\). Then,_ \[\|\mathbf{c}^{(J)}-\mathbf{\zeta}^{(J)}\|_{\infty}\leq C\|\mathbf{\gamma}\|_{1}^{J}\|f^{ \prime}\|_{\infty}\big{(}K_{\mathbf{\alpha},\mathbf{\gamma}}+K_{\widetilde{\mathbf{\alpha} },\mathbf{\gamma}}\big{)}\sum_{k=1}^{J}(2\|\mathbf{\gamma}\|_{1})^{-k}+M\sum_{k=0}^{J} \sigma^{k}\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}^{k} \tag{18}\] _where the constants \(K_{\mathbf{\alpha},\mathbf{\gamma}}\), \(K_{\widetilde{\mathbf{\alpha}},\mathbf{\gamma}}\), \(C\) and \(M\) appear in (15) and (17). Therefore, if both quantities \(\|f^{\prime}\|_{\infty}\) and \(\|\mathbf{\alpha}-\widetilde{\mathbf{\alpha}}\|_{\infty}\) are small, then iterating (4) is efficient for recovering the analyzed sequence in (12)._ Figure 4 epitomizes the theoretical results of this section; it illustrates the analysis and the synthesis with the pairs \((\mathcal{S}_{\mathbf{\alpha}},\mathcal{D}_{\mathbf{\gamma}})\) and \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\). We conclude the section with two remarks. **Remark 3.2**.: _The method of pseudo-reversing (1) can be a remedy for reversible refinements \(\mathcal{S}_{\mathbf{\alpha}}\) with bad reversibility condition. That is, refinements with high values of \(\kappa(\mathbf{\alpha})\) in (2). Although their reverses involve decimation coefficients \(\mathbf{\gamma}\in\ell_{1}(\mathbb{Z})\), their decay rate may be poor due to Corollary 2.6 and thus requiring a large truncation support [23] for implementation. Indeed, a better decay rate can be enforced by pushing the zeros of \(\mathbf{\alpha}(z)\) that are outside of the unit disk, further from the boundary \(\mathbb{T}\), in a similar manner to pseudo-reversing. Refinements from the family of B-spline [4] subdivision schemes could play as key examples to such operators, as shown in Section 5._ **Remark 3.3**.: _Despite having a perfect synthesis algorithm, the reason we avert using the pair \((\mathcal{S}_{\widetilde{\mathbf{\alpha}}},\mathcal{D}_{\mathbf{\gamma}})\) in multiscale transforms is that the approximated refinement \(\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\) may not inherit the essential properties of the original \(\mathcal{S}_{\mathbf{\alpha}}\), e.g., the capability of some refinements to produce smooth limit functions may be lost when approximating \(\mathcal{S}_{\mathbf{\alpha}}\) with \(\mathcal{S}_{\widetilde{\mathbf{\alpha}}}\)._ ## 4 Multiscaling manifold values In this section we adapt the multiscale transform (12) to manifold-valued sequences. 
To this purpose, it is necessary to adapt the operators \(\mathcal{S}_{\mathbf{\alpha}}\) and \(\mathcal{D}_{\mathbf{\gamma}}\) of (5) and (8) to manifolds, as well as defining the analogues to the elementary operations '\(-\)' and '\(+\)' appearing in the transform and its inverse. Indeed, there are various methods for adapting the multiscale transform, but we follow the same adaptations and definitions of [23]. Let \(\mathcal{M}\) be an open complete Riemannian manifold equipped with Riemannian metric \(\langle\cdot,\cdot\rangle\). The Riemannian geodesic distance \(\rho:\mathcal{M}^{2}\to\mathbb{R}^{+}\) is defined by \[\rho(x,y)=\inf_{\Gamma}\int_{a}^{b}|\dot{\Gamma}(t)|dt, \tag{19}\] where \(\Gamma:[a,b]\to\mathcal{M}\) is a curve connecting the points \(x=\Gamma(a)\) and \(y=\Gamma(b)\), and \(|\cdot|^{2}=\langle\cdot,\cdot\rangle\). Based on the Riemannian geodesic (19), the operators \(\mathcal{S}_{\boldsymbol{\alpha}}\) and \(\mathcal{D}_{\boldsymbol{\gamma}}\) of (5) and (8) are adapted to \(\mathcal{M}\) respectively by the optimization problems \[\mathcal{T}_{\boldsymbol{\alpha}}(\boldsymbol{c})_{j}=\operatorname{argmin}_{ x\in\mathcal{M}}\sum_{k\in\mathbb{Z}}\alpha_{j-2k}\rho^{2}(x,c_{k}),\quad j\in \mathbb{Z}, \tag{20}\] and \[\mathcal{Y}_{\boldsymbol{\gamma}}(\boldsymbol{c})_{j}=\operatorname{argmin}_{ x\in\mathcal{M}}\sum_{k\in\mathbb{Z}}\gamma_{j-k}\rho^{2}(x,c_{2k}),\quad j\in \mathbb{Z}, \tag{21}\] for \(\mathcal{M}\)-valued sequences \(\boldsymbol{c}\). When the solution of (20) or (21) exists uniquely, we term the solution as the _Riemannian center of mass_[15]. It is also termed Karcher mean for matrices and Frechet mean in more general metric spaces, see [20]. The global well-definedness of (20) and (21) when \(\boldsymbol{\alpha}\) and \(\boldsymbol{\gamma}\) have non-negative entries is studied in [21]. Moreover, in the framework where \(\mathcal{M}\) has a non-positive sectional curvature, if the mask \(\boldsymbol{\alpha}\) is shift invariant (7), then a globally unique solution to problem (20) can be found, see e.g., [19, 16, 28]. Similar argument applies to problem (21) since the elements of \(\boldsymbol{\gamma}\) must sum to \(1\) due to (10). Recent studies of manifolds with positive sectional curvature show necessary conditions for uniqueness on the spread of points with respect to the injectivity radius of \(\mathcal{M}\)[5, 17]. We focus on \(\mathcal{M}\)-valued sequences \(\boldsymbol{c}\) that are admissible in the sense that, both (20) and (21) are uniquely solved for any shift invariant mask \(\boldsymbol{\alpha}\) and sequence \(\boldsymbol{\gamma}\). We say that the nonlinear operator \(\mathcal{T}_{\boldsymbol{\alpha}}\) is non-reversible if its linear counterpart \(\mathcal{S}_{\boldsymbol{\alpha}}\) is non-reversible. Similar to pseudo-reversing refinements in the linear setting, as in Definition 3.1, we define the pseudo-reverse of the manifold-valued refinement \(\mathcal{T}_{\boldsymbol{\alpha}}\). Namely, we say that the decimation operator \(\mathcal{Y}_{\boldsymbol{\gamma}}\) is the pseudo-reverse of \(\mathcal{T}_{\boldsymbol{\alpha}}\) if for some \(\xi>0\), we have \(\boldsymbol{\gamma}\) is the pseudo-reverse of \(\boldsymbol{\alpha}\). In short, Figure 3 illustrates the calculations behind pseudo-reversing refinement operators in the manifold setting, where the linear operators are replaced with the nonlinear ones. 
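In computations, the minimization problems (20) and (21) are solved only approximately. A common generic approach, sketched below purely for illustration (the numerical experiments in Section 5 use the geodesic inductive averaging of [8] instead), is the fixed-point iteration \(x\leftarrow\exp_x\big(\sum_k w_k\log_x(c_k)\big)\), written here with the exponential and logarithm maps, recalled just below, supplied by the caller. Convergence is not guaranteed when some of the weights are negative, as may happen for the decimation coefficients.

```python
import numpy as np

def weighted_karcher_mean(points, weights, exp_map, log_map, n_iter=100, tol=1e-12):
    """Approximate the weighted Riemannian center of mass used in (20)/(21).
    `points` are manifold elements, `weights` real numbers (summing to 1),
    `exp_map(p, v)` and `log_map(p, q)` are the manifold's exp/log maps."""
    x = points[int(np.argmax(np.abs(weights)))]      # start at the most weighted point
    for _ in range(n_iter):
        v = sum(w * log_map(x, c) for w, c in zip(weights, points))
        x_next = exp_map(x, v)
        if np.linalg.norm(v) < tol:                  # stationary point of the objective
            return x_next
        x = x_next
    return x
```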
Given a Riemannian manifold \(\mathcal{M}\), recall that the exponential mapping \(\exp_{p}\) maps a vector \(v\) in the tangent space \(T_{p}\mathcal{M}\) to the end point of a geodesic of length \(\|v\|\), which emanates from \(p\in\mathcal{M}\) with initial tangent vector \(v\). Inversely, \(\log_{p}\) is the inverse map of \(\exp_{p}\) that takes an \(\mathcal{M}\)-valued element \(q\) and returns a vector in the tangent space \(T_{p}\mathcal{M}\). Following similar notations used in [13] and [23], we denote both maps by \[\log_{p}(q)=q\ominus p\quad\text{and}\quad\exp_{p}(v)=p\oplus v. \tag{22}\] We have thus defined the analogues \(\ominus\) and \(\oplus\) of the '\(-\)' and '\(+\)' operations appearing in (12), respectively. For any point \(p\in\mathcal{M}\) we use the following notation \(\ominus:\mathcal{M}^{2}\to T_{p}\mathcal{M}\) and \(\oplus:\mathcal{M}\times T_{p}\mathcal{M}\to\mathcal{M}\). Then, the compatibility condition is \[(p\oplus v)\ominus p=v,\] for all \(v\in T_{p}\mathcal{M}\) within the injectivity radius of \(\mathcal{M}\). With the operators (20), (21) and (22) in hand, we are able to define the analogue of Definition 3.2. **Definition 4.1**.: _Let \(\mathcal{M}\) be a Riemannian manifold, and let \(\mathbf{c}^{(J)}\) be an admissible \(\mathcal{M}\)-valued sequence of scale \(J\in\mathbb{N}\) parameterized over the dyadic grid \(2^{-J}\mathbb{Z}\), and let \(\mathcal{T}_{\mathbf{\alpha}}\) be a refinement rule (20). The multiscale transform is defined by_ \[\mathbf{c}^{(\ell-1)}=\mathcal{Y}_{\mathbf{\gamma}}\mathbf{c}^{(\ell)},\quad\mathbf{d}^{(\ell)} =\mathbf{c}^{(\ell)}\ominus\mathcal{T}_{\mathbf{\alpha}}\mathbf{c}^{(\ell-1)},\quad\ell=1, \ldots,J, \tag{23}\] _where \(\mathcal{Y}_{\mathbf{\gamma}}\) is the pseudo-reverse of \(\mathcal{T}_{\mathbf{\alpha}}\) for some \(\xi>0\). The inverse transform of (23) is defined by iterating_ \[\mathbf{c}^{(\ell)}=\mathcal{T}_{\mathbf{\alpha}}\mathbf{c}^{(\ell-1)}\oplus\mathbf{d}^{(\ell)} \tag{24}\] _for \(\ell=1\ldots,J\)._ A first difference between the manifold and linear versions of the transform lies in the detail coefficients. In particular, for the manifold-valued transform (23), the sequences \(\mathbf{c}^{(\ell)}\), \(\ell=1,2,\ldots,J\) are \(\mathcal{M}\)-valued, while the detail coefficients \(\mathbf{d}^{(\ell)}\) are elements in the tangent bundle \(T\mathcal{M}=\bigcup_{p\in\mathcal{M}}\{p\}\times T_{p}\mathcal{M}\) associated with \(\mathcal{M}\). To investigate the properties of the multiscale transform (23) we use the approximated refinement operator \(\mathcal{T}_{\widetilde{\mathbf{\alpha}}}\). Particularly, the pair \((\mathcal{T}_{\widetilde{\mathbf{\alpha}}},\mathcal{Y}_{\mathbf{\gamma}})\) produces detail coefficients that vanish in the tangent bundle \(T\mathcal{M}\), at the even indices, when enrolled into (23). Namely, when \(\mathcal{T}_{\mathbf{\alpha}}\) is replaced by \(\mathcal{T}_{\widetilde{\mathbf{\alpha}}}\). 
To this purpose, we present the following \(\mu\) and \(\Delta_{\mathcal{M}}\) notations \[\mu(\mathbf{c},\mathbf{m})=\sup_{j\in\mathbb{Z}}\rho(c_{j},m_{j})\quad\text{and}\quad \Delta_{\mathcal{M}}\mathbf{c}=\sup_{j\in\mathbb{Z}}\rho(c_{j+1},c_{j}).\] The weak convergence result of [30], together with Proposition 3.1, guarantee that for a dense enough sequence \(\mathbf{c}\), i.e., small value of \(\Delta_{\mathcal{M}}\mathbf{c}\), we have that \[\mu(\mathcal{T}_{\mathbf{\alpha}}\mathbf{c},\mathcal{T}_{\widetilde{\mathbf{\alpha}}}\bm {c})\leq R\Delta_{\mathcal{M}}\mathbf{c}, \tag{25}\] for some constant \(R>0\) depending on \(\xi\) of (1). This estimation is required to show the _stability_ of the inverse multiscale transform (24) as we will see in Theorem 4.2. The following definition is then necessary. We say that the refinement operator \(\mathcal{T}_{\mathbf{\alpha}}\) of (20) is _stable_ if there exists a constant \(K>0\) such that \[\mu(\mathcal{T}_{\mathbf{\alpha}}\mathbf{c},\mathcal{T}_{\mathbf{\alpha}}\mathbf{m})\leq K\mu (\mathbf{c},\mathbf{m}), \tag{26}\] for all admissible sequences \(\mathbf{c}\) and \(\mathbf{m}\). The stability condition has been studied in [12]. We are now ready to present the analogue of Lemma 3.3. **Lemma 4.1**.: _Let \(\mathbf{c}^{(J)}\) be admissible \(\mathcal{M}\)-valued sequence, and \(\mathcal{T}_{\mathbf{\alpha}}\) a stable refinement operator (26). Denote by \(\{\mathbf{c}^{(0)};\mathbf{d}^{(1)},\ldots,\mathbf{d}^{(J)}\}\) and \(\{\mathbf{c}^{(0)};\mathbf{q}^{(1)},\ldots,\mathbf{q}^{(J)}\}\) the multiscale transforms (23) of \(\mathbf{c}^{(J)}\) using the pairs \((\mathcal{T}_{\mathbf{\alpha}},\mathcal{Y}_{\mathbf{\gamma}})\) and \((\mathcal{T}_{\widetilde{\mathbf{\alpha}}},\mathcal{Y}_{\mathbf{\gamma}})\), respectively. Then,_ \[\|\mathbf{d}^{(\ell)}-\mathbf{q}^{(\ell)}\|_{\infty}\leq Q\Delta_{\mathcal{M}}\mathbf{c}^ {(\ell)} \tag{27}\] _for some constant \(Q>0\)._ Proof.: Note that since both pyramid representations use \(\mathcal{Y}_{\mathbf{\gamma}}\) as their decimation operator, then both \(d_{j}^{(\ell)}\) and \(q_{j}^{(\ell)}\) are emanated from the same point \(c_{j}^{(\ell)}\in\mathcal{M}\), and thus \(d_{j}^{(\ell)}-q_{j}^{(\ell)}\in T_{c_{j}^{(\ell)}}\mathcal{M}\). Theorem 5.7 of [23] guarantees the existence of a constant \(Q/2\) such that \(\|\mathbf{d}^{(\ell)}\|_{\infty}\) and \(\|\mathbf{q}^{(\ell)}\|_{\infty}\) are bounded by \(Q/2\cdot\Delta_{\mathcal{M}}\mathbf{c}^{(\ell)}\). Hence, a simple triangle inequality gives the required. Lemma (4.1) shows that the magnitude of the even coefficients \(d^{(\ell)}_{2j}\), \(j\in\mathbb{Z}\) of (23) depend on the scale \(\ell\). In particular, the coefficients are closer to \(0\) when the scale \(\ell\) is high, and therefore can be omitted when synthesizing \(\boldsymbol{c}^{(J)}\) via (24). We next analyze the synthesis error. It turns out that an analogue of Theorem 3.5 can be obtained intrinsically, when the curvature of the manifold is bounded. Next, we present such a result assuming \(\mathcal{M}\) is complete, open manifold with non-negative sectional curvature. For that, we recall two classical theorems: the first and second Rauch comparison theorems. For more details see [14, Chapter 3] and references therein. Let \(p_{j}\in\mathcal{M}\), \(j=1,2\) be two points and \(v_{j}\in T_{p_{j}}\mathcal{M}\) their vectors in the tangent spaces such that \(\|v_{1}\|=\|v_{2}\|\) and the value is smaller than the injectivity radius of \(\mathcal{M}\). 
Let \(G(p_{1},p_{2})\) be the geodesic line connecting \(p_{1}\) and \(p_{2}\) and \(\mathrm{PG}_{p_{2}}(v_{1})\in T_{p_{2}}\mathcal{M}\) be the parallel transport of \(v_{1}\) along \(G(p_{1},p_{2})\) to \(T_{p_{2}}\mathcal{M}\). Then, the first Rauch theorem suggests that \[\rho\big{(}p_{2}\oplus v_{2},p_{2}\oplus\mathrm{PG}_{p_{2}}(v_{1})\big{)}\leq \|v_{2}-\mathrm{PG}_{p_{2}}(v_{1})\|. \tag{28}\] In addition, the second Rauch theorem implies that \[\rho\big{(}p_{1}\oplus v_{1},p_{2}\oplus\mathrm{PG}_{p_{2}}(v_{1})\big{)}\leq \rho(p_{1},p_{2}). \tag{29}\] We are now ready for the stability conclusion. **Theorem 4.2**.: _Let \(\boldsymbol{c}^{(J)}\) be admissible \(\mathcal{M}\)-valued sequence where \(\mathcal{M}\) is a complete, open manifold with non-negative sectional curvature. Denote by \(\{\boldsymbol{c}^{(0)};\boldsymbol{d}^{(1)},\ldots,\boldsymbol{d}^{(J)}\}\) its multiscale transform (23) based on the refinement operator \(\mathcal{T}_{\boldsymbol{\alpha}}\) and its pseudo-reverse \(\mathcal{Y}_{\boldsymbol{\gamma}}\). Assume \(\mathcal{T}_{\boldsymbol{\alpha}}\) is stable with a constant \(K\) as in (26). Then, the synthesis sequence \(\boldsymbol{\zeta}^{(J)}\) obeys_ \[\mu(\boldsymbol{c}^{(J)},\boldsymbol{\zeta}^{(J)})\leq L\Delta_{\mathcal{M}} \boldsymbol{\zeta}^{(J)}+\sum_{k=1}^{J}K^{J-k}\|\widehat{\boldsymbol{d}}^{(k )}-\widehat{\boldsymbol{q}}^{(k)}\|_{\infty}, \tag{30}\] _for some constant \(L>0\), where \(\boldsymbol{q}^{(k)}\) are the detail coefficients of (23) generated by the pair \((\mathcal{T}_{\widetilde{\boldsymbol{\alpha}}},\mathcal{Y}_{\boldsymbol{ \gamma}})\), and \(\widehat{\psi}_{j}^{(k)}=\mathrm{PG}_{(\mathcal{T}_{\boldsymbol{\alpha}} \boldsymbol{\zeta}^{(k-1)})_{j}}(\psi_{j}^{(k)})\) for any sequence \(\boldsymbol{\psi}\in T\mathcal{M}\)._ Proof.: As stated in the theorem, we denote by \(\{\boldsymbol{c}^{(0)};\boldsymbol{q}^{(1)},\ldots,\boldsymbol{q}^{(J)}\}\) the multiscale transform (23) based on the pair \((\mathcal{T}_{\widetilde{\boldsymbol{\alpha}}},\mathcal{Y}_{\boldsymbol{ \gamma}})\). Without loss of generality we assume that \(\|d^{(\ell)}_{j}\|=\|q^{(\ell)}_{j}\|\) with values smaller than the injectivity radius of \(\mathcal{M}\), for all \(\ell=1,\ldots,J\) and \(j\in\mathbb{Z}\). Indeed, we allow the details to differ only by their mutual angle and not magnitude. We may remove this obstacle by using a more technical calculation. 
By using the estimations (28) and (29) we get \[\mu(\boldsymbol{c}^{(k)},\boldsymbol{\zeta}^{(k)}) =\mu(\mathcal{T}_{\boldsymbol{\alpha}}\boldsymbol{c}^{(k-1)}\oplus \boldsymbol{d}^{(k)},\mathcal{T}_{\widetilde{\boldsymbol{\alpha}}}\boldsymbol{ \zeta}^{(k-1)}\oplus\boldsymbol{q}^{(k)})\] \[\leq\mu(\mathcal{T}_{\boldsymbol{\alpha}}\boldsymbol{c}^{(k-1)} \oplus\boldsymbol{d}^{(k)},\mathcal{T}_{\boldsymbol{\alpha}}\boldsymbol{ \zeta}^{(k-1)}\oplus\widehat{\boldsymbol{d}}^{(k)})\] \[\qquad\qquad\qquad+\mu(\mathcal{T}_{\boldsymbol{\alpha}} \boldsymbol{\zeta}^{(k-1)}\oplus\widehat{\boldsymbol{d}}^{(k)},\mathcal{T}_ {\boldsymbol{\alpha}}\boldsymbol{\zeta}^{(k-1)}\oplus\widehat{\boldsymbol{q}} ^{(k)})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mu(\mathcal{T}_{ \boldsymbol{\alpha}}\boldsymbol{\zeta}^{(k-1)}\oplus\widehat{\boldsymbol{q}} ^{(k)},\mathcal{T}_{\widetilde{\boldsymbol{\alpha}}}\boldsymbol{\zeta}^{(k-1 )}\oplus\boldsymbol{q}^{(k)})\] \[\leq K\mu(\boldsymbol{c}^{(k-1)},\boldsymbol{\zeta}^{(k-1)})+ \|\widehat{\boldsymbol{d}}^{(k)}-\widehat{\boldsymbol{q}}^{(k)}\|_{\infty}+ \mu(\mathcal{T}_{\boldsymbol{\alpha}}\boldsymbol{\zeta}^{(k-1)},\mathcal{T}_ {\widetilde{\boldsymbol{\alpha}}}\boldsymbol{\zeta}^{(k-1)})\] for \(1\leq k\leq J\). Now, using (25), there exists a constant \(R>0\) such that \[\mu(\mathcal{T}_{\boldsymbol{\alpha}}\boldsymbol{\zeta}^{(k-1)},\mathcal{T}_{ \widetilde{\boldsymbol{\alpha}}}\boldsymbol{\zeta}^{(k-1)})\leq R\Delta_{ \mathcal{M}}\boldsymbol{\zeta}^{(k-1)}.\] Moreover, Proposition 5.4 in [23] guarantees the existence of a constant \(P>1\) such that \[\Delta_{\mathcal{M}}\boldsymbol{\zeta}^{(k-1)}\leq(2P)^{J-k+1}\Delta_{\mathcal{M }}\boldsymbol{\zeta}^{(J)}.\] Overall we have \[\mu(\boldsymbol{c}^{(k)},\boldsymbol{\zeta}^{(k)})\leq K\mu(\boldsymbol{c}^{(k- 1)},\boldsymbol{\zeta}^{(k-1)})+\|\widehat{\boldsymbol{d}}^{(k)}-\widehat{ \boldsymbol{q}}^{(k)}\|_{\infty}+R(2P)^{J-k+1}\Delta_{\mathcal{M}}\boldsymbol {\zeta}^{(J)}.\] By iterating the latter inequality starting from \(k=J\) we get \[\mu(\boldsymbol{c}^{(J)},\boldsymbol{\zeta}^{(J)})\leq L\Delta_{\mathcal{M}} \boldsymbol{\zeta}^{(J)}+\sum_{k=1}^{J}K^{J-k}\|\widehat{\boldsymbol{d}}^{(k) }-\widehat{\boldsymbol{q}}^{(k)}\|_{\infty},\] where \(L=2PR\sum_{k=1}^{J}(2KP)^{J-k}\) as required. Note that the upper bound in (30) grows with \(J\), while the quantity \(\Delta_{\mathcal{M}}\boldsymbol{\zeta}^{(J)}\) is small in general because \(\mathcal{T}_{\boldsymbol{\alpha}}\) is convergent. Moreover, to guarantee a good synthesis algorithm, one can reduce the number of decompositions in the multiscale transform (23). Specifically, for a fixed integer \(1\leq m<J\), we decompose \(\boldsymbol{c}^{(J)}\) via iterations of (23) \(m\) many times to get the pyramid \(\{\boldsymbol{c}^{(J-m)};\boldsymbol{d}^{(J-m+1)},\ldots,\boldsymbol{d}^{(J)}\}\). The smaller \(m\) is, the better the synthesis becomes. We conclude this section by remarking that if the analyzed sequence \(\boldsymbol{c}^{(J)}\) is sampled from a regular differentiable curve \(\Gamma\subset\mathcal{M}\) over the arc-length parametrization grid \(2^{-J}\mathbb{Z}\), then we instantly have that \(\Delta_{\mathcal{M}}\boldsymbol{c}^{(J)}\leq\|\nabla\Gamma\|_{\infty}2^{-J}\) where \(\|\nabla\Gamma\|_{\infty}=\sup_{j}\|\nabla\Gamma(j)\|\). Under this assumption, one can obtain more precise estimations of the bounds (27) and (30), see [23] for more elaborations. 
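Before turning to applications, the following sketch assembles one analysis level of (23) and its synthesis (24) for a generic manifold. It reuses the hypothetical helpers `weighted_karcher_mean`, `exp_map` and `log_map` sketched above, handles boundaries by simply renormalizing the truncated weights, and is meant only as an illustration of the data flow, not as the paper's implementation.

```python
import numpy as np

def manifold_refine(alpha, a_off, coarse, exp_map, log_map):
    """Nonlinear refinement (20): center of mass with weights alpha_{j-2k}."""
    out = []
    for j in range(2 * len(coarse)):
        pts, wts = [], []
        for a, off in zip(alpha, a_off):
            if (j - off) % 2 == 0 and 0 <= (j - off) // 2 < len(coarse):
                pts.append(coarse[(j - off) // 2]); wts.append(a)
        wts = np.array(wts) / np.sum(wts)           # crude boundary renormalization
        out.append(weighted_karcher_mean(pts, wts, exp_map, log_map))
    return out

def manifold_decimate(gamma, g_off, fine, exp_map, log_map):
    """Nonlinear decimation (21): center of mass of the even-indexed samples."""
    even = fine[::2]
    out = []
    for j in range(len(even)):
        pts, wts = [], []
        for g, off in zip(gamma, g_off):
            if 0 <= j - off < len(even):
                pts.append(even[j - off]); wts.append(g)
        wts = np.array(wts) / np.sum(wts)
        out.append(weighted_karcher_mean(pts, wts, exp_map, log_map))
    return out

def analysis_level(fine, alpha, a_off, gamma, g_off, exp_map, log_map):
    """One level of (23): coarse approximation plus tangent-space details."""
    coarse = manifold_decimate(gamma, g_off, fine, exp_map, log_map)
    predicted = manifold_refine(alpha, a_off, coarse, exp_map, log_map)
    details = [log_map(p, c) for p, c in zip(predicted, fine)]   # c "minus" prediction
    return coarse, details

def synthesis_level(coarse, details, alpha, a_off, exp_map, log_map):
    """One level of (24): refine the coarse data and add the details back."""
    predicted = manifold_refine(alpha, a_off, coarse, exp_map, log_map)
    return [exp_map(p, d) for p, d in zip(predicted, details)]
```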
## 5 Applications and numerical examples In this section we illustrate different applications of our multiscale transforms. All results are reproducible via a package of Python code available online at [https://github.com/WaelMattar/Pseudo-reversing.git](https://github.com/WaelMattar/Pseudo-reversing.git). We start with numerical illustrations of pseudo-reversing subdivision schemes as refinement operators, in the linear setting, as Figure (3) shows. ### Pseudo-reversing subdivision schemes Let \(\mathcal{S}_{\boldsymbol{\alpha}}\) be the subdivision scheme (5) given with the mask \[\boldsymbol{\alpha}=\left[\frac{1}{4},\;\frac{1}{3},\;\frac{1}{4},\;\frac{1}{ 3},\;\frac{1}{4},\;\frac{1}{3},\;\frac{1}{4}\right]\quad\text{supported on}\quad[-3,-2,-1,0,1,2,3]. \tag{31}\] This subdivision is a member of a broader family of least-squares schemes [7], and its corresponding symbol \(\boldsymbol{\alpha}(z)\) of (10) is given by \[\boldsymbol{\alpha}(z)=\frac{1}{3z}+\frac{1}{3}+\frac{z}{3}. \tag{32}\] The polynomial \(z\boldsymbol{\alpha}(z)\) appears in Example 2.2 and it possesses two zeros on the unit circle; \(-1/2\pm i\sqrt{3}/2\). Figure 5 demonstrates the effect of pseudo-reversing the symbol \(\boldsymbol{\alpha}\) with different parameters of \(\xi\), highlighting the resulting tradeoff; when \(\xi\) values are large, we obtain better-decaying decimation coefficients but also larger deviations from the original sequence. On the Figure 5: Pseudo-reversing the subdivision scheme (31). Figure (5a) illustrates pushing the complex-valued zeros of (32) with a parameter \(\xi\). The basic limit functions \(\mathcal{S}_{\widetilde{\boldsymbol{\alpha}}}^{\infty}\boldsymbol{\delta}\) are depicted in Figure (5c). The decimation coefficients of \(\mathcal{D}_{\boldsymbol{\gamma}}\) are illustrated in Figure (5b). Figure (5d) illustrates the convolutional error of (9), \(\|\boldsymbol{\delta}-\boldsymbol{\gamma}*(\boldsymbol{\alpha}\downarrow 2)\|_{\infty}\), as well as the norm of the mask perturbation \(\|\boldsymbol{\alpha}-\widetilde{\boldsymbol{\alpha}}\|_{\infty}\). contrary, when we perturb only slightly with small \(\xi\) values, the decimation, comprised of the reversed sequence, grows significantly, making its practical use less feasible. To shed more light on pseudo-reversing, Table 1 shows the reversibility condition \(\kappa\) in (2) of \(\widetilde{\boldsymbol{\alpha}}\) for different values of \(\xi\), and for the same basic refinement. This table clearly shows the inverse correlation between \(\xi\) and \(\kappa\) as theory suggests in Corollary 2.6. As mentioned in Remark 3.2, the notion of pseudo-reversing can be relaxed and applied to refinements with bad reversibility conditions. That is, roughly speaking, schemes with zeros close to the unit circle. In other words, pseudo-reversing allows us to enforce a better, more practical reversibility. Technically, we can do this by pushing the zeros of \(\boldsymbol{\alpha}(z)\) which have moduli _greater_ than \(1\), with the factor \(\xi\) as similar to (1). Figure 6 illustrates the zeros of \(\boldsymbol{\alpha}(z)\) in (10) and the coefficients of the solution \(\boldsymbol{\gamma}\), where \(\boldsymbol{\alpha}\) is the mask of a high order B-spline subdivision scheme. In addition, Table 2 shows the reversibility condition \(\kappa\) of the reversible schemes, while Table 3 illustrates how pseudo-reversing imposes a better reversibility condition on the B-spline subdivision scheme of order \(6\). 
\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\xi\) & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 & 1.1 & 1.2 \\ \hline \(\|\boldsymbol{\alpha}-\widetilde{\boldsymbol{\alpha}}\|_{\infty}\) & 0 & 0.03 & 0.06 & 0.09 & 0.11 & 0.14 & 0.16 & 0.18 & 0.20 & 0.22 & 0.23 & 0.25 & 0.26 \\ \hline \(\kappa(\widetilde{\boldsymbol{\alpha}})\) & \(\infty\) & 18.19 & 9.54 & 6.67 & 5.24 & 4.38 & 3.81 & 3.41 & 3.11 & 2.88 & 2.69 & 2.54 & 2.41 \\ \hline \end{tabular} \end{table} Table 1: The mask perturbation and the reversibility condition of the approximating mask \(\widetilde{\boldsymbol{\alpha}}\) against different parameters \(\xi\). Figure 6: Zeros and decimation coefficients of high order B-spline subdivision schemes. On the left, part of the zeros of \(\boldsymbol{\alpha}(z)\) in (10) corresponding to the B-spline schemes of different orders, relative to \(\mathbb{T}\). On the right, depiction of the corresponding decimation coefficients. Note how the minimal distance between the zeros and unit circle shrinks as the order increases (this holds for orders above \(6\) as well) and how the decay rate weakens as a result. ### Multiscaling in the linear setting Here we numerically illustrate the linear transform (12) and test the synthesis result appearing in Theorem 3.5. For this sake, we let \(\mathbf{c}^{(6)}\) be our analyzed sequence, sampled from a test function \(f(x):[0,10]\mapsto\mathbb{R}\) over the equispaced bounded grid \(x\in 2^{-6}\mathbb{Z}\cap[0,10]\). We take \(f\) to be the real part of the standard Morlet wavelet, centred at \(x=5\). We implement the pyramid transform (12) based on the subdivision scheme (31) and its pseudo-reverse \(\mathcal{D}_{\mathbf{\gamma}}\) for \(\xi=1.4\). We decompose the sequence \(\mathbf{c}^{(6)}\) by iterating (12) \(m\) times, for \(m=1,2,\ldots,6\), to get the pyramid \(\{\mathbf{c}^{(6-m)};\mathbf{d}^{(5-m)},\ldots,\mathbf{d}^{(6)}\}\). All detail coefficients on the even indices are then set to zero, yielding a sparser pyramid. Then, we reconstruct using the inverse pyramid transform. Theorem 3.5 suggests that the reconstruction error, measured by the infinity norm between the analyzed \(\mathbf{c}^{(6)}\) and the reconstructed \(\mathbf{\zeta}^{(6)}\) grows as \(m\) increases. Reason being that, the more we decompose into coarse scales, the loss of information resulting from setting the even detail coefficients to zero becomes more significant, and hence the upper bound (17) increases - allowing more space for the error \(\|\mathbf{c}^{(6)}-\mathbf{\zeta}^{(6)}\|_{\infty}\). Figure 7 demonstrates the analyzed sequence \(\mathbf{c}^{(6)}\) next to its 4 layers (\(m=4\)) of detail coefficients. Note how the property of having small detail coefficients on the even indices is more pronounced on fine scales, and is violated on coarse scales. This phenomenon is explained via Theorem 3.4. The synthesis error is \(0.1896\). Table 4 shows the synthesis error with respect to any number of layers \(m\). We finally remark here that similar results are obtained for any \(\xi>0\), but we picked the value \(\xi=1.4\) because it yielded a good reversibility condition \(\kappa\) that was suitable for the truncation size of the decimation operator \(\mathcal{D}_{\mathbf{\gamma}}\). 
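The experiment just described can be summarized, again only as an illustrative sketch built on the hypothetical `refine` and `decimate` helpers from the earlier sketches, as follows; the commented test function is a Morlet-like stand-in and the masks are assumed to be passed in by the caller.

```python
import numpy as np

def analyze(c_fine, m, alpha, a_off, gamma, g_off):
    """Decompose m times with (12); returns the coarse layer and the details
    (coarsest first). Assumes the length stays even at every level."""
    details, c = [], np.asarray(c_fine, dtype=float)
    for _ in range(m):
        coarse = decimate(gamma, g_off, c)
        details.append(c - refine(alpha, a_off, coarse))
        c = coarse
    return c, details[::-1]

def synthesize_compressed(coarse, details, alpha, a_off):
    """Zero the even-indexed detail coefficients and invert with (4)."""
    c = coarse
    for d in details:
        d = d.copy()
        d[::2] = 0.0                                  # discard half of the data
        c = refine(alpha, a_off, c) + d
    return c

# x  = np.arange(0.0, 10.0, 2.0**-6)
# c6 = np.exp(-0.5 * (x - 5)**2) * np.cos(5.0 * (x - 5))   # Morlet-like stand-in
# coarse, det = analyze(c6, m=4, alpha=A, a_off=AO, gamma=G, g_off=GO)
# err = np.max(np.abs(c6 - synthesize_compressed(coarse, det, A, AO)))
```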
\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|} \hline \(\xi\) & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 & 1.1 & 1.2 \\ \hline \(\kappa(\widetilde{\mathbf{\alpha}})\) & 8 & 6.59 & 5.69 & 5.06 & 4.60 & 4.25 & 3.97 & 3.74 & 3.55 & 3.39 & 3.26 & 3.14 & 3.04 \\ \hline \end{tabular} \end{table} Table 3: Improving the reversibility condition (2) of the B-spline subdivision scheme of order 6 with different parameters \(\xi\). Only the zeros of \(\mathbf{\alpha}(z)\) with moduli greater than 1 were displaced with the parameter \(\xi\). Note how the reversibility improves as \(\xi\) increases. ### Pyramid for SO(3) and manifold-valued contrast enhancement We begin with an illustration of the multiscale transform (23) over the manifold of rotation matrices. That is, the rotation group \(\mathrm{SO}(3)\) acting on the Euclidean space \(\mathbb{R}^{3}\). Then, we show the application of contrast enhancement using the multiscale representation. Let \[\mathrm{SO}(3)=\left\{R\in\mathbb{R}^{3\times 3}\;\middle|\;R^{T}=R^{-1},\; \det(R)=1\right\} \tag{33}\] be the special orthogonal group consisting of all rotation matrices in \(\mathbb{R}^{3}\). \(\mathrm{SO}(3)\) is endowed with a Riemannian manifold structure by considering it as a Riemannian submanifold of the embedding Euclidean space \(\mathbb{R}^{3\times 3}\), with the inner product \(\left\langle R_{1},R_{2}\right\rangle=\mathrm{trace}(R_{1}^{T}R_{2})\) for \(R_{1},R_{2}\in\mathrm{SO}(3)\). One simple way to generate smooth and random \(\mathrm{SO}(3)\)-valued sequences to test our multiscaling (23) is to sample few rotation matrices, to associate the samples with indices, and then to refine using any refinement rule promising smooth limits, see e.g., [30]. Indeed, we followed this method to synthetically generate such a sequence. Specifically, we randomly generated 4 rotation matrices, enriched the samples to 11 matrices by a simple upsampling rule, and then refined the result using the cubic B-spline analogue (20) for few iterations. The resulted sequence is then parameterized over the dyadic grid \(2^{-6}\mathbb{Z}\) corresponding to scale 6. In the refinement process, the \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \(m\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\|\mathbf{c}^{(6)}-\mathbf{\zeta}^{(6)}\|_{\infty}\) & 0.0007 & 0.0059 & 0.0371 & 0.1896 & 0.3860 & 0.8816 \\ \hline \end{tabular} \end{table} Table 4: The reconstruction error against the number of detail layers. Figure 7: Linear multiscaling with a non-reversible subdivision scheme. On the right, the analyzed function in black, and the synthesized dot-dashed function in blue. On the left, the detail coefficients in absolute value of the first 4 iterations of (12) for \(\mathcal{S}_{\mathbf{\alpha}}\) of (31) and its pseudo-reverse \(\mathcal{D}_{\mathbf{\gamma}}\) for \(\xi=1.4\). Riemannian center of masses were approximated by the method of geodesic inductive averaging presented in [8]. To visualize the generated SO(3)-valued sequence, we rotate the standard orthonormal basis of \(\mathbb{R}^{3}\) using each rotation matrix, and then depict all results on different locations depending on the parametrization as a time series. Figure 7(b) illustrates the result. We now analyze the synthetic SO(3)-valued curve appearing in Figure 7(b) via the multiscale transform (23), using the nonlinear analogue \(\mathcal{T}_{\boldsymbol{\alpha}}\) of the subdivision scheme (31), with its pseudo-reverse \(\mathcal{Y}_{\boldsymbol{\gamma}}\) for \(\xi=0.64\). 
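For SO(3), one convenient realization of the operations \(\ominus\) and \(\oplus\) of (22) uses the matrix logarithm and exponential of the relative rotation. The short sketch below (assuming SciPy, and not taken from the paper's package) can be plugged into the generic helpers sketched in Section 4.

```python
import numpy as np
from scipy.linalg import expm, logm

def so3_log(p, q):
    """q (-) p: a tangent vector at p, of the form p @ S with S skew-symmetric."""
    return p @ np.real(logm(p.T @ q))

def so3_exp(p, v):
    """p (+) v for a tangent vector v at p."""
    return p @ expm(p.T @ v)

# e.g. analysis_level(rotations, alpha, a_off, gamma, g_off, so3_exp, so3_log)
```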
The Riemannian center of masses (20) and (21) were approximated by the method of geodesic inductive averaging. Figure 7(a) exhibits the Frobenius norms of the first 4 layers of detail coefficients. Note how the maximal norm of each layer can be bounded with geometrically decreasing values with respect to the scale. This indicates the smoothness of the curve. Moreover, the norms associated with the even indices are lower than the rest, this is a direct effect of \(\boldsymbol{\gamma}\) truncation. Both phenomena were thoroughly explicated in [23]. Over and above, note how this effect is more pronounced on the high scales, this is explained in Lemma 4.1. We remark here that the detail coefficients generated by the multiscale transform (23) lie in the tangent bundle \(\bigcup_{R\in\text{SO}(3)}\left\{R\right\}\times T_{R}\text{SO}(3)\) where the tangent space \(T_{R}\text{SO}(3)\) of a rotation matrix \(R\) is the set of all matrices \(RS\in\mathbb{R}^{3\times 3}\) where \(S\) is skew-symmetric. We now illustrate the application of contrast enhancement to the sequence in Figure 7(b) using its representation in Figure 7(a). In the context of multiscaling, the idea behind contrast enhancement lies in manipulating the detail coefficients while keeping the coarse approximation unchanged. Particularly, in order to add more contrast to a sequence, "swerves" in the case of rotations, one has to emphasize its most significant detail coefficients. Since all tangent spaces are closed under scalar multiplication, this can be done by multiplying the largest detail coefficients by a factor greater than 1, while carefully monitoring the results to be within the injectivity radii of the manifold. The enhanced sequence is then synthesized from the modified pyramid. Figure 8: Multiscaling of SO(3)-valued sequence with a non-reversible subdivision scheme. On the right, visualization of a sequence of rotation matrices. On the left, the Frobenius norms of the first 4 layers of detail coefficients. Figure 9 shows the final result of enhancing the rotation sequence of Figure (b)b, side by side, where the largest \(20\%\) of the detail coefficients of each layer in Figure (a)a were scaled up by \(40\%\). Indeed, note how regions with small rotation changes are kept unchanged, while regions with high changes are more highlighted after the application. The specific percentages \(20\%\) and \(40\%\) were chosen to give good and convincing visual results. ### Data compression of rigid body motions Here we consider the special Euclidean Lie group \(\mathrm{SE}(3)\) of dimension \(6\), which is the semidirect product of the special orthogonal group \(\mathrm{SO}(3)\) in (33) with the Euclidean space \(\mathbb{R}^{3}\). Each element in this group makes a configuration of orientation and position to a rigid body. A convenient matrix representation of this group is \[\mathrm{SE}(3)=\mathrm{SO}(3)\ltimes\mathbb{R}^{3}=\left\{\begin{pmatrix}R&p \\ 0&1\end{pmatrix}\right|R\in\mathrm{SO}(3),\;p\in\mathbb{R}^{3}\right\}.\] \(\mathrm{SE}(3)\) has a differentiable manifold structure in the embedding Euclidean space \(\mathbb{R}^{4\times 4}\). In this example, we take the \(\mathrm{SO}(3)\)-valued curve appearing in Figure (b)b and wrap it around a cone in \(\mathbb{R}^{3}\), obtaining a time-series of location coordinates and orientations, i.e., a curve on \(\mathrm{SE}(3)\) parameterized over the equispaced grid \(2^{-6}\mathbb{Z}\). Figure (a)a depicts the curve and is often called a rigid body motion. 
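For the computations on SE(3) one also needs a concrete choice of \(\ominus\) and \(\oplus\). A simple possibility, used here purely as an assumption since the exact maps are not spelled out, is the group exponential and logarithm in the \(4\times 4\) matrix representation.

```python
import numpy as np
from scipy.linalg import expm, logm

def se3_log(p, q):
    """q (-) p via the group logarithm of the relative transformation."""
    return p @ np.real(logm(np.linalg.inv(p) @ q))

def se3_exp(p, v):
    """p (+) v, inverting se3_log: (p (+) v) (-) p == v."""
    return p @ expm(np.linalg.inv(p) @ v)
```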
We aim to compress this sequence via our multiscale transform. We analyze the \(\mathrm{SE}(3)\)-valued curve with the multiscale transform (23) using the nonlinear analogue \(\mathcal{T}_{\boldsymbol{\alpha}}\) (20) of the subdivision scheme (31) with its pseudo-reverse \(\mathcal{Y}_{\boldsymbol{\gamma}}\) of (21) for \(\xi=0.64\). Specifically, we decompose the sequence four times to get the pyramid \(\{\boldsymbol{c}^{(2)};\boldsymbol{d}^{(3)},\boldsymbol{d}^{(4)},\boldsymbol{ d}^{(5)},\boldsymbol{d}^{(6)}\}\). The Riemannian center of masses (20) and (21) were approximated by the method of geodesic inductive averaging [8]. Similar to Section 5.2, we demonstrate the application of data compression by representing the ground-truth sequence on different scales, and then setting half of the detail coefficients to zero to Figure 9: Contrast enhancement of \(\mathrm{SO}(3)\)-valued sequence. On the left, the original sequence of rotation matrices. On the right, the enhanced rotation sequence. The largest \(20\%\) of the detail coefficients of each layer were enlarged by \(40\%\). The black arrows indicate the regions with the most drastic twists – highlighting the effect of the application. get a sparser representation. Finally, we reconstruct using the inverse pyramid transform (24) and measure the error. The validity of the application lies in the ability to set half of the information of the multiscale representation to zero while maintaining visual resemblance to the original curve after synthesis with relatively low errors. Figure 10 depicts the original sequence next to the synthesized compressed result. Note that if one chooses to use less details or decompose into more layers, the result is an increasing in the error of compression, as explained through Theorem 4.2. In the depicted example, we measure the error between the original sequence \(\boldsymbol{c}^{(6)}\) and its estimation \(\boldsymbol{\zeta}^{(6)}\) using a pointwise relative geodesic error, defined as \[\mu^{r}{}_{j}=\rho(\boldsymbol{c}^{(6)}_{j},\boldsymbol{\zeta}^{(6)}_{j})/ \rho(\boldsymbol{c}^{(6)}_{j},e), \tag{34}\] where \(e\) is the identity of the group, represented by the \(4\times 4\) identity matrix. In this example, we obtain that the median value of \(\mu^{r}\) is \(0.029\). The figure demonstrates that this low error rate is translated as almost identical sequences visually. ## Funding NS is partially supported by the NSF-BSF award 2019752 and the DFG award 514588180. Figure 10: Compressing SE(3)-valued curve. On the left, depiction of the original ground-truth curve. On the right, the result of synthesizing, using just half of the available data. The difference between the two curves is visually noticeable mostly in the top and the bottom parts. The median of the relative error (34) is \(2.9\%\).
2305.09152
Security Enhancement of Quantum Noise Stream Cipher Based on Probabilistic Constellation Shaping
We propose a QNSC pre-coding scheme based on probabilistic shaping of the basis, to reduce the probability of ciphertext bits that are easier to intercept. Experimental results show this scheme can improve the security performance by 100% in terms of Eve's ciphertext BER.
Sheng Liu, Shuang Wei, Wei Wang, Chao Lei, Tianhe Liu, Yajie Li, Yunbo Li, Dawei Ge, Dong Wang, Yongli Zhao, Dechao Zhang, Han Li, Jie Zhang
2023-05-16T04:11:40Z
http://arxiv.org/abs/2305.09152v1
# Security Enhancement of Quantum Noise Stream Cipher Based on Probabilistic Constellation Shaping ###### Abstract _We propose a QNSC pre-coding scheme based on probabilistic shaping of the basis, to reduce the probability of ciphertext bits that are easier to be intercepted. Experiment results show this scheme can improve the security performance by 100% in terms of Eve's cipher text BER. (c)2023 The Author(s)_ ## Introduction With the development of cyber-attack methods, the security issues of optical communication networks have gained more and more attention [1]. Quantum noise stream cipher (QNSC) is a popular physical-layer security approach, and it maps the original signal symbols into a very large symbol constellation with the protection of the quantum noise of optical signals [2]. QNSC was intensively studied from the perspectives of transmission and security performance [3, 4]. In terms of security, there are generally two ways to enhance the security, strengthening the noise effect of the system and reducing the adjacent signal Euclidean distance [5]. In conventional QAM/QNSC systems, the ciphertext symbols follow a uniform distribution in the constellation diagram, i.e., symbols are distributed with the same probability. However, QNSC does not provide absolute security. As we know, according to the encryption process of QNSC, the plaintext is encrypted into the low-order ciphertext with XOR operation, and the low-order ciphertext is further appended on top of the basis, forming a high-order ciphertext symbol. After being masked by quantum noise, the bit error ratio (BER) of the lower bits (the basis) of the high-order ciphertext is relatively high, while the BER of the higher bits (the original low-order ciphertext) of the high-order cyphertext is relatively low. In conventional QAM/QNSC, the low BER for Eve means higher probability of being successfully intercepted. Therefore, the distribution of the lower bits (the basis) is of great importance for security performance. Meanwhile, the probabilistic constellation shaping (PCS) technique is emerging, and it is capable of customizing the probabilistic distribution of symbols in the constellation diagram [6, 7]. Therefore, it could be interesting to study how to improve the security of QAM/QNSC, facilitated by PCS technology, without breaking the original QNSC. In this paper, we propose a PCS-based precoding scheme to modify the distribution of the basis and plaintext, to reduce the BER of low-order ciphertext. We implemented the proposed scheme in a QAM/QNSC testbed. From the perspective of security, experimental results show that this scheme improves the system security performance by about 100% in terms of Eve's BER of low-order ciphertext. From the perspective of transmission performance, experimental results show that this scheme improves OSNR 0.5 dB and 0.6 dB compared with traditional QNSC in back-to-back (BTB) and 160 km fiber. ## The scheme of PCS-based Pre-coding In an original QNSC system, the plaintext bit is encrypted by the lowest bit of the basis with XOR operation, and the XOR result is further placed at the highest bit over the basis, forming a high-order ciphertext symbol. If we use PCS to shape the ciphertext symbols directly, the order of bits of the symbol is changed [8]. Here, we proposed a QNSC pre-coding scheme with a two-step PCS to customize the distribution of ciphertext symbols and enhance security, as illustrated in Fig. 1. 
* **Pre-coding for basis.** In the first step, the seed key is divided into two parts (i.e., key1 and key2) for the XOR operations and the basis pre-coding. Based on key2, bases are generated by the pseudorandom number generator (PRNG) and injected into the CCDM module, which changes the probability distribution of the basis from uniform to non-uniform. In this paper, the inverse model of a Gaussian distribution is adopted.
* **Pre-coding for both basis and plaintext.** Based on the shaped basis, we can further customize the distribution of the plaintext. After key1 is expanded by PRNG1, each plaintext bit is encrypted with key1 by an XOR operation to obtain the ciphertext bit. Following the PCS principle, the ciphertext bits on the transmitter side are split into two branches, with corresponding data proportions of \(\alpha\) and \(1-\alpha\). The data in the lower branch is shaped by constant composition distribution matching (CCDM), and the output from the CCDM follows the desired distribution. The output of the CCDM and the data in the upper branch are sent to the FEC encoding modules for generating parity bits. The upper-branch bits from the FEC are used as the \(\pm 1\) signs, which are multiplied with the shaped data in the lower branch as the amplitude. After the two symbols are superimposed, low-order ciphertext symbols (e.g., 16QAM) with the desired distribution are generated.

After obtaining the shaped basis and ciphertext, the low-order ciphertext bits are appended on top of the basis, forming the high-order ciphertext symbols according to the Y-00 protocol. The essence of this step is to attach a random perturbation to each low-order ciphertext symbol within its own decision region. The ciphertext symbols near the decision threshold of the low-order ciphertext symbol can easily cross the decision threshold under the influence of noise, which causes bit errors in the low-order ciphertext for an Eve who does not hold the key. The benefit of the scheme is that the ciphertext symbol distribution is specifically designed to increase the BER of Eve's low-order ciphertext, by increasing the number of symbol points near the decision threshold of the low-order ciphertext symbol, and thus improves the security of QNSC systems.

Figure 1: PCS-based plaintext and basis pre-processing scheme for QAM/QNSC.

To evaluate the performance of the proposed algorithm, we conduct an experiment with our laboratory setup, as shown in Fig. 2. The output probabilistic distribution from the PCS module is set as \(P(1)=0.68\) and \(P(3)=0.32\) (\(H=3.8\)). In the setup, the plaintext bits are encrypted by key1 and fed into the PCS module, and they are mapped to 16QAM symbols with the desired distribution. The 16QAM symbols are further extended by the basis to the high-order ciphered symbols following the Y-00 protocol. An OFDM signal is generated and fed into an arbitrary waveform generator (AWG) at 10 GSa/s. An external cavity laser (ECL) is used to generate an optical carrier at a wavelength of 1550 nm and a power of 6 dBm. After being amplified by the transmitter-side erbium-doped fiber amplifier (EDFA), the optical power injected into the standard single-mode fiber (SSMF) is 0 dBm. The fiber length is 160 km. At the receiver, the optical signal is coherently detected. The receiver decrypts the QAM symbols with the pre-shared seed key and further recovers the data by performing QAM demodulation, PCS demodulation, and XOR.

Figure 2: Experimental setup.
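To make the symbol-formation step concrete, the toy sketch below draws shaped 16QAM amplitudes with the probabilities used in the experiment and appends the resulting low-order ciphertext bits on top of a random basis. The per-dimension view, the 6-bit basis width, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
BASIS_BITS = 6  # assumed basis width per dimension; the paper does not state it

# Shaped 16QAM amplitudes per dimension, matching the experiment: P(1)=0.68, P(3)=0.32.
amplitudes = rng.choice([1, 3], size=1000, p=[0.68, 0.32])
signs = rng.integers(0, 2, size=1000)                        # sign bits (from FEC parity in the paper)
cipher_bits = (signs << 1) | (amplitudes == 3).astype(int)   # 2 low-order ciphertext bits per dimension
bases = rng.integers(0, 2 ** BASIS_BITS, size=1000)          # running-key bases from the PRNG

# Y-00-style mapping: append the low-order ciphertext bits on top of the basis,
# which places each shaped symbol somewhere inside its own decision region.
high_order_symbols = (cipher_bits << BASIS_BITS) | bases
```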
## Experimental Results

In the above-mentioned setup, with back-to-back or SSMF transmission, we measured the security and transmission performance in terms of Eve's low-order ciphertext BER and the legal party's pre-FEC BER. To demonstrate the effect of our scheme, three benchmarks are measured: traditional QNSC, QNSC based on shaped 16QAM, and QNSC based on a shaped basis. The distributions of the constellation points for our scheme and the three benchmarks are shown in Fig. 3. The I and Q axes represent the amplitude of the QNSC symbols. Fig. 4(a) compares Eve's low-order ciphertext BER under different QAM formats. Note that Eve has the best condition for intercepting information when the signal received by Eve is minimally affected by noise. Hence, the position right after the EDFA in the transmitter is the optimal eavesdropping point for Eve, as shown in Fig. 2. By adjusting the symbol rate, we keep the net bit rate of the four schemes the same. It is clearly concluded that our scheme yields the highest low-order ciphertext BER for Eve, more than 11%, while the traditional QNSC and the QNSC based on shaped 16QAM schemes reach only about 6%. Our scheme therefore improves security by 100%. The reason is that, in our scheme, more symbols cross the decision threshold of 16QAM under the influence of noise. This indicates that the system is more resistant to Eve's interception. In the practical splitting attack scenario, Eve will detect a much higher low-order ciphertext BER. In Fig. 4(b) and (c), we record the pre-FEC BER of the four different schemes in two scenarios: BTB and 160 km fiber. Among them, the QNSC based on shaped 16QAM scheme has the best BER performance. The QNSC based on a shaped basis scheme has the highest BER because the minimum Euclidean distance decreases under the same power. In fact, our scheme improves OSNR by 0.5 dB and 0.6 dB compared with traditional QNSC in BTB and 160 km fiber under 15% overhead SD-FEC.

## Conclusions

We propose a pre-coding QNSC scheme based on probabilistic constellation shaping of both the plaintext and the basis, to make the distribution of ciphertext symbols more resistant to eavesdroppers. Experimental results verified that this scheme can improve the security in terms of Eve's low-order ciphertext BER and improve the transmission performance in terms of OSNR by 0.5 dB and 0.6 dB over BTB and 160 km fiber, respectively.

Fig. 3: The distribution of constellation points of (a) traditional QNSC, (b) QNSC based on shaped 16QAM, (c) QNSC based on shaped basis, (d) QNSC based on shaped 16QAM and basis.

Fig. 4: Experimental results: (a) low-order ciphertext BER for Eve; (b) the pre-FEC BER for BTB and (c) 160 km fiber.

## Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (NSFC, under grants: 62101063, 61831003, 61901053), and the BUPT-CMCC Joint Innovation Center. Sheng Liu, Shuang Wei and Wei Wang contributed equally to this work.
2304.06148
Detection of Fake Generated Scientific Abstracts
The widespread adoption of Large Language Models and the publicly available ChatGPT has marked a significant turning point in the integration of Artificial Intelligence into people's everyday lives. The academic community has taken notice of these technological advancements and has expressed concerns regarding the difficulty of discriminating between what is real and what is artificially generated. Thus, researchers have been working on developing effective systems to identify machine-generated text. In this study, we utilize the GPT-3 model to generate scientific paper abstracts through Artificial Intelligence and explore various text representation methods when combined with Machine Learning models, with the aim of identifying machine-written text. We analyze the models' performance and address several research questions that arise during the analysis of the results. By conducting this research, we shed light on the capabilities and limitations of Artificial Intelligence-generated text.
Panagiotis C. Theocharopoulos, Panagiotis Anagnostou, Anastasia Tsoukala, Spiros V. Georgakopoulos, Sotiris K. Tasoulis, Vassilis P. Plagianakos
2023-04-12T20:20:22Z
http://arxiv.org/abs/2304.06148v1
# Detection of Fake Generated Scientific Abstracts # Detection of Fake Generated Scientific Abstracts Panagiotis C. Theocharopoulos1, Panagiotis Anagnostou1, Anastasia Tsoukala1, Spiros V. Georgakopoulos2, Sotiris K. Tasoulis1 and Vassilis P. Plagianakos1 1Department of Computer Science and Biomedical Informatics University of Thessaly, Greece Email: {ptheochar, panagno, antsoukala, stasoulis, vpp}@uth.gr 2Department of Mathematics University of Thessaly, Greece Email: [email protected] ###### Abstract The widespread adoption of Large Language Models and publicly available ChatGPT has marked a significant turning point in the integration of Artificial Intelligence into people's everyday lives. The academic community has taken notice of these technological advancements and has expressed concerns regarding the difficulty of discriminating between what is real and what is artificially generated. Thus, researchers have been working on developing effective systems to identify machine-generated text. In this study, we utilize the GPT-3 model to generate scientific paper abstracts through Artificial Intelligence and explore various text representation methods when combined with Machine Learning models with the aim of identifying machine-written text. We analyze the models' performance and address several research questions that rise during the analysis of the results. By conducting this research, we shed light on the capabilities and limitations of Artificial Intelligence generated text. GPT-3, ChatGPT, COVID-19, Deep Learning, Large Language models ## I Introduction The development of the transformer architecture along with the attention mechanism took the lead on Natural Language Processing (NLP) tasks, due to its ability to process an entire input of sequence at once, unlike the Recurrent Neural Networks (RNNs). The wider use of the transformers architecture leads to Large Language models (LLMs). The LLMs are language models trained on a tremendous amount of data with multiple parameters. Furthermore, the development of pre-trained models allowed language models to learn generic language patterns and structures from large and diverse data before being fine-tuned for a specific task [12]. After the conversational Artificial Intelligence (AI) tool ChatGPT became publicly available in December 2022, the discipline of computer science, among other fields, was shaken to its core. Developed by OpenAI, ChatGPT is a language model representing a refined and evolved version of Generative Pre-trained Transformer (GPT) models. The tool's remarkable potential for executing a vast array of tasks with high precision has attracted widespread attention. The introduction of ChatGPT, and similar language models, has been met with excitement and concern from scientists and researchers. Despite the impressive development of language models in AI and NLP, there are valid concerns regarding their misuse [11]. These models may be used for more sophisticated phishing or social engineering schemes and for impersonating individuals or organizations. It is crucial to cautiously protect potentially affected areas by malicious intent and consider the ethical implications with further research [15]. Journalism, cybersecurity, intellectual property, customer service, and education are most prominently affected by the proliferation of generated text applications. 
The area of concern of our study is education, as the ability of language models to generate academic writing could lead to a rise in plagiarism and academic dishnesty. Recently, some examples of generated scientific and academic texts have already ended up as conference papers [3]. The first scientific paper generator was the SCIgen program, created in 2005 by three MIT graduate students. SCIgen is a computer program, created as a prank, which generates random computer science research papers [24]. Detecting synthetic text is getting extremely challenging because sophisticated text generation has been pursued for many years, culminating in the impressive capabilities of GPT-3, which can even deceive human readers [10]. Many text generation models followed either for severe or humorous causes. However, the necessity for a distinction between generated and actual text became even more substantial [3, 10]. Currently, GPT-3 has approximately 570 GB of text data as a training dataset, combining the 175 billion parameters [1]. Although ChatGPT and GPT-3 differ in terms of training methodology and fine-tuning, the latter is provided via its platform by OpenAI primarily for research purposes [1, 21]. The scope of this study is to investigate a way that distinguishes text generated by LLMs, in the particular scientific text produced by GPT-3. Additionally, the study tries to understand the results by answering logical questions, based on the false outcomes of the best-performed model. The rest of this paper is structured into different sections. Section II provides the related literature about the topic. Section III describes the methodology applied in the study. Section V, discusses our thoughts on the findings. We conclude the paper in Section VI and suggest ideas for future research directions. ## II Related Work Although the concept of AI text recognition based on Large Language Models (LLMs) is relatively new, there is already related research in the field. A tool called Giant Language Model Test Room (GLTR) [7] is implemented to detect whether a text has been generated by a machine or written by a human. The work is based on several statistical methods to detect AI-generated text from GPT-2 and BERT, including computing the model density of generated output and comparing it to human-generated text, as well as using the probability of a word and the distribution over four buckets of absolute ranks of predictions. On the same page, in order to discriminate machine-generated text, the study [27] proposes the FACtual Structure of Text (FAST) method, a graph-based model that utilizes the factual structure of a document for deep fake detection of text. The graph nodes are extracted by Named Entity Recognition (NER). Furthermore, the sentence representations were constructed via document-level aggregation for the final prediction, where the consistency and coherence of continuous sentences are sequentially modeled. For the evaluation of FAST, the authors used AI-generated text data generated from GROVER and GPT-2. The approach outperformed transformer-based models [27]. Based on the text generated by GPT-2 model, OpenAI created a detector of whether a text is artificially generated. The detector has been based on a fine-tuning version of the RoBERTa base. The model has been trained using \(510\) tokens and using \(5,000\) text samples from the WebText dataset \(5,000\) texts generated by a GPT-2 model. The model achieved \(95\%\) of accuracy [23]. 
Furthermore, in [14] is presented a coherence-based contrastive learning model (CoCo) for the detection of text generated from machines. To achieve that, the authors modeled the text coherence with entity consistency and sentence interaction. The CoCo model outperformed in terms of accuracy and F1 score similar models such as GPT-2, RoBERTa, and XLNet but also the GROVER and FAST models. Additionally, the authors of [18] proposed a tool, called DetectGPT, that detects machine-generated text from LLMs. DetectGPT is a more effective approach for detecting machine-generated text than existing zero-shot methods. The authors found that DetectGPT significantly improved the detection of fake news articles generated by \(20\)B parameter GPT-NeoX, achieving an AUROC of \(0.95\) compared to the strongest zero-shot baseline's AUROC of \(0.81\). Additionally, the authors found that DetectGPT does not require training a separate classifier or collecting a dataset of a real or generated corpus, making it a more efficient and practical approach for detecting machine-generated text [18]. The [16] has focused on the involvement of AI writing in scientific writing. The authors analyze the similarities and differences between the two types of content, including writing style, consistency, coherence, language redundancy, and factual errors. The results suggest that while AI has the potential to generate accurate scientific content, there is still a gap in terms of depth and overall quality. The study utilizes the logistic regression model as a detector along with OpenAI's RoBERTa detector. The results suggest that there are significant differences in the distribution of text between human-written and AI-generated scientific work and that AI-generated abstracts tend to be more generic than human-written ones. Additionally, the logistic regression model achieved a higher F1-score for detecting AI-generated text [16]. Finally, as the extensive use of GPT-3 and ChatGPT continues to grow, OpenAI published on January 31, 2023, an update of the aforementioned model. The revised version of the AI text classifier fine-tuned pre-trained models in order to distinguish the AI-generated text. The model used data from three origins of the human-written text, a dataset from Wikipedia, the 2019 WebText dataset, and a group of human examples gathered during the instruction of InstructGPT. Although, the model needs \(1,000\) tokens and more in order to produce reliable results. The revised edition of the model has shown better results than the previous version, which was based on GPT-2. In the validation set, the AUC score increased from \(0.95\) to \(0.97\), and in the challenge set, it increased from \(0.43\) to \(0.66\). Furthermore, the classifier accurately identifies \(26\%\) of text generated by AI as possibly AI-written (true positives) but mistakenly identifies \(9\%\) of human-written text as AI-generated (false positives) [21]. ## III Research Methodology The purpose of this study is to suggest a strategy for distinguishing between scientific literature produced by humans and AI-generated. We examine various techniques for text representation and Machine Learning models which are applied to both human-written and AI-generated scientific abstracts. We utilized the GPT-3 model to create a database of AI-generated abstracts, which is publicly available1. 
We examine well-established NLP methods for text representation as well as embedded representation methods in order to train classic ML and Deep Learning models examining their performances and trying to answer three research questions: Footnote 1: [https://github.com/panagiotisanagnostou/AI-GA](https://github.com/panagiotisanagnostou/AI-GA) 1. "_What words dominate the misclassified AI generated texts compared to the correctly classified ones?_" 2. "_What is the reason behind the misclassified texts?_" 3. "_Does the size of the title we provided the GPT-3 model affect the classification error?_" The strategy that produces the most accurate outcomes will be recommended for addressing this particular issue. A workflow diagram of the adopted strategy for creating the AI-generated dataset is presented in Fig. 1. ### _AI-Generated Dataset_ For this study, we used the publicly available COVID-19 Open Research Dataset (CORD-19) [26]. The CORD-19 is an aggregation of published papers and pre-prints from multiple sources, which has been collected in order to promote research on COVID-19 and related coronaviruses. The dataset contains tens of thousands of scholarly articles, some of them with full text. The primary objective of CORD-19 is to provide the global research community with a wealth of information, to support the fight against COVID-19. This has been achieved by creating a freely available dataset, which can be used in conjunction with advanced Natural Language Processing (NLP) and other Artificial Intelligence (AI) techniques to extract new insights. As there is a large number of academic papers regarding coronaviruses, there is a pressing need for text mining and information retrieval systems that can help medical researchers keep up. We randomly selected a subset of academic works from the CORD-19 dataset, consisting of \(14,331\) English-language papers with both titles and abstracts. In order to produce the AI-generated abstracts, we used one of the most advanced LLMs available, the GPT-3. Specifically, we used the GPT-3 model also known as the Davinci model, which is one of the most capable GPT-3 models and produces higher quality results [20]. Although GPT-3.5 models would have been preferable for this task, they were not publicly available via an API at the time of this experiment. To create the AI abstracts, we developed an appropriate prompt with the following form: _"Create an abstract for a scientific journal with a formal tone, academic language, and a background story of the topic in a unique paragraph with the title: \(\mathbf{\hat{t}}\)"_, where \(\mathbf{\hat{t}}\) is the human-written original title of the academic paper. In order to make the generated text more creative and novel, several adjustments were made to the requested parameters. These adjustments included increasing the freedom to create novelty by selecting the most likely tokens through changes to the randomness of the generated text and penalizing the frequency of repeated words or phrases to generate more diverse responses. Additionally, the model was set to avoid using words or phrases already present in the prompt or previously generated text in order to generate more unique responses. The titles provided for the prompt were the same as the titles selected from the CORD-19 dataset. The final dataset contains \(28,662\) entries, involving \(14,331\) human-written abstracts and \(14,331\) AI-generated abstracts, corresponding to the same titles. 
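For illustration, a sketch of such a generation call using the legacy `openai` completion interface is shown below; the exact model string, token limit, and penalty values are assumptions, since only the prompt template and the general decoding adjustments are described above.

```python
import openai  # legacy (pre-1.0) interface of the openai package

openai.api_key = "YOUR_API_KEY"

PROMPT = ("Create an abstract for a scientific journal with a formal tone, "
          "academic language, and a background story of the topic in a unique "
          "paragraph with the title: {title}")

def generate_abstract(title: str) -> str:
    # Higher temperature and non-zero penalties encourage more novel, less
    # repetitive text, as described above; the numeric values are assumed.
    response = openai.Completion.create(
        model="text-davinci-003",   # assumed Davinci variant; the paper only says "Davinci"
        prompt=PROMPT.format(title=title),
        max_tokens=400,
        temperature=0.9,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return response["choices"][0]["text"].strip()
```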
In order to ensure that the analysis is based on high-quality data, we had to perform a text-cleaning procedure. The cleaning procedure involved: * Removing special characters such as HTML tags or special characters e.g. next line symbol, * Removing whitespace or line breaks in the text, * Removing stop words, such as "the", "and", "of", etc. because they do not convey meaningful information, * Removing any non-alphabetic characters or numbers that may be present in the text, * Normalizing text involves converting all text to a consistent format, such as lowercase, to make it easier to process and analyze, * Removing the two most frequent words, namely "paper" and "study" from both original and generated texts due to their high frequency on the generated text. ### _Text representation_ Text representation is an essential step in NLP. The main purpose is to transform unstructured text data into a structured and machine-readable format. The purpose of text representation is to enable text algorithms to perform various NLP tasks such as text classification, sentiment analysis, and machine translation. In the following section, we describe some of the methods used as we carried out our tests. #### Iii-B1 Term Frequency-Inverse Document Frequency (TF-IDF) TF-IDF is a method employed in NLP and information retrieval to evaluate the significance of a term in a document or corpus. It is created by combining two distinct components: Term Frequency (TF) and Inverse Document Frequency (IDF). The TF component quantifies the number of occurrences of a term in a document, while the IDF component assigns less weight to commonly used words and more weight to less frequent words. When a word appears more frequently in documents, it will have a higher term frequency, while less frequent occurrences of a word will result in greater importance (IDF) for that word when it is searched in a specific document. TF-IDF is the product of multiplying TF and IDF components [22]. During our study, we removed words that exhibited less than 1% or more than 99% of their values to filter out unwanted noise in the data from sparse and dense variables. #### Iii-B2 Named Entity Recognition Named Entity Recognition (NER) is a fundamental task in the field of NLP, aimed at identifying and classifying named entities from a given text. Fig. 1: Schematic overview of the study. Dataset Generation: From the CORD-19 data has been collected the titles and the abstracts of the academic literature. The titles of the selected work have been prompted to GPT-3 model, via its API for the AI-generated abstract based on its title (Left). Data Analysis: The study involved text cleaning and data representations using various methods, as well as the models’ results evaluation (Right). The task of NER involves analyzing text data and identifying named entities based on a range of linguistic features, including part-of-speech tags, syntactic structures, and context. NER algorithms employ various machine learning techniques such as statistical models, neural networks, and rule-based methods to automatically detect and extract named entities from unstructured text data. The output of a NER system is typically a structured representation of the identified entities, which may include labels indicating their types [9, 19]. 
As we conducted our experiments, we utilized a bioinformatics-focused approach due to the main topic of the abstracts being about COVID-19, employing NER via the SciSpacy python package's "en_core_sci_lg" model to identify relevant bioinformatics terminology [19]. The extracted n-grams, up to a size of 9, were then used as input for our classification models. #### Iii-B3 Word2Vec Word2Vec is a popular method for generating word embeddings, which are vector representations of words that capture their semantic and syntactic meaning [17]. Word2Vec is a widely used natural language processing technique that is predicated on the premise that words that co-occur in similar linguistic contexts tend to have similar semantic meanings. It comprises two neural network-based models, namely the Continuous Bag of Words (CBOW) and Skipgram models, which generate word embeddings from large text corpora. These embeddings encode the semantic and syntactic relationships between words, thereby facilitating a range of natural language processing applications [8, 17]. In this experiment, we used the pre-trained model google-news-300, an unsupervised natural language processing model developed by Google. It is based on the Word2Vec technique and is designed to generate high-quality word embeddings from large corpora of news articles. The model consists of 300-dimensional vectors, each of which represents a unique word in the vocabulary. #### Iii-B4 Contextualized Representations Contextualized representations refer to the capability of language models to produce unique and dynamic representations of words and sentences based on the surrounding context. These representations not only capture the inherent meaning of individual words but also the subtleties and nuances of their usage within a specific context. The use of contextualized representations enhances the accuracy and naturalness of language output and is increasingly vital in NLP tasks such as sentiment analysis, question answering, and machine translation. Contextualized representations are achieved through deep neural network architectures that are trained on extensive text data. These models usually rely on recurrent or transformer-based architectures that can capture long-range dependencies between words and phrases in a text [5]. An example of such a model is the Bidirectional Encoder Representations from Transformers (BERT), which has achieved state-of-the-art results on a variety of natural language processing tasks [4]. BERT is a type of machine learning model that uses bidirectional representation, which allows it to comprehend the entire sequence of words in the context of the sentence. The model is trained using a Masked Language Model (MLM) objective, which involves masking out \(15\%\) of the input words during training. These masked tokens are replaced with either a "MASK" token, a random word, or the same word, with frequencies of \(80\%\), \(10\%\), and \(10\%\), respectively [2]. BERT comes in two models: the base model with 12 encoders and the large model with 64 encoders. Additionally, the BERT model has the ability to perform Next Sentence Prediction, where it is pre-trained on pairs of text to learn the relationships between sentences and determine if a given sentence follows the previous sentence or not. BERT's input includes Token Embeddings, Segment Embeddings, and Positional Embeddings. 
The word tokenization process is performed by the BERT tokenizer, which uses the concept of word-piece tokenizer to break down certain words into sub-words or word pieces if the word can be represented by multiple tokens [2, 4, 25]. During our research experiments, we used the BERT base pre-trained model. On top of BERT's architecture, we added a fully connected layer, where each neuron is connected to every neuron in the previous layer, and each connection has an associated weight parameter that is learned during training. In addition, a regularization technique randomly drops out the \(50\%\) of the neurons in the layer during each training iteration, to prevent the model from overfitting over the training data. In our experiments, we adopted an alternative methodology that involved the use of a pre-trained NLP model designed for general usage. The objective behind this approach was to map our data onto the embedding space established by the pre-existing model. Specifically, we employed the "English Wikipedia Dump of November 2021" model to achieve this goal [13]. The Wikipedia dataset is comprised of filtered articles from all languages. The construction of these datasets entails the extraction of data from the Wikipedia dump, with each language having a distinct file. Each record within the dataset corresponds to a complete Wikipedia article, which has been processed to exclude superfluous sections, such as markdown, references, and other unwanted content [6]. By adopting this technique, we succeeded in creating a data matrix of \(300\) variables. ## IV Experimental Results This section presents the results of how each method of word representation technique, combined with the machine learning methods, performed and distinguish the original abstracts from the AI-generated ones. To evaluate the effectiveness of the methods, we conducted a comparison using a range of text representation techniques as mentioned in Section III-B with various Machine Learning (ML) methods such as Logistic Regression (LR), Multinomial Naive Bayes (MNB), Support Vector Machine (SVM), Long Short-Term Memory networks (LSTM), and BERT. Our analysis involved training and testing these algorithms on a large corpus of original and AI-generated abstracts. The training of the methods has involved the \(80\%\) of the corpus, containing \(22,930\) entries, and the evaluation of the remaining \(20\%\) of \(5,732\) entries. Furthermore, the training and the evaluation entries contain both the real and the generated abstract from each of the paper titles we used in the abstract generation process. For the evaluation of the performance of each algorithm, we used the metrics such as accuracy, precision, recall, and F1 score. The combination of text representations with the ML models can be summarized as follows: * LR with TF-IDF * MNB with TF-IDF * SVM with TF-IDF * LR with NER * MNB with NER * SVM with NER * LR with EWD * SVM with EWD * BERT with keeping the initial weights fixed * BERT with fine-tuning the model by updating the initial weights (BERT Fine-Tuning) * LSTM with BERT * LSTM with Word2Vec Table I presents the average performance of each method over 100 independent iterations. To mitigate the risk of overfitting, we employed an early stopping strategy by continuously monitoring the train and validation loss, particularly for the LSTM and BERT models. Specifically, once the validation loss reached its minimum value and began to increase, we terminated the training process and saved the current model. 
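As a concrete rendering of the simplest combination in this list (LR with TF-IDF), a minimal scikit-learn sketch is given below; the 1%/99% document-frequency cut-offs and the 80/20 split follow the description above, while the remaining settings are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# texts: list of cleaned abstracts; labels: 1 = AI-generated, 0 = human-written
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)

# Drop terms appearing in fewer than 1% or more than 99% of the documents.
vectorizer = TfidfVectorizer(min_df=0.01, max_df=0.99)
clf = LogisticRegression(max_iter=1000)

clf.fit(vectorizer.fit_transform(X_train), y_train)
preds = clf.predict(vectorizer.transform(X_test))
print(classification_report(y_test, preds, digits=3))
```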
Our analysis reveals that the LSTM model combined with Word2Vec representation achieved the highest accuracy rate of \(98.7\%\). Although the BERT model performed equally well in terms of accuracy and better in F1 score, a composite measure of precision and recall, it exhibited lower precision and recall scores. The LSTM model incorporating Word2Vec achieved an AUC score of 0.987, which is higher than the AUC scores of the original and updated OpenAI models. The OpenAI models were developed to distinguish AI-generated text from human-written text, and their AUC scores were 0.95 and 0.97, respectively. Our findings suggest that Word2Vec embeddings outperformed BERT embeddings in this particular task, likely due to the specialized language domain of scientific abstracts. Specifically, Word2Vec was better able to capture word context within sentences, while BERT focused on predicting masked tokens or generating subsequent sentences in the document. Additionally, while Word2Vec generated fixed-size embeddings for each word, BERT's embeddings were contextualized and varied based on the word's usage in the document.

## V Discussion

The experimental results show that human-created and AI-generated scientific abstracts can be detected with high accuracy. The best model (LSTM with the Word2Vec representation) misclassified 103 samples out of the total test samples, of which 43 were human-created abstracts while the rest were AI-generated. It is obvious that the portion of misclassified texts is significantly low. Having set up the research questions in Section III, we continue with an explanatory analysis of the misclassified abstracts, to understand the particularities of the AI-generated abstracts that led the model to classify them as human text. The crucial element of abstract construction is the title of the scientific paper, so we investigate the attributes of the titles. The first question we raised is, "Does the size of the title we provided the GPT-3 model affect the classification error?" The answer to this question is a straightforward no, if we consider that the average size of the titles that constitute our dataset is 13.64 words, while the misclassified generated texts have an average title size of 13.68 words. At this point, there is no need to pose the same question for the human-created texts. The next question that needs answering is, "What words dominate the misclassified AI-generated texts compared to the correctly classified ones?" Before we answer this question, we need to understand the fundamental difference between human-created and AI-generated texts. To that end, Figure 2 presents the most frequently appearing words in both the human-created and the AI-generated texts.

Fig. 2: Most frequent word appearances in both the human-created texts (left) and the AI-generated texts (right).

It becomes immediately apparent from this figure that there is a difference between those two sets. The human-created texts contain not-so-common words (such as _influenza_ or _treatment_) with high frequency. On the contrary, AI-generated texts contain more widely used words. With this knowledge, we can now turn to Figure 4 and Figure 3. The combination of those figures shows us that both the titles and the abstracts of the misclassified AI-generated texts contain, with high frequency, not-so-common words, and, in the case of the titles, they contain significantly more uncommon words than the rest of the titles in the dataset.
From this fact alone we can conclude that the more information, in terms of content, contained in the titles given to the GPT-3 model, the better the text generation it produces. On the other hand, the human-created texts that were misclassified had a poorer vocabulary than the main bulk of the human-created texts, so much so that the LSTM with Word2Vec methodology misclassified them as AI-generated text. The word analysis can be seen in Figure 4.

## VI Conclusion

In this work, we presented a method for distinguishing between text generated by Artificial Intelligence and text created by humans. To that end, we presented several NLP classification methodologies, from simple ones, such as LR with TF-IDF, to more complicated ones, such as LSTM with BERT. From the results of the classification task, we can conclude that, at this point in the development of the GPT-3 model, the problem can seemingly be tackled efficiently. Moving forward, we intend to produce a higher-volume dataset using the updated state-of-the-art Large Language Models, with the aim of further evaluating and improving the efficacy of our proposed method. The larger dataset will enable us to investigate the generalizability and scalability of our approach across different domains, languages, and text genres. Moreover, it will allow us to explore the potential of our method for real-world applications. By advancing our understanding of the capabilities and limitations of AI-generated text, we can pave the way for more responsible and ethical use of this technology in the future.
2306.10153
Semi-supervised Relation Extraction via Data Augmentation and Consistency-training
Due to the semantic complexity of the Relation extraction (RE) task, obtaining high-quality human labelled data is an expensive and noisy process. To improve the sample efficiency of the models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to learning from limited labelled data points. Recently, strong data augmentation combined with consistency-based semi-supervised learning methods have advanced the state of the art in several SSL tasks. However, adapting these methods to the RE task has been challenging due to the difficulty of data augmentation for RE. In this work, we leverage the recent advances in controlled text generation to perform high quality data augmentation for the RE task. We further introduce small but significant changes to model architecture that allows for generation of more training data by interpolating different data points in their latent space. These data augmentations along with consistency training result in very competitive results for semi-supervised relation extraction on four benchmark datasets.
Komal K. Teru
2023-06-16T19:45:42Z
http://arxiv.org/abs/2306.10153v1
# Semi-supervised Relation Extraction via ###### Abstract Due to the semantic complexity of the Relation extraction (RE) task, obtaining high-quality human labelled data is an expensive and noisy process. To improve the sample efficiency of the models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to learning from limited labelled data points. Recently, strong data augmentation combined with consistency-based semi-supervised learning methods have advanced the state of the art in several SSL tasks. However, adapting these methods to the RE task has been challenging due to the difficulty of data augmentation for RE. In this work, we leverage the recent advances in controlled text generation to perform high quality data augmentation for the RE task. We further introduce small but significant changes to model architecture that allows for generation of more training data by interpolating different data points in their latent space. These data augmentations along with consistency training result in very competitive results for semi-supervised relation extraction on four benchmark datasets. ## 1 Introduction Relation extraction is one of the essential components in constructing structured knowledge bases (Luan et al., 2018), performing interpretable question answering (Sun et al., 2021), improving web search, and many other information extraction pipelines. It aims to discover the semantic relation between a given head entity and tail entity based on the context in the input sentence. For example, given a sentence _"The battle led to panic on the frontier, and settlers in the surrounding counties fled."_, the goal is to extract the Cause-Effect relation between the head entity '**battle**' and the tail entity '**panic**'. The RE task requires a high level of language understanding and involves a significant level of semantic complexity (Bach and Badaskar, 2007). Due to this semantic complexity it often requires extensive and highly skilled human involvement to obtain good quality labelled data, making data collection an expensive and noisy process. Unsurprisingly, because of the same semantic complexity of the task, models typically require large amounts of labelled data to give production-ready performance. A common strategy to improve the sample efficiency of machine learning models is semi-supervised learning methods which leverage easily accessible unlabelled data to improve the overall performance. While there are several paradigms of semi-supervised learning methods, consistency training based methods have advanced the state of the art in several SSL tasks (Ghosh and Thiery, 2021). These methods can typically reach performances that are comparable to their fully supervised counterparts while using only a fraction of labelled data points. Recently, strong data augmentation combined with consistency training algorithms have shown great success, even surpassing fully supervised models, in low-data settings of various Figure 1: Different data augmentation techniques applied to a sample datapoint from SemEval dataset. Existing methods replace the head/tail entities (highlited in red), change the original meaning or do not give very fluent paraphrases. tasks (Xie et al., 2020). Adapting these methods to the task of relation extraction has been challenging due to the difficulty of data augmentation for RE task. This is because, in addition to the input sentence, each data point also consists of a head entity and a tail entity contained in the input sentence. 
Typical data augmentation techniques used in NLP such as back-translation, synonym-replacement, language-model based augmentation, etc. (Feng et al., 2021) can not be easily applied to such'structured' input as they do not guarantee the integrity of either a) the entities in the input sentence or, b) the meaning of the input sentence itself. Figure 1 shows that using synonym-replacement and vanilla back-translation (BT) methods (Sugiyama and Yoshinaga, 2019) the entities themselves could be paraphrased or replaced. Matching the new and the old entities is a whole problem in itself. In the Language Model-based augmentation method (Anaby-Tavor et al., 2020), the semantic meaning of the input sentence changes altogether, which makes it difficult to employ consistency training. **Present work**. In this work, we leverage the recent advances in controlled text generation to perform high quality data augmentation for the relation extraction task that not only keeps the meaning and the head/tail entities intact but also produces fluent and diverse data points. In particular, we modify back-translation to leverage lexically constrained decoding strategies (Post and Vilar, 2018; Hu et al., 2019) in order to obtain paraphrased sentences while retaining the head and the tail entities. We further propose novel modifications to the widely popular relation extraction model architecture, that allows for generation of more samples by interpolating different data points in their latent space, a trick that has been very successful in other domains and tasks (Berthelot et al., 2019; Chen et al., 2020, 2020). Additionally, we leverage the entity types of the head and the tail entities, when available, in a way that effectively exploits the knowledge embedded in pre-trained language models. These data augmentations, when applied to unlabelled data, let us employ consistency training techniques to achieve very competitive results for semi-supervised relation extraction on four benchmark datasets. To the best of our knowledge, this is the first study to apply and show the merit of data augmentation and consistency training for semi-supervised relation extraction task. ## 2 Related work **Semi-supervised learning for NLP** Semi-supervised learning algorithms can be categorized into two broad classes-1) _self-ensembling_ methods and 2) _self-training_ methods. _Self-ensembling_ methods leverage the smoothness and cluster/low-density assumptions of the latent space (Chapelle and Zien, 2005). They train the models to make consistent predictions under various kinds of perturbations to either a) the data (Miyato et al., 2019; Xie et al., 2020), or b) the model parameters themselves (Tarvainen and Valpola, 2017). The former methods are broadly referred to as consistency training methods and have resulted in state-of-the-art performances for several semi-supervised NLP tasks. Sachan et al. (2019) add adversarial noise to both labelled and unlabelled data and train models to make consistent predictions on the original and the corresponding noisy data-point. Many recent methods leverage large pre-trained language models for more advanced data augmentation techniques, like back-translation (Edunov et al., 2018), and further improve performance in the low-data regime (Xie et al., 2020). Recently, Chen et al. (2020, 2020) adapted the Mixup algorithm (Zhang et al., 2018) as another form of data augmentation for textual data and show state-of-the-art performance on text classification and NER tasks. 
Due to the difficulty of data augmentation for relation extraction task (Figure 1), these methods have not been adapted for semi-supervised relation extraction (SSRE) task so far. In this work, we fill that gap and demonstrate the empirical success of consistency training for SSRE. _Self-training_ methods are the oldest heuristic methods of iteratively expanding the labelled training set by including high-confidence _pseudo-labels_ from the unlabelled data. All of the existing works on SSRE fall under this paradigm. These methods famously suffer from the confirmation bias problem where the incorrect predictions of the initially trained model affect the quality of pseudo-labels and eventually cause the label distribution to drift away from the true data distribution, resulting in a _semantic drift_. Lin et al. (2019) was one of the first works to address this by training two different models and augmenting the labelled set with the 'consensus' set, i.e., the data points which are labelled the same by both models. Several works have developed on this idea of improving the pseudo-label quality via various strategies like meta-learning (Hu et al., 2021) or reinforcement learning (Hu et al., 2021). These set of methods constitute our baselines. ### Data augmentation for NLP In this work, we concentrate on two major classes of data augmentation techniques for NLP - sentence-level data augmentation and latent space augmentations. Sentence-level data augmentation techniques include back-translation (Edunov et al., 2018), language-model based augmentations (Anaby-Tavor et al., 2020), and word-replacement strategies (Zhang et al., 2015). We adapt the back-translation techniques to the RE task. In latent space augmentations one generates more samples by interpolating between pairs of given data points in their latent space. This was originally introduced for image classification (Zhang et al., 2018; Verma et al., 2019; Yun et al., 2019) as a data augmentation and regularization method. Previous works have generalized this idea to the textual domain by proposing to interpolate in embedding space (Cheng et al., 2020) or the general latent space (Chen et al., 2020, 2020) of textual data and applied the technique to NLP tasks such as text classification, machine translation, NER task and achieved significant improvements. We show that both these styles of augmentations can be effectively applied to improve performance on SSRE task. ## 3 Background **Task formulation**. In this work, we focus on the sentence-level relation extraction task, i.e., given a _relation statement_\(\mathbf{x}:(\mathbf{s},e_{h},e_{t})\) consisting of a sentence, \(\mathbf{s}\), a head entity, \(e_{h}\), and a tail entity, \(e_{t}\) (both the entities are mentioned in the given sentence \(\mathbf{s}\)), the goal is to predict a relation \(r\in\mathcal{R}\cup\{\text{NA}\}\) between the head and the tail entity, where \(\mathcal{R}\) is a pre-defined set of relations. If the sentence does not express any relation from the set \(\mathcal{R}\) between the two entities, then the relation statement \(\mathbf{x}\) is accordingly labelled NA. This is typically done by learning a relation encoder model \(\mathcal{F}_{\theta}:\mathbf{x}\mapsto\mathbf{h}_{r}\) that maps an input relation statement, \(\mathbf{x}\), to a fixed length vector \(\mathbf{h}_{r}\) that represents the relation expressed in \(\mathbf{s}\) between \(e_{h}\) and \(e_{t}\). 
This relation representation, \(\mathbf{h}_{r}\), is then classified to a relation \(r\in\mathcal{R}\cup\{\text{NA}\}\) via an MLP classifier. **Base model architecture**. Most recent methods for RE use a Transformer-based architecture (Devlin et al., 2019; Vaswani et al., 2017) for the relation encoder model, \(\mathcal{F}_{\theta}\). To represent the head and tail entities in the input to the encoder, the widely accepted strategy is to augment the input sentence \(\mathbf{s}\) with entity marker tokens-[E1], [/E1], [E2], [/E2]-to mark the start and end of both entities. Concretely, an input sentence like _"Lebron James currently plays for LA Lakers team."_ when augmented with entity marker tokens becomes [E1] **Lebron James** [/E1] currently plays for [E2] **LA Lakers** [/E2] team. This modified text is input to the Transformer-based sequence encoder. Next, the encoder output representations1 of the tokens [E1] and [E2] are concatenated to give the fixed length relation representation, \(\mathbf{h}_{r}=[\mathbf{h}_{[E1]}\oplus\mathbf{h}_{[E2]}]\). This fixed length vector is in turn passed through an MLP classifier, \(p_{\phi}(\mathbf{h}_{r})\), to give a probability vector, \(\mathbf{y}\), over the relation set \(\mathcal{R}\cup\{\text{NA}\}\). Footnote 1: hidden state from the last layer of the Transformer model ## 4 Proposed approach In our approach we build on the base model architecture described in SS3 and introduce additional model design elements that are necessary to obtain an improved performance in semi-supervised relation extraction (SSRE) task. We first describe the two data augmentation techniques we perform, and the model architectural changes we introduce that facilitate these augmentations. Then, we describe the training procedure we follow to leverage unlabelled data and achieve state-of-the-art performance on three out of four benchmark datasets for SSRE. ### Constrained back-translation Back-translation (Edunov et al., 2018) generates diverse and fluent augmentations while retaining the global semantics of the original input. Specifically, one translates a given text into an intermediate language, say, German, and translates it back to the source language, say English. Using different intermediate languages and temperature-based sampling results in a diverse set of paraphrases. Applying this back-translation technique in a vanilla fashion is not possible for RE task because one has little control over the retention of the head and tail entities (Figure 1). Thus, when translating back to the source language from the intermediate language we perform lexically-constrained decoding (Hu et al., 2019), i.e., force the inclusion of pre-specified words and phrases-positive constraint set-in the output. In our case the original head and tail entity words/phrases make up this positive constraints set. We use German and Russian as intermediate languages and use the pre-trained WMT'19 English-German and English-Russian translation models (in both directions) and their implementations provided by Ott et al. (2019). This methodology generates diverse data augmentations for a given sentence. For example, the sentence "_The battle led to panic on the frontier, and settlers in the surrounding counties fled._" is converted to "_The battle sparked panic at the border, with residents fleeing in surrounding counties_" when back-translated via German, and to "_The battle caused panic on the border and settlers in nearby counties fled._" when done via Russian. 
This is illustrated in Figure 2. This strong data-augmentation technique for RE can be applied to both labelled and unlabelled data opening the doors to consistency training (Xie et al., 2020) as we will see in SS4.3. ### Latent-space interpolation Here, we adapt a mixup-based data augmentation technique to the RE task by making necessary modifications to the base model architecture we described in SS3. As done in previous works (Chen et al., 2020, 2020), we sample two random data points-\((\mathbf{x},\mathbf{y})\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\), where \(\mathbf{x}\) and \(\mathbf{y}\) denote the relation statement and the corresponding relation label-from the training data and separately compute the respective latent representations, \(\mathbf{h}^{m}\) and \(\mathbf{h}^{\prime m}\), upto the layer \(m\) of the relation encoder \(\mathcal{F}_{\theta}\) as follows: \[\mathbf{h}^{l} =\mathcal{F}_{\theta}^{l}(\mathbf{h}^{l-1});\quad l\in[1,m],\] \[\mathbf{h}^{\prime l} =\mathcal{F}_{\theta}^{l}(\mathbf{h}^{\prime l-1});\quad l\in[1,m],\] where \(\mathbf{h}^{l}\) is the latent representation of all tokens in the sentence \(\mathbf{x}\) at the \(l^{\text{th}}\) layer of the encoder. Next, the latent representations of each token in \(\mathbf{x}\) at the \(m^{\text{th}}\) layer are linearly interpolated: \[\tilde{\mathbf{h}}^{m}=\lambda\mathbf{h}^{m}+(1-\lambda)\mathbf{h}^{\prime m},\] where \(\lambda\) is the mixing coefficient which is sampled from a Beta distribution, i.e., \(\lambda\sim\text{Beta}(\alpha,\beta)\). Then, the interpolated latent representation is passed through the rest of the encoder layers: \[\tilde{\mathbf{h}}^{l}=\mathcal{F}_{\theta}^{l}(\tilde{\mathbf{h}}^{l-1}); \quad l\in[m+1,L].\] This final encoder output representation, \(\tilde{\mathbf{h}}^{L}\), can be interpreted as the encoder output representation of a _virtual_ input \(\tilde{\mathbf{x}}\), i.e., \(\tilde{\mathbf{h}}^{L}=\mathcal{F}_{\theta}(\tilde{\mathbf{x}})\). We denote this whole mixup operation2 as \(\tilde{\mathbf{h}}^{L}:=\text{REMix}(\mathbf{x},\mathbf{x}^{\prime})\). The label for this augmented _virtual_ sample is given by the linear interpolation of the respective labels, \(\mathbf{y}\) and \(\mathbf{y}^{\prime}\), with the same mixing coefficient \(\lambda\) i.e., \(\tilde{\mathbf{y}}:=\text{mix}(\mathbf{y},\mathbf{y}^{\prime})=\lambda \mathbf{y}+(1-\lambda)\mathbf{y}^{\prime}\). This _virtual_ data point, \((\tilde{\mathbf{x}},\tilde{\mathbf{y}})\), is the augmented data point and can be used as additional training data. Footnote 2: REMix entails the model architecture changes discussed below. **Proposed architecture change**. Now, for the RE task we need to extract a fixed-length relation representation from the encoder output representation of the entire input sequence. The traditional way to do this for RE task is by concatenating the encoder output representations of the entity marker tokens [E1] and [E2]. However, it is challenging to do this for a virtual sample, \(\tilde{\mathbf{x}}\), as the entity markers are not clearly defined in this case. We thus modify the relation representation to be the encoder output representation of the [CLS] token. However, Baldini Soares et al. (2019) have shown this choice to be sub-optimal compared to concatenation of marker tokens. This is because the marker token representations provide direct access to the contextual information of the respective entities. 
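For concreteness, the interpolation at the core of REMix can be sketched as follows; the encoder is treated as a plain list of layer callables and the Beta parameters are placeholders, so this is a schematic rendering of the operation described above rather than the exact model code.

```python
import torch

def remix_forward(encoder_layers, h_a, h_b, mix_layer, alpha=0.75, beta=0.75):
    # h_a, h_b: hidden states of shape (batch, seq_len, dim) for two sampled
    # inputs, already computed up to layer `mix_layer` (data flow assumed).
    lam = torch.distributions.Beta(alpha, beta).sample()
    h = lam * h_a + (1.0 - lam) * h_b            # token-wise linear interpolation
    for layer in encoder_layers[mix_layer:]:     # run the remaining encoder layers
        h = layer(h)
    cls_repr = h[:, 0]                           # [CLS] representation of the virtual sample
    return cls_repr, lam

def mix_labels(y_a, y_b, lam):
    # Soft label for the virtual sample, using the same mixing coefficient.
    return lam * y_a + (1.0 - lam) * y_b
```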
Although the [CLS] token, in theory, has access to the entire context of the sentence, it might be difficult to capture the nuances like the head entity type, tail entity type, and the contextual information around the two entities all in a single vector. Figure 2: Constrained back-translation process On the other hand, entity type information is easily accessible in most RE benchmarks3. So, to compensate for the sub-optimal choice of using [CLS] token representation as the relation representation, we modify how we represent the entity spans in the input token to more effectively use the easily accessible entity type information. In particular, we note that the entity type labels can trivially be mapped to tokens from any pre-trained language model's vocabulary. For example, entity types like PERSON and STATE_OR_PROVINCE can be tokenized into a word/phrase like 'person' and'state or province', respectively. In such cases when entity type information is available, instead of using special marker tokens like [E1] and [E2] we prepend the entity spans in the input sequence with the word/phrases corresponding to their respective types and enclose these 'type-words' in punctuation marks Zhou and Chen (2021). The modified input to the transformer along with the [CLS] token looks as follows: Footnote 3: From new datasets/applications viewpoint, when entities are identified in a piece of text it is safe to assume that their types would also be identified. [CLS] @* person * **Lebron James @** plays for & * organization * **LA Lakers &** team. We use different punctuation symbols to distinguish between subject and object entities. Specifically, we use '@' for subject and '&' for object entities. This representation helps leverage the knowledge already contained in the pre-trained large-language model about the type of the entity and offset some of the downside of using a simplified relation representation in the [CLS] token. As we will empirically see in SS5.4, this use of entity type information is not only effective but is necessary for the optimal functioning of our approach. Zhou and Chen (2021) recently showed the success of this method in the fully supervised setting. Here we use it in conjunction with a simplified relation representation and show its merit in semi-supervised RE setting. ### Consistency training for SSRE Let the given limited labelled set be \(\mathbf{X}_{l}=\{\mathbf{x}_{1}^{l},...,\mathbf{x}_{n}^{l}\}\), with their relation labels \(\mathbf{Y}_{l}=\{\mathbf{y}_{1}^{l},...,\mathbf{y}_{l}^{l}\}\), where \(\mathbf{y}_{i}^{l}\in\{0,1\}^{\left\lvert\mathcal{R}\cup\{\text{NA}\}\right\rvert}\) is a one-hot vector and \(\mathcal{R}\) is the set of pre-defined relations. Let \(\mathbf{X}_{u}=\{\mathbf{x}_{1}^{u},...,\mathbf{x}_{m}^{u}\}\) be a large unlabelled set. The goal is to apply both the data augmentation techniques described above and train a model with consistency loss to effectively leverage unlabelled data along with the limited labelled data. We largely adapt the semi-supervised training techniques introduced by Chen et al. (2020). For each \(\mathbf{x}_{i}^{u}\) in the unlabelled set \(\mathbf{X}_{u}\), we generate \(K\) augmentations \(\mathbf{x}_{i,k}^{a},k\in\{1,2,...,K\}\) using the constrained back translation technique with different intermediate languages4. These augmentations make up the set \(\mathbf{X}_{a}=\{\mathbf{x}_{i,k}^{a}\}\). 
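The REMix operation of §4.2 can be prototyped directly on a bert-base-cased encoder: embed the two typed-marker inputs, run them separately through the first \(m\) layers, interpolate the hidden states, finish the forward pass jointly, and read the relation representation from the [CLS] position. The manual layer loop, the omission of attention masks and padding, the truncation-based length alignment, the default layer \(m=6\), and the second example sentence are simplifying assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of REMix: mix hidden states of two relation statements at
# encoder layer m, finish the remaining layers, and use [CLS] as h_r.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
encoder = BertModel.from_pretrained("bert-base-cased")

def partial_hidden(text, m):
    """Embedding layer plus the first m encoder layers (one sentence, no padding)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    h = encoder.embeddings(ids)
    for layer in encoder.encoder.layer[:m]:
        h = layer(h)[0]
    return h

def remix(text_a, text_b, lam, m=6):
    ha, hb = partial_hidden(text_a, m), partial_hidden(text_b, m)
    n = min(ha.size(1), hb.size(1))          # crude length alignment (assumption)
    h = lam * ha[:, :n] + (1.0 - lam) * hb[:, :n]
    for layer in encoder.encoder.layer[m:]:  # rest of the encoder on the mixed states
        h = layer(h)[0]
    return h[:, 0]                           # [CLS] output = relation representation

# Typed entity markers: '@ * type *' around the subject, '& * type *' around the object.
xa = "@ * person * Lebron James @ currently plays for & * organization * LA Lakers & team."
xb = "@ * person * Serena Williams @ was born in & * city * Saginaw & ."   # illustrative
lam = torch.distributions.Beta(0.75, 0.75).sample()   # mixing coefficient from Beta(alpha, beta)
h_virtual = remix(xa, xb, lam)
# The virtual label is the same interpolation of the two labels:
# y_virtual = lam * y_a + (1 - lam) * y_b
```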
For a given unlabelled data point \(\mathbf{x}_{i}^{u}\) and its \(K\) augmentations \(\mathbf{x}_{i,k}^{a}\), the label is given by the average of the current model's predictions on all \(K+1\) data points: Footnote 4: In our specific implementation K = 2, with German and Russian as intermediate languages. \[\mathbf{y}_{i}^{u}=\frac{1}{K+1}\left(p_{\phi}(\mathcal{F}_{\theta}(\mathbf{x}_{i}^{u}))+\sum_{k=1}^{K}p_{\phi}(\mathcal{F}_{\theta}(\mathbf{x}_{i,k}^{a}))\right),\] where \(\mathbf{y}_{i}^{u}\) is a probability vector. This not only enforces the constraint that the model should make consistent predictions for different augmentations but also makes the predictions more robust by ensembling all the predictions. We merge the unlabelled set and the augmented set into \(\mathbf{X}_{\text{ua}}=\mathbf{X}_{u}\cup\mathbf{X}_{a}\), and the corresponding (pseudo-)labels are given by \(\mathbf{Y}_{\text{ua}}=\mathbf{Y}_{u}\cup\mathbf{Y}_{a}\), where \(\mathbf{Y}_{u}=\{\mathbf{y}_{i}^{u}\}\), \(\mathbf{Y}_{a}=\{\mathbf{y}_{i,k}^{a}\}\), and \(\mathbf{y}_{i,k}^{a}=\mathbf{y}_{i}^{u}\;\forall k\in\{1,2,...,K\}\), i.e., all the augmented data points share the same label as the original unlabelled data point. Given this cumulative set \(\mathbf{X}_{\text{ua}}\) and their generated labels \(\mathbf{Y}_{\text{ua}}\) as additional training data, we employ the REMix augmentation technique to generate arbitrary amounts of training data. In particular, we randomly sample two data points \(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}}\in\mathbf{X}_{\text{ua}}\), and compute the encoder output representation of a new _virtual_ data point with \(\text{REMix}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}})\) and the corresponding target label with \(\text{mix}(\mathbf{y}_{s}^{\text{ua}},\mathbf{y}_{t}^{\text{ua}})\). Additionally, while computing the final unsupervised loss in each training iteration we filter out the unlabelled data points with prediction confidence below a certain threshold \(\gamma\) (Xie et al., 2020). Finally, to encourage low-entropy predictions on unlabelled data, we sharpen the predictions with a sharpening coefficient \(T\): \[\hat{\mathbf{y}}_{i}^{\text{ua}}=\frac{(\mathbf{y}_{i}^{\text{ua}})^{\frac{1}{T}}}{||(\mathbf{y}_{i}^{\text{ua}})^{\frac{1}{T}}||_{1}}.\] Everything put together, the final unsupervised loss in each training iteration with mini-batch size \(B\) is computed as: \[\mathcal{L}_{\text{unsp}}=\frac{1}{B}\sum_{\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}}\sim\mathbf{X}_{\text{ua}}}^{B}\text{m}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}})\,\mathcal{L}_{\text{mix}}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}}),\] where \[\mathcal{L}_{\text{mix}}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}})=\text{CE}(\text{mix}(\hat{\mathbf{y}}_{s}^{\text{ua}},\hat{\mathbf{y}}_{t}^{\text{ua}})\,||\,p_{\phi}(\text{REMix}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}}))),\] \[\text{m}(\mathbf{x}_{s}^{\text{ua}},\mathbf{x}_{t}^{\text{ua}})=I(\max\mathbf{y}_{s}^{\text{ua}}>\gamma)\,I(\max\mathbf{y}_{t}^{\text{ua}}>\gamma).\] Here, \(I(\cdot)\) is an indicator function and \(\text{m}(\cdot,\cdot)\) is the confidence mask that drops pairs in which either data point has prediction confidence below the threshold \(\gamma\). GradLRE encourages the pseudo-labelled data to imitate the gradient descent direction on labelled data and bootstraps its optimization capability through trial and error (Hu et al., 2021). MetaSRE and GradLRE are two of the strongest methods in the widely adopted _self-training_ family of methods for SSRE. **Implementation details**.
We follow the established setting to use stratified sampling to divide the training set into various proportions of labelled and unlabelled sets so that the relation label distribution remains the same across all subsets. Following existing work, we sample 5%, 10%, and 30% of the training set as labelled data for the SemEval and KBP37 datasets, and 3%, 10%, and 15% of the training set as labelled data for TACRED and RE-TACRED datasets. For all datasets and experiments, unless otherwise specified, we sample 50% of the training set as the unlabelled set. For TACRED and SemEval datasets we take the performance numbers of all baseline models reported by Hu et al. (2021). For other datasets, we re-run the models with their best configuration as provided in their respective implementations, when available. To be consistent with all the baselines we initialize the text encoder of REMix with the bert-base-cased model architecture and pre-trained weights. Full details of all the hyperparameters can be found in Appendix A.2. ### Main Results Table 2 shows F1 results of all baseline models and our proposed model, REMix, on the four datasets when leveraging various amounts of labelled data and 50% unlabelled data. We report the mean and standard deviation of 5 different runs (with different seeds) of training and testing. REMix gives state-of-the-art performance on 10 out of 12 different configurations across all four datasets. This reinforces the importance of consistency regularization beyond the currently popular self-training methods for SSRE. Interestingly, the performance gains are significantly higher for TACRED and RE-TACRED datasets-we see an average improvement of as much as 17% when trained on 3% labelled data. This can be attributed to the fact that entity type information is available for these datasets and entity type markers are very effective in exploiting the knowledge embedded in the pre-trained language models. We revisit this observation in our ablation studies (SS5.4) where we concretely establish the benefits of using entity type markers. ### Analysis and discussion We first conduct experiments to empirically demonstrate the effectiveness of three components of our proposed model: i) data augmentation by latent space interpolation (Mix-DA), ii) data augmentation by constrained back-translation (BT-DA), and iii) entity type markers (ET). In Table 3, we report the mean F1 score of five different runs for different variations of our model by removing a certain combination of these components6. As can be seen from Table 3, each of these components contributes to the overall success of REMix. 
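For reference, the way these components feed into the unsupervised objective of §4.3 (pseudo-label ensembling, sharpening, confidence masking, and the mixed cross-entropy) can be sketched as follows; `predict` and `encode_remix` are placeholder callables standing in for \(p_{\phi}(\mathcal{F}_{\theta}(\cdot))\) and the classifier logits on \(\text{REMix}(\cdot,\cdot)\), and all default values are assumptions for illustration.

```python
# Hedged sketch of the unsupervised consistency loss of Section 4.3.
import torch

def sharpen(p, T=0.5):
    p = p ** (1.0 / T)
    return p / p.sum(dim=-1, keepdim=True)

def pseudo_label(x_u, augs, predict):
    # Average the current model's predictions over the original sentence and
    # its K back-translated augmentations (prediction ensembling).
    preds = [predict(x_u)] + [predict(a) for a in augs]
    return torch.stack(preds).mean(dim=0)

def unsup_loss(pairs, labels, encode_remix, lam, gamma=0.9, T=0.5):
    """pairs: list of (x_s, x_t); labels: matching (y_s, y_t) pseudo-label vectors."""
    total = torch.tensor(0.0)
    for (xs, xt), (ys, yt) in zip(pairs, labels):
        if ys.max() <= gamma or yt.max() <= gamma:
            continue                                   # confidence mask m(x_s, x_t)
        target = lam * sharpen(ys, T) + (1 - lam) * sharpen(yt, T)
        log_p = torch.log_softmax(encode_remix(xs, xt, lam), dim=-1)
        total = total - (target * log_p).sum()         # cross-entropy with a soft target
    return total / max(len(pairs), 1)                  # averaged over the mini-batch B
```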
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{TACRED} & \multicolumn{3}{c}{KBP37} \\ \cline{2-7} \%labelled Data & 3\% & 10\% & 15\% & 5\% & 10\% & 30\% \\ \hline MRefG & 43.81\(\pm\)1.44 & 55.42\(\pm\)1.40 & 58.21\(\pm\)0.71 & - & - & - \\ MetaSRE & 46.16\(\pm\)0.74 & 56.95\(\pm\)0.33 & 58.94\(\pm\)0.31 & 59.29\(\pm\)0.92 & 61.83\(\pm\)0.21 & 63.51\(\pm\)0.69 \\ GradLRE & 47.37\(\pm\)0.74 & 58.20\(\pm\)0.33 & 59.93\(\pm\)0.31 & 59.98\(\pm\)0.37 & 62.67\(\pm\)0.54 & **66.41**\(\pm\)0.28 \\ REMix (ours) & **55.80**\(\pm\)1.33 & **61.30**\(\pm\)0.70 & **63.07**\(\pm\)0.93 & **60.84**\(\pm\)0.40 & **63.82**\(\pm\)0.71 & **66.46**\(\pm\)0.69 \\ \hline \hline \end{tabular} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{RE-TACRED} & \multicolumn{3}{c}{SemEval} \\ \cline{2-7} \%labelled Data & 3\% & 10\% & 15\% & 5\% & 10\% & 30\% \\ \hline MRefG & - & - & - & 75.48\(\pm\)1.34 & 77.96\(\pm\)0.90 & 83.24\(\pm\)0.71 \\ MetaSRE & 44.42\(\pm\)3.02 & 58.71\(\pm\)1.70 & 61.71\(\pm\)3.70 & 78.33\(\pm\)0.92 & 80.09\(\pm\)0.78 & 84.81\(\pm\)0.44 \\ GradLRE & 61.22\(\pm\)0.58 & 74.03\(\pm\)1.74 & **79.46**\(\pm\)0.82 & **79.65**\(\pm\)0.68 & **81.69**\(\pm\)0.57 & **85.52**\(\pm\)0.34 \\ REMix (ours) & **71.33**\(\pm\)1.22 & **77.94**\(\pm\)0.59 & **79.76**\(\pm\)0.47 & 77.58\(\pm\)0.59 & 81.13\(\pm\)0.82 & **85.51**\(\pm\)0.38 \\ \hline \hline \end{tabular} \end{table} Table 2: F1 score with various amounts of labelled data and 50% unlabelled data. Mean and standard deviation of 5 different runs is reported. Best performance on each configuration is bolded and second best is underlined.

For contribution of just the Mix-DA: we compare i) row 1 v/s row 3, and ii) row 2 v/s row 4. All comparisons show positive improvement. For contribution of just the BT-DA: we compare i) row 1 v/s row 2, and ii) row 3 v/s row 4. We note that BT-DA results only in marginal improvements in most cases. Upon closer inspection we note that the constrained-decoding algorithms we implement for BT-DA are actually not perfect, especially when combined with translation models. They sometimes miss the constraints and sometimes fall into repetitive loops in an attempt to satisfy the constraint. With the ever-improving language generation capabilities, we believe the quality of data augmentation will only improve with time and result in more significant performance improvements. For contribution of both DA techniques together: we compare row 1 v/s row 4. All comparisons show significant improvements with data augmentation. The contribution of entity type markers can be noted in the TACRED and RE-TACRED datasets. We see an average drop of 5.4% in F1 score across all 8 comparisons. Although our data augmentation techniques are effective, without the entity type information our method doesn't result in state-of-the-art performance. This reinforces our architectural choice to include entity type markers when using the [CLS] token for relation representation (§4.2). Next we examine the effect of using different amounts of unlabelled data. In Figure 3, we report the average F1 score for different models trained with different amounts of unlabelled data and 10% labelled data. REMix outperforms the baselines in all settings except on the SemEval dataset, and, interestingly, the performance only marginally changes with the change in the amount of unlabelled data.
Note that we train the models until the performance on the validation set stops improving for more than 5 epochs. Hence, REMix generates, in principle, an infinite amount of unlabelled data via the mixup strategy. Coupled with the fact that the label distribution remains the same in all settings, adding more unlabelled data does not seem to add a lot of new information. This explains why the model performance is relatively insensitive to changing amounts of unlabelled data. This also implies that REMix can leverage low amounts of unlabelled data better than the baselines. Finally, in Figure 4 we show how the performance of REMix changes with a change in the mean of the Beta distribution from which \(\lambda\) is sampled on each iteration. Note that a value near 0 or 1 for \(\lambda\) means the augmented _virtual_ data point will be closer to one of the underlying data points. As we get closer to 0.5, the virtual data points get further from the original data manifold and become more 'novel'. On the TACRED and RE-TACRED datasets the performance peaks at \(\text{E}(\lambda)=0.15\) (or \(0.85\)) and drops in the mid-values. This can be interpreted as: adding data points far from the original data manifold is detrimental for these datasets. Interestingly, on KBP37 and SemEval the pattern inverts, i.e., the performance increases as \(\text{E}(\lambda)\) approaches \(0.5\), implying that more 'novel' augmentations help for these datasets.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Row & Mix-DA & BT-DA & ET & TACRED & RE-TACRED & KBP37\({}^{*}\) & SemEval\({}^{*}\) \\ \hline 1 & ✓ & ✓ & ✓ & 61.30 & 77.94 & 63.82 & 81.13 \\ & ✓ & ✓ & ✗ & 56.82 & 75.11 & & \\ \hline 2 & ✓ & ✗ & ✓ & 60.81 & 77.77 & 63.48 & 79.71 \\ & ✓ & ✗ & ✗ & 56.35 & 74.67 & & \\ \hline 3 & ✗ & ✓ & ✓ & 59.65 & 76.80 & 62.64 & 79.17 \\ & ✗ & ✓ & ✗ & 55.52 & 73.78 & & \\ \hline 4 & ✗ & ✗ & ✓ & 58.96 & 77.25 & 63.14 & 79.20 \\ & ✗ & ✗ & ✗ & 55.25 & 74.58 & & \\ \hline \hline \end{tabular} * these datasets do not have entity type information \end{table} Table 3: Ablation results on all datasets using the 10% labelled set and 50% unlabelled set.

Figure 3: F1 Performance with various amounts of unlabelled data and 10% labelled data. Figure 4: F1 Performance of REMix with 50% unlabelled and 10% labelled data with changing mixing coefficient \(\lambda\).

## 6 Conclusion In this paper, we propose a consistency-training-based semi-supervised algorithm for relation extraction and empirically show the merit of this class of methods in comparison to the current state-of-the-art _self-training_ class of methods. In future work, one could bootstrap the self-training methods with consistency training as done in some previous works on vision tasks (Pham et al., 2021). Additionally, we show how the entity type information, when available, can result in massive performance boosts in the semi-supervised scenario. This is important because in most practical use cases when entities have already been identified, the entity type information is easily available and could be effectively leveraged in the proposed fashion. ## 7 Limitations One of the key limitations of our proposed method compared to the baseline methods is the tight dependence on a strong external translation system to get good quality back-translated data augmentations.
Secondly, since we use [CLS] token embeddings instead of entity-specific embeddings for the final classification, it is more challenging to decipher entity-specific context. This is evident from the fact that our method performs the weakest on the SemEval dataset, which consists of untyped nominals (concepts) as entities and abstract relations that, we believe, need more entity-specific context to understand. Hence, our proposed method, REMix, shines when the entities in the dataset are typed or are named entities whose meaning or type is relatively context-agnostic. ## Acknowledgements The author would like to thank the reviewers for their thoughtful feedback, the entire AI/ML Research team at Vanguard for their support, and especially David Landsman for their valuable input and discussion on the initial drafts of this work. ## Disclaimer This material is provided for informational purposes only and is not intended to be investment advice or a recommendation to take any particular investment action. \(\copyright\)2022 The Vanguard Group, Inc. All rights reserved.
2307.07706
Lorentzian distance on the Lobachevsky plane
Left-invariant Lorentzian structures on the 2D solvable non-Abelian Lie group are studied. Sectional curvature, attainable sets, Lorentzian length maximizers, distance, spheres, and infinitesimal isometries are described.
Yu. L. Sachkov
2023-07-15T04:43:27Z
http://arxiv.org/abs/2307.07706v1
# Lorentzian distance on the Lobachevsky plane+ ###### Abstract Left-invariant Lorentzian structures on the 2D solvable non-Abelian Lie group are studied. Sectional curvature, attainable sets, Lorentzian length maximizers, distance, spheres, and infinitesimal isometries are described. ###### Contents * 1 Introduction * 2 Lorentzian geometry * 3 Lobachevsky plane * 4 Left-invariant Lorentzian problems on the Lobachevsky plane * 5 Curvature * 5.1 Levi-Civita connection and sectional curvature of Lorentzian manifolds * 5.2 Sectional curvature of \(\mathrm{Aff}_{+}(\mathbb{R})\) * 6 Attainable sets * 7 Existence of Lorentzian length maximizers * 7.1 Existence of length maximizers for globally hyperbolic Lorentzian structures * 7.2 Existence of length maximizers in the case \(K<0\) * 8 Geodesics * 8.1 Pontryagin maximum principle * 8.2 Parameterization of geodesics * 8.3 Geodesic completeness * 9 Lorentzian length maximizers * 9.1 Diffeomorphic properties of the exponential mapping * 9.2 Inverse of the exponential mapping and optimal synthesis * 10 10 Lorentzian distance and spheres * 10.1 The case \(K<0\) * 10.2 The case \(K>0\) * 10.3 The case \(K=0\) * 10.4 Regularity of Lorentzian distance * 11 Isometries * 11.1 Infinitesimal isometries of Lorentzian manifolds * 11.2 Killing vector fields and isometries of \(\mathrm{Aff}_{+}(\mathbb{R})\) * 11.3 Isometric embedding of \(\mathrm{Aff}_{+}(\mathbb{R})\) into \(\mathbb{R}_{1}^{2}\) in the case \(K=0\) * 12 Examples * 12.1 Problem \(P_{1}\) * 12.2 Problem \(P_{2}\) * 12.3 Problem \(P_{3}\) ## 1 Introduction Lorentzian geometry is the mathematical foundation of the theory of relativity [4, 5, 6]. It differs from the Riemannian one in that here information can propagate along curves with velocity vectors from some sharp cone. Here, the natural problem is to find the curves that maximize the length-type functional along admissible curves. Therefore, an important problem is to describe the Lorentzian length maximizers for all pairs of points where the second point is reachable from the first one along an admissible curve. As far as we know, this problem has been fully investigated only in the simplest case of a left-invariant Lorentzian structure in \(\mathbb{R}^{n}\), for the Minkowski space \(\mathbb{R}_{1}^{n}\)[4]. This paper presents a description of Lorentzian length maximizers, distances and spheres for the next natural case -- for left-invariant Lorentzian structures on a unique connected simply connected non-Abelian two-dimensional Lie group. These results are obtained by methods of geometric control theory [1, 2]. Curiously, in these problems, the Lorentzian length maximizers do not exist for some reachable pairs of points, and the Lorentzian distance can be infinite at some points. In these problems, all extremal trajectories (satisfying the Pontryagin maximum principle) are optimal, that is, there are neither conjugate points nor cut points. Optimal trajectories are parametrized by elementary functions, as are spheres and distances. This work has the following structure. In Sec. 2 we recall necessary basic definitions of Lorentzian geometry. In Sec. 3 we describe the group of proper affine mappings of the real line \(\mathrm{Aff}_{+}(\mathbb{R})\) which bears the left-invariant Lorentzian problems stated in Sec. 4. We show in Sec. 
5 that these problems have constant curvature \(K\), thus are locally isometric to model Lorentzian spaces of constant curvature (2D Minkowski space for \(K=0\), de Sitter space for \(K>0\), anti-de Sitter space for \(K<0\)). In Sec. 6 we describe positive and negative time attainable sets of the corresponding control systems. Section 7 is devoted to the study of existence of Lorentzian length maximizers. In Sec. 8 we apply the Pontryagin maximum principle to the problems studied and parametrize geodesics. In Sec. 9 we prove that in fact all geodesics are optimal, and construct explicitly optimal synthesis. On the basis of these results in Sec. 10 we describe Lorentzian distance and spheres. In Sec. 11 we describe Lie algebras of infinitesimal isometries (Killing vector fields) and the connected component of identity of the Lie groups of isometries for the problems considered. Moreover, in the case \(K=0\) we construct explicitly an isometric embedding of \(\mathrm{Aff}_{+}(\mathbb{R})\) into a half-plane of the 2D Minkowski space. Finally, in Sec. 12 we specialize the results obtained to three model problems \(P_{1}\), \(P_{2}\), \(P_{3}\). ## 2 Lorentzian geometry A Lorentzian structure on a smooth manifold \(M\) is a nondegenerate quadratic form \(g\) of index \(1\). Lorentzian geometry attempts to transfer the rich theory of Riemannian geometry (in which the quadratic form \(g\) is positive definite) to the case of Lorentzian metric \(g\). Let us recall some basic definitions of Lorentzian geometry [4, 5]. A vector \(v\in T_{q}M\), \(q\in M\), is called timelike if \(g(v)<0\), spacelike if \(g(v)>0\) or \(v=0\), lightlike (or null) if \(g(v)=0\) and \(v\neq 0\), and nonspacelike if \(g(v)\leq 0\). A Lipschitzian curve in \(M\) is called timelike if it has timelike velocity vector a.e.; spacelike, lightlike and nonspacelike curves are defined similarly. A time orientation \(X_{0}\) is an arbitrary timelike vector field in \(M\). A nonspacelike vector \(v\in T_{q}M\) is future directed if \(g(v,X_{0}(q))<0\), and past directed if \(g(v,X_{0}(q))>0\). A future directed timelike curve \(q(t)\), \(t\in[0,t_{1}]\), is called arclength parametrized if \(g(\dot{q}(t),\dot{q}(t))\equiv-1\). Any future directed timelike curve can be parametrized by arclength, similarly to Riemannian geometry. The Lorentzian length of a nonspacelike curve \(\gamma\in\mathrm{Lip}([0,t_{1}],M)\) is \(l(\gamma)=\int_{0}^{t_{1}}|g(\dot{\gamma},\dot{\gamma})|^{1/2}dt.\) For points \(q_{0},q_{1}\in M\) denote by \(\Omega_{q_{0}q_{1}}\) the set of all future directed nonspacelike curves in \(M\) that connect \(q_{0}\) to \(q_{1}\). In the case \(\Omega_{q_{0}q_{1}}\neq\emptyset\) define the Lorentzian distance (time separation function) from the point \(q_{0}\) to the point \(q_{1}\) as \[d(q_{0},q_{1})=\sup\{l(\gamma)\mid\gamma\in\Omega_{q_{0}q_{1}}\}. \tag{2.1}\] And if \(\Omega_{q_{0}q_{1}}=\emptyset\), then by definition \(d(q_{0},q_{1})=0\). A future directed nonspacelike curve \(\gamma\) is called a Lorentzian length maximizer if it realizes the supremum in (2.1) between its endpoints \(\gamma(0)=q_{0}\), \(\gamma(t_{1})=q_{1}\). The causal future of a point \(q_{0}\in M\) is the set \(J^{+}(q_{0})\) of points \(q_{1}\in M\) for which there exists a future directed nonspacelike curve \(\gamma\) that connects \(q_{0}\) and \(q_{1}\). The causal past \(J^{-}(q_{0})\) is defined analogously in terms of past directed nonspacelike curves. 
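As a plain numerical illustration of these definitions (not part of the paper's constructions), the helper below classifies a tangent vector written in an orthonormal frame \(X_{1},\ldots,X_{n}\) with time orientation \(X_{0}=X_{1}\), so that \(g(v)=-v_{1}^{2}+v_{2}^{2}+\cdots+v_{n}^{2}\) and \(g(v,X_{0})=-v_{1}\).

```python
# Causal type of a tangent vector v = (v1, ..., vn) in an orthonormal frame,
# with time orientation X0 = X1 (an assumption made for this illustration).
def causal_type(v):
    g = -v[0] ** 2 + sum(c ** 2 for c in v[1:])
    if g < 0:
        kind = "timelike"
    elif g > 0 or all(c == 0 for c in v):
        kind = "spacelike"
    else:
        kind = "lightlike"
    if kind != "spacelike":
        # future directed iff g(v, X0) = -v1 < 0, i.e. v1 > 0
        kind += ", future directed" if v[0] > 0 else ", past directed"
    return kind

print(causal_type((2.0, 1.0)))   # timelike, future directed
print(causal_type((1.0, 1.0)))   # lightlike, future directed
print(causal_type((1.0, 2.0)))   # spacelike
```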
The chronological future \(I^{+}(q_{0})\) and chronological past \(I^{-}(q_{0})\) of a point \(q_{0}\in M\) are defined similarly via future directed and past directed timelike curves \(\gamma\). Let \(q_{0}\in M\), \(q_{1}\in J^{+}(q_{0})\). The search for Lorentzian length maximizers that connect \(q_{0}\) with \(q_{1}\) reduces to the search for future directed nonspacelike curves \(\gamma\) that solve the problem \[l(\gamma)\to\max,\qquad\gamma(0)=q_{0},\quad\gamma(t_{1})=q_{1}. \tag{2.2}\] A set of vector fields \(X_{1},\ldots,X_{n}\in\mathrm{Vec}(M)\), \(n=\dim M\), is an orthonormal frame for a Lorentzian structure \(g\) if for all \(q\in M\) \[g_{q}(X_{1},X_{1})=-1,\qquad g_{q}(X_{i},X_{i})=1,\quad i=2, \ldots,n,\] \[g_{q}(X_{i},X_{j})=0,\quad i\neq j.\] Assume that time orientation is defined by a timelike vector field \(X\in\mathrm{Vec}(M)\) for which \(g(X,X_{1})<0\) (e.g., \(X=X_{1}\)). Then the Lorentzian problem for the Lorentzian structure with the orthonormal frame \(X_{1},\ldots,X_{n}\) is stated as the following optimal control problem: \[\dot{q}=\sum_{i=1}^{n}u_{i}X_{i}(q),\qquad q\in M,\] \[u\in U=\left\{(u_{1},\ldots,u_{n})\in\mathbb{R}^{n}\mid u_{1} \geq\sqrt{u_{2}^{2}+\cdots+u_{n}^{2}}\right\},\] \[q(0)=q_{0},\qquad q(t_{1})=q_{1},\] \[l(q(\cdot))=\int_{0}^{t_{1}}\sqrt{u_{1}^{2}-u_{2}^{2}-\cdots-u_ {n}^{2}}\,dt\to\max.\] **Remark 1**.: _The Lorentzian length is preserved under monotone Lipschitzian time reparametrizations \(t(s)\), \(s\in[0,s_{1}]\). Thus if \(q(t)\), \(t\in[0,t_{1}]\), is a Lorentzian length maximizer, then so is any its reparametrization \(q(t(s))\), \(s\in[0,s_{1}]\)._ _In this paper we choose primarily the following parametrization of trajectories: the arclength parametrization \((u_{1}^{2}-u_{2}^{2}-\cdots-u_{n}^{2}\equiv 1)\) for timelike trajectories, and the parametrization with \(u_{1}(t)\equiv 1\) for future directed lightlike trajectories. Another reasonable choice is to set \(u_{1}(t)\equiv 1\) for all future directed nonspacelike trajectories._ **Remark 2**.: _In Lorentzian geometry, only nonspacelike curves have a physical meaning since according to the Relativity Theory information cannot move with a speed greater than the speed of light [4, 5, 6]. By this reason, in Lorentzian geometry typically only nonspacelike curves are studied._ _Geometrically, spacelike curves may well be considered. For \(2\)-dimensional Lorentzian manifolds there is not much geometric difference between timelike and spacelike curves since the first ones are obtained from the second ones by a change of Lorentzian form \(g\mapsto-g\), or, equivalently, by a change of controls \((u_{1},u_{2})\mapsto(u_{2},u_{1})\). Although, for Lorentzian manifolds of dimension greater than \(2\) the spacelike cone is nonconvex, so the optimization problem of finding the longest spacelike curve is not well-defined (optimal trajectories do not exist)._ _Notice also that curves \(q(\cdot)\) of variable causality \((\operatorname{sgn}g(\dot{q})\neq\operatorname{const})\) cannot be optimal: it is easy to show that the causal character of extremal trajectories is preserved._ **Remark 3**.: _The Lorentzian distance is defined by maximization (2.1), not by minimization as in Riemannian geometry. In Lorentzian geometry, the distance means physically the space-time interval between events in a spacetime [4, 5, 6]. 
On the other hand, the minimum of Lorentzian length is always zero (by virtue of lightlike trajectories), so the minimization problem here is not interesting._ _Notice also that the Lorentzian distance \(d\) is not a distance (metric) in the sense of metric spaces since \(d\) is not symmetric and satisfies the reverse triangle inequality._ **Example 1**.: The simplest example of Lorentzian geometry is the Minkowski space [4]. In the 2D case it is defined as \(\mathbb{R}^{2}_{1}=\mathbb{R}^{2}_{xy}\), \(g=-dx^{2}+dy^{2}\). The Lorentzian length maximizers are straight line segments along which \(g\leq 0\), the Lorentzian distance is \[d((x_{0},y_{0}),(x_{1},y_{1}))=\begin{cases}\sqrt{(x_{1}-x_{0})^{2}-(y_{1}-y_{ 0})^{2}}&\text{ for }(x_{1}-x_{0})^{2}-(y_{1}-y_{0})^{2}>0,\\ 0&\text{ for }(x_{1}-x_{0})^{2}-(y_{1}-y_{0})^{2}\leq 0,\end{cases}\] and positive radius Lorentzian spheres are arcs of hyperbolas with asymptotes parallel to lightlike curves \(x=\pm y\). See Fig. 1. This example has the following generalizations and variations, see [12], Sec. 5.2. Let \(\mathbb{R}^{n}_{s}\), \(0\leq s\leq n\), denote the vector space \(\mathbb{R}^{n}\) with the quadratic form \(g^{n}_{s}=-\sum_{i=1}^{s}dx_{i}^{2}+\sum_{j=s+1}^{n}dx_{j}^{2}\). **Example 2**.: Let \(n\geq 2\). The Minkowski space \(\mathbb{R}^{n}_{1}\) is a Lorentzian manifold with the Lorentzian form \(g^{n}_{1}\). It has constant curvature \(K=0\) ([12], Th. 2.4.3). **Example 3**.: Let \(n\geq 2\), and let \(r>0\). The de Sitter space is the Lorentzian manifold \[\mathbb{S}^{n}_{1}=\big{\{}x=(x_{1},\ldots,x_{n+1})\in\mathbb{R}^{n+1}_{1}\mid -x_{1}^{2}+x_{2}^{2}+\cdots+x_{n+1}^{2}=r^{2}\big{\}}\] with the Lorentzian form \(g=\left.g^{n+1}_{1}\right|_{\mathbb{S}^{n}_{1}}\). The space \(\mathbb{S}^{n}_{1}\) has constant curvature \(K=\frac{1}{r^{2}}\) ([12], Th. 2.4.4). Consider the Lorentzian manifold \[\mathbb{H}^{n}_{1}=\big{\{}x=(x_{1},\ldots,x_{n+1})\in\mathbb{R}^{n+1}_{2} \mid-x_{1}^{2}-x_{2}^{2}+x_{3}^{2}+\cdots+x_{n+1}^{2}=-r^{2}\big{\}}\] with the Lorentzian form \(g=\left.g^{n+1}_{2}\right|_{\mathbb{H}^{n}_{1}}\). The universal covering \(\widetilde{\mathbb{H}^{n}_{1}}\) of \(\mathbb{H}^{n}_{1}\) is called anti-de Sitter space. The spaces \(\mathbb{H}^{n}_{1}\) and \(\widetilde{\mathbb{H}^{n}_{1}}\) have constant curvature \(K=-\frac{1}{r^{2}}\) ([12], Th. 2.4.4). Let \(M_{j}\) be a Lorentzian manifold with Lorentzian distance \(d_{j}\), \(j=1,2\). A mapping \(i\,:\,M_{1}\to M_{2}\) is called an isometry if \(d_{1}(q,p)=d_{2}(i(q),i(p))\) for all \(q,p\in M_{1}\). **Example 4**.: The group of isometries of the Minkowski plane \(\mathbb{R}^{2}_{1}\) is generated by translations, hyperbolic rotations \(e^{\imath X}\), \(X=y\frac{\partial}{\partial\,x}+x\frac{\partial}{\partial\,y}\), and reflections \((x,y)\mapsto(x,-y)\). ## 3 Lobachevsky plane Proper affine functions on the line are mappings of the form \[a\mapsto y\cdot a+x,\qquad a\in\mathbb{R},\quad y>0,\quad x\in\mathbb{R}. \tag{3.1}\] Consider the group of such functions \(G=\operatorname{Aff}_{+}(\mathbb{R})=\{(x,y)\in\mathbb{R}^{2}\mid y>0\}\) with the group product induced by composition of functions (3.1): \[(x_{2},y_{2})\cdot(x_{1},y_{1})=(x_{2}+y_{2}x_{1},y_{2}y_{1}),\qquad(x_{i},y_{i })\in G\] and the identity element \(\mathrm{Id}=(0,1)\in G\). This group is a semi-direct product \(\mathrm{Aff}_{+}(\mathbb{R})=\mathbb{R}_{+}\rtimes\mathbb{R}_{+}\) \(G\) is a two-dimensional Lie group, connected and simply connected. 
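A short numerical sketch of the two objects just introduced: the flat Lorentzian distance of Example 1 (with the time orientation \(\partial/\partial x\) assumed, so that the causal future requires \(x_{1}-x_{0}>0\)) and the group operation of \(\mathrm{Aff}_{+}(\mathbb{R})\); the closed-form inverse is an easily verified assumption obtained by solving \((x,y)\cdot(x^{\prime},y^{\prime})=\mathrm{Id}\).

```python
import math

def minkowski_distance(q0, q1):
    # Lorentzian distance on R^2_1 with g = -dx^2 + dy^2 (Example 1); the
    # dx > 0 check encodes the assumed time orientation d/dx.
    dx, dy = q1[0] - q0[0], q1[1] - q0[1]
    s = dx * dx - dy * dy
    return math.sqrt(s) if (s > 0 and dx > 0) else 0.0

def aff_mul(p, q):
    # (x2, y2) . (x1, y1) = (x2 + y2*x1, y2*y1) on Aff_+(R) = {(x, y) : y > 0}
    return (p[0] + p[1] * q[0], p[1] * q[1])

def aff_inv(p):
    # assumed closed form, obtained from p . p_inv = Id = (0, 1)
    return (-p[0] / p[1], 1.0 / p[1])

Id = (0.0, 1.0)
p, q = (0.3, 2.0), (-1.0, 0.5)
assert aff_mul(p, Id) == p and aff_mul(Id, q) == q
x_inv, y_inv = aff_mul(p, aff_inv(p))
assert math.isclose(x_inv, 0.0, abs_tol=1e-12) and math.isclose(y_inv, 1.0)
print(minkowski_distance((0.0, 0.0), (2.0, 1.0)))   # sqrt(3): timelike separation
print(minkowski_distance((0.0, 0.0), (1.0, 2.0)))   # 0.0: not in the causal future
```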
The vector fields \(X_{1}=y\frac{\partial}{\partial x}\), \(X_{2}=y\frac{\partial}{\partial y}\) form a left-invariant frame on \(G\), thus the Lie algebra of \(G\) is \(\mathfrak{g}=\mathrm{span}(X_{1},X_{2})\). In view of the Lie bracket \([X_{2},X_{1}]=X_{1}\), \(\mathfrak{g}\) and \(G\) are solvable and non-Abelian. In fact, \(\mathfrak{g}\) is a unique solvable non-Abelian two-dimensional Lie algebra [7]. One-parameter subgroups in \(G\) are rays (or straight lines if \(u_{2}=0\)) \[u_{1}(y-1)=u_{2}x,\qquad(u_{1},u_{2})\neq(0,0),\quad(x,y)\in G,\] with the parametrization \[x=\frac{u_{1}}{u_{2}}(e^{u_{2}t}-1),\quad y=e^{u_{2}t},\qquad u _{2}\neq 0, \tag{3.2}\] \[x=u_{1}t,\quad y=1,\qquad u_{2}=0, \tag{3.3}\] see Fig. 2. Formulas (3.2), (3.3) for \(t=1\) describe the exponential mapping \[\exp\,:\,\mathfrak{g}\to G,\qquad u_{1}X_{1}+u_{2}X_{2}\mapsto(x,y)(1). \tag{3.4}\] Notice that left translations of one-parameter subgroups in \(G\) are also rays (or straight lines if \(u_{2}=0\)) since left translations in \(G\) are compositions of homotheties with parallel translations in \(\mathbb{R}_{x,y}^{2}\). **Remark 4**.: _Riemannian geometry on \(\mathrm{Aff}_{+}(\mathbb{R})\) with the orthonormal frame \(X_{1}\), \(X_{2}\) is the Lobachevsky (Gauss, Bolyai) non-Euclidean geometry (in Poincare's model in the upper half-plane) [8, 9]._ ## 4 Left-invariant Lorentzian problems on the Lobachevsky plane In this work we consider left-invariant Lorentzian problems on the Lie group \(G=\mathrm{Aff}_{+}(\mathbb{R})\). Such a problem is specified by an index \(1\) quadratic form \(g\) on the Lie algebra \(\mathfrak{g}\) and a timelike time orientation vector field \(X_{0}\in\mathfrak{g}\). A Lipschitzian curve \(q\,:\,[0,t_{1}]\to G\) is a Lorentzian length maximizer that connects the point \(\mathrm{Id}\) to a point \(q_{1}\in G\) iff it is a solution to the following optimal control problem: \[g(\dot{q}(t))\leq 0,\qquad\bar{g}(\dot{q}(t),X_{0}(q(t))<0, \tag{4.1}\] \[q(0)=\mathrm{Id},\qquad q(t_{1})=q_{1},\] (4.2) \[l=\int_{0}^{t_{1}}|g(\dot{q}(t))|^{1/2}dt\to\max, \tag{4.3}\] where \(\bar{g}\) is the bilinear form on \(\mathfrak{g}\) corresponding to the quadratic form \(g\). Let us decompose a vector \(\mathfrak{g}\ni v=u_{1}X_{1}+u_{2}X_{2}\), then the Lorentzian form \(g\) and the bilinear form \(\bar{g}\) are represented as \(g(v)=g(u_{1},u_{2})\), \(\bar{g}(v^{1},v^{2})=\bar{g}(v^{1}_{1},v^{2}_{2},v^{2}_{1},v^{2}_{2})\), where \(v^{i}=v^{1}_{1}X_{1}+v^{i}_{2}X_{2}\). Let \(X_{0}=v^{0}_{1}X_{1}+v^{0}_{2}X_{2}\), and denote the linear form \(g_{0}(u_{1},u_{2})=\bar{g}(v^{0}_{1},v^{0}_{2},u_{1},u_{2})\). Then the Lorentzian problem (4.1)-(4.3) reads as \[\dot{q}(t)=u_{1}X_{1}+u_{2}X_{2},\qquad q\in G,\quad u=(u_{1},u_{ 2})\in U, \tag{4.4}\] \[U=\{u\in\mathbb{R}^{2}\mid g(u)\leq 0,\ g_{0}(u)<0\},\] (4.5) \[q(0)=\mathrm{Id},\qquad q(t_{1})=q_{1},\] (4.6) \[l=\int_{0}^{t_{1}}|g(u)|^{1/2}dt\to\max. \tag{4.7}\] The Lorentzian quadratic form can be decomposed as a sum of squares \[g(u)=-(au_{1}+bu_{2})^{2}+(cu_{1}+du_{2})^{2}, \tag{4.8}\] \[A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{GL}(2,\mathbb{R}). \tag{4.9}\] Notice that the matrix \(A\) in (4.9) is not unique: it is determined up to the symmetries \[\varepsilon_{1}\ :\ (a,b,c,d)\mapsto(-a,-b,c,d),\qquad\qquad\qquad \varepsilon_{2}\ :\ (a,b,c,d)\mapsto(a,b,-c,-d).\] The inequality \(\left.g_{0}\right|_{U}=au_{1}+bu_{2}<0\) fixes signs of \(a\) and \(b\), thus killing the reflection \(\varepsilon_{1}\). 
If we further assume that \(|A|>0\) in (4.9), then the signs of \(c\) and \(d\) become fixed, thus \(\varepsilon_{2}\) is killed. Summing up, we have the following. **Lemma 1**.: _The space of left-invariant Lorentzian problems (4.4)-(4.7) is parametrized by matrices_ \[A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{GL}_{+}(2,\mathbb{R})=\{A\in\mathrm{GL}(2, \mathbb{R})\mid|A|>0\}.\] Given a problem (4.4)-(4.7) determined by a matrix \(A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{GL}_{+}(2,\mathbb{R})\), introduce new controls \[v_{1}=au_{1}+bu_{2},\qquad v_{2}=cu_{1}+du_{2},\] or, equivalently, \[u_{1}=\alpha v_{1}+\beta v_{2},\qquad u_{2}=\gamma v_{1}+\delta v_{2},\qquad \left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)=A^{-1}.\] Introduce further the vector fields \[Y_{1}=\alpha X_{1}+\gamma X_{2},\qquad Y_{2}=\beta X_{1}+\delta X_{2}.\] Then the problem (4.4)-(4.7) reads as \[\dot{q}=v_{1}Y_{1}+v_{2}Y_{2},\qquad q\in G, \tag{4.10}\] \[g=-v^{2}_{1}+v^{2}_{2}\leq 0,\qquad g_{0}=-v_{1}<0,\] (4.11) \[q(0)=\mathrm{Id},\qquad q(t_{1})=q_{1},\] (4.12) \[l=\int_{0}^{t_{1}}\sqrt{v^{2}_{1}-v^{2}_{2}}dt\to\max. \tag{4.13}\] The Lorentzian form factorizes as \[g=l_{1}l_{2},\qquad l_{1}(u_{1},u_{2})=(c-a)u_{1}+(d-b)u_{2},\qquad l_{2}(u_{1},u_ {2})=(c+a)u_{1}+(d+b)u_{2}.\] Introduce the corresponding functions on \(G\): \[\lambda_{1}(x,y)=\operatorname{grad}l_{1}\cdot\left(\begin{array}{c}x\\ y-1\end{array}\right)=(c-a)x+(d-b)(y-1),\] \[\lambda_{2}(x,y)=\operatorname{grad}l_{2}\cdot\left(\begin{array}{c}x\\ y-1\end{array}\right)=(c+a)x+(d+b)(y-1).\] **Remark 5**.: _By virtue of the change of variables \((u_{1},u_{2})\mapsto(-u_{1},-u_{2})\), \(A\mapsto-A\), \(t\mapsto-t\), we can get_ \[a\geq 0\text{ or, equivalently, }\delta\geq 0, \tag{4.14}\] _which we assume in the sequel._ **Example 5**.: As typical examples of Lorentzian problems (4.10)-(4.13), we consider in Sec. 12 the following model problems \(P_{i}\), \(i=1,2,3\): \[P_{1}\colon\, A=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\text{, }U=\{u=(u_{1},u_{2})\in\mathbb{R}^{2}\mid-u_{1}^{2}+u_ {2}^{2}\leq 0,\ -u_{1}\leq 0\}\text{, }g=-u_{1}^{2}+u_{2}^{2}\text{, }g_{0}=-u_{1},\] \[P_{2}\colon\, A=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\text{, }U=\{u=(u_{1},u_{2})\in\mathbb{R}^{2}\mid-u_{2}^{2}+u_ {1}^{2}\leq 0,\ -u_{2}\leq 0\}\text{, }g=-u_{2}^{2}+u_{1}^{2}\text{, }g_{0}=-u_{2},\] \[P_{3}\colon\, A=\left(\begin{array}{cc}1/2&1/2\\ -1/2&1/2\end{array}\right)\text{, }U=\{u=(u_{1},u_{2})\in\mathbb{R}^{2}\mid u _{1}\geq 0,\ u_{2}\geq 0\}\text{, }g=-u_{1}u_{2}\text{, }g_{0}=-(u_{1}+u_{2})/2.\] See the sets of control parameters \(U\) for these problems resp. in Figs. 4-5. We denote \(J^{+}=J^{+}(\operatorname{Id})\) and \(d(q)=d(\operatorname{Id},q)\), \(q\in G\). ## 5 Curvature In this section we show that each left-invariant Lorentzian structure on the group \(G=\mathrm{Aff}_{+}(\mathbb{R})\) has constant sectional curvature \(K\), thus it is locally isometric to the 2D Minkowski space (if \(K=0\)), to a 2D de Sitter space (if \(K>0\)), or to a 2D anti-de Sitter space (if \(K<0\)). ### Levi-Civita connection and sectional curvature of Lorentzian manifolds Here we recall some standard facts of Lorentzian (in fact, pseudo-Riemannian) geometry, following [4, 12, 13]. A connection \(D\) on a smooth manifold \(M\) is a mapping \(D\,:\,(\mathrm{Vec}(M))^{2}\to\mathrm{Vec}(M)\) such that 1. \(D_{V}W\) is \(C^{\infty}(M)\)-linear in \(V\), 2. 
\(D_{V}W\) is \(\mathbb{R}\)-linear in \(W\), 3. \(D_{V}(fW)=(Vf)W+fD_{V}W\) for \(f\in C^{\infty}(M)\). The vector field \(D_{V}W\) is called the covariant derivative of \(W\) w.r.t. \(V\) for the connection \(D\). **Theorem 1** ([13], Th. 11).: _On a Lorentzian manifold \((M,g)\) there is a unique connection \(D\) such that_ 1. \([V,W]=D_{V}W-D_{W}V\)_, and_ 2. \(Xg(V,W)=g(D_{X}V,W)+g(V,D_{X}W)\)_,_ _for all \(X,V,W\in\mathrm{Vec}(M)\). \(D\) is called the Levi-Civita connection on \(M\), and is characterized by the Koszul formula_ \[2g(D_{V}W,X)=Vg(W,X)+Wg(X,V)-Xg(V,W)-g(V,[W,X])+g(W,[X,V])+g(X,[V,W]).\] Let \((M,g)\) be a Lorentzian manifold with Levi-Civita connection \(D\). The mapping \(R\,:\,(\mathrm{Vec}(M))^{3}\to\mathrm{Vec}(M)\) given by \(R_{XY}Z=D_{[X,Y]}Z-[D_{X},D_{Y}]Z\) is called the Riemannian curvature tensor of \((M,g)\). Let \(q\in M\), and let \(P\) be a 2D plane in \(T_{q}M\). For vectors \(v,w\in T_{q}M\), define \(Q(v,w)=g(v,v)g(w,w)-(g(v,w))^{2}\). A plane \(P\) is called nondegenerate if \(Q(v,w)\neq 0\) for some (hence every) basis \(v,w\) for \(P\). **Lemma 2** ([13], Lemma 39).: _Let \(P\subset T_{q}M\) be a nondegenerate plane. The number_ \[K(q,P)=\frac{g(R_{vw}v,w)}{Q(v,w)} \tag{5.1}\] _is independent of the choice of basis \(v,w\) in \(P\), and is called the sectional curvature of the plane section \(P\)._

Figure 5: The set \(U\) for the problem \(P_{3}\)

A Lorentzian manifold which has the same sectional curvature on all nondegenerate sections is said to have constant curvature. **Theorem 2** ([12], Theorem 2.4.1).: _Let \((M,g)\) be a Lorentzian manifold of dimension \(n\geq 2\), and let \(K\in\mathbb{R}\). Then the following conditions are equivalent:_ 1. \(M\) _has constant curvature_ \(K\)_,_ 2. _for any_ \(q\in M\) _there exists a neighbourhood of_ \(q\) _isometric to an open subset of de Sitter space_ \(\mathbb{S}_{1}^{n}\) _for_ \(K>0\)_, Minkowski space_ \(\mathbb{R}_{1}^{n}\) _for_ \(K=0\)_, anti-de Sitter space_ \(\widetilde{\mathbb{H}}_{1}^{n}\) _for_ \(K<0\)_._ ### Sectional curvature of \(\mathrm{Aff}_{+}(\mathbb{R})\) In this subsection we compute the Levi-Civita connection and the sectional curvature of left-invariant Lorentzian structures on the group \(G=\mathrm{Aff}_{+}(\mathbb{R})\). **Theorem 3**.: _The Levi-Civita connection \(D\) of a left-invariant Lorentzian structure \(g\) on the group \(G=\mathrm{Aff}_{+}(\mathbb{R})\) is given as follows:_ \[D_{X_{i}}X_{j}=\mu_{ij}X_{1}+\nu_{ij}X_{2},\qquad i,j=1,2,\] \[(\mu_{11},\nu_{11})=-\frac{1}{|A|^{2}}(-g_{12}g_{11},g_{11}^{2}),\qquad(\mu_{12},\nu_{12})=-\frac{1}{|A|^{2}}(-g_{22}g_{11},g_{12}g_{11}),\] \[(\mu_{21},\nu_{21})=-\frac{1}{|A|^{2}}(-g_{12}^{2},g_{11}g_{12}),\qquad(\mu_{22},\nu_{22})=-\frac{1}{|A|^{2}}(-g_{22}g_{12},g_{12}^{2}),\] \[g_{11}=g(X_{1})=c^{2}-a^{2},\qquad g_{12}=g(X_{1},X_{2})=cd-ab,\qquad g_{22}=g(X_{2})=d^{2}-b^{2}.\] Proof.: Immediate computation via Koszul formula. **Theorem 4**.: _A left-invariant Lorentzian structure \(g\) on the group \(G=\mathrm{Aff}_{+}(\mathbb{R})\) has constant curvature \(K=\frac{g(X_{1})}{|A|^{2}}\)._ Proof.: Immediate computation via formula (5.1) for \(P=T_{q}G\), \(v=X_{1}(q)\), \(w=X_{2}(q)\), \(q\in G\).
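To connect Theorem 4 with the model problems of Example 5, the snippet below evaluates \(K=g(X_{1})/|A|^{2}=(c^{2}-a^{2})/(\det A)^{2}\) for \(P_{1}\), \(P_{2}\), \(P_{3}\); it is only a numerical restatement of the stated formula, with no further assumptions.

```python
# Sectional curvature K = g(X1)/|A|^2 = (c^2 - a^2)/(det A)^2 from Theorem 4,
# evaluated for the model problems P1, P2, P3 of Example 5.
def curvature(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return (c * c - a * a) / det ** 2

problems = {
    "P1": ((1.0, 0.0), (0.0, 1.0)),
    "P2": ((0.0, 1.0), (-1.0, 0.0)),
    "P3": ((0.5, 0.5), (-0.5, 0.5)),
}
for name, A in problems.items():
    print(name, curvature(A))
# Expected output: P1 -> -1.0, P2 -> 1.0, P3 -> 0.0, matching the three
# constant-curvature model geometries of Theorem 2 (anti-de Sitter, de Sitter, Minkowski).
```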
**Corollary 1**.: _A left-invariant Lorentzian structure \(g\) on the group \(G=\mathrm{Aff}_{+}(\mathbb{R})\) is locally isometric to the \(2D\) Minkowski space \(\mathbb{R}_{1}^{2}\) (for \(K=0\)), de Sitter space \(\mathbb{S}_{1}^{2}\) (for \(K>0\)), or anti-de Sitter space \(\widetilde{\mathbb{H}}_{1}^{2}\) (for \(K<0\))._ **Remark 6**.: _For the case \(K=0\) we construct an explicit isometry of the group \(G\) to a half-plane of \(\mathbb{R}_{1}^{2}\) in Th. 20._ ## 6 Attainable sets Denote the set of admissible velocities \(\mathcal{U}=\{u_{1}X_{1}+u_{2}X_{2}\mid(u_{1},u_{2})\in U\}\subset\mathfrak{g}\). **Theorem 5**.: _Let \(q_{0}\in G\), then_ \[J^{+}(q_{0})=q_{0}\exp(\mathcal{U})=\{q\in G\mid\lambda_{1}(q)\leq\lambda_{1}(q_{0}),\ \lambda_{2}(q)\geq\lambda_{2}(q_{0})\}, \tag{6.1}\] \[J^{-}(q_{0})=q_{0}\exp(-\mathcal{U})=\{q\in G\mid\lambda_{1}(q)\geq\lambda_{1}(q_{0}),\ \lambda_{2}(q)\leq\lambda_{2}(q_{0})\}, \tag{6.2}\] _where \(\exp\,:\mathfrak{g}\to G\) is the exponential mapping (3.4) of the Lie group \(G\)._ _Moreover, \(I^{+}(q_{0})=\{q_{0}\}\cup\mathrm{int}\,J^{+}(q_{0})\) and \(I^{-}(q_{0})=\{q_{0}\}\cup\mathrm{int}\,J^{-}(q_{0})\)._ Proof.: By left-invariance of the problem, we need to prove equalities (6.1), (6.2) in the case \(q_{0}=\mathrm{Id}\) only. Let us show that \[J^{+}=\exp(\mathcal{U})=\{q\in G\mid\lambda_{1}(q)\leq 0\leq\lambda_{2}(q)\}. \tag{6.3}\] Future oriented nonspacelike one-parameter semigroups \[\{\exp(t(u_{1},u_{2}))\mid t\geq 0\}=\{(x,y)\in G\mid u_{1}(y-1)=u_{2}x\},\qquad l_{1}(u_{1},u_{2})\leq 0\leq l_{2}(u_{1},u_{2}),\] fill the set \(\exp(\mathcal{U})\), thus \(J^{+}\supset\exp(\mathcal{U})\). On the other hand, admissible trajectories of the system (4.10), (4.11) at the boundary of \(\exp(\mathcal{U})\) are tangent to \(\partial\exp(\mathcal{U})\) or are directed inside \(\exp(\mathcal{U})\). Thus \(J^{+}\subset\exp(\mathcal{U})\), and equality (6.3) follows. A similar equality for \(J^{-}(\mathrm{Id})\) is proved analogously. The expressions for \(I^{\pm}(q_{0})\) are straightforward. See the set \(J^{+}\) for the problems \(P_{1}\), \(P_{2}\), \(P_{3}\) in Figs. 8, 14, 19 respectively. ## 7 Existence of Lorentzian length maximizers ### Existence of length maximizers for globally hyperbolic Lorentzian structures In order to study existence of Lorentzian length maximizers we need some facts from Lorentzian geometry [4]. Let \(M\) be a Lorentzian manifold. An open subset \(O\subset M\) is called causally convex if the intersection of each nonspacelike curve with \(O\) is connected. \(M\) is called strongly causally convex if any point in \(M\) has arbitrarily small causally convex neighbourhoods. Finally, a strongly causally convex Lorentzian manifold \(M\) is called globally hyperbolic if \[J^{+}(p)\cap J^{-}(q)\text{ is compact for any }p,q\in M. \tag{7.1}\] **Theorem 6** (Th. 6.1 [4]).: _If a Lorentzian manifold \(M\) is globally hyperbolic, then any points \(q_{0}\in M\), \(q_{1}\in J^{+}(q_{0})\) can be connected by a Lorentzian length maximizer._ **Theorem 7**.: _A Lorentzian structure \((g,X_{0})\) on \(\mathrm{Aff}_{+}(\mathbb{R})\) is globally hyperbolic iff \(K\geq 0\)._ Proof.: First, all left-invariant Lorentzian structures on \(\mathrm{Aff}_{+}(\mathbb{R})\) are strongly causally convex. Indeed, \(\dot{x}=u_{1}y\) or \(\dot{y}=u_{2}y\) preserves sign and is separated from zero for \((x,y)\in O\), \(u_{1}^{2}+u_{2}^{2}\geq C>0\), \(g(u)\leq 0\), \(g_{0}(u)<0\). So we need to check condition (7.1) only. It follows from Th.
5 that for \(K\geq 0\) the intersection in (7.1) is compact (it is either a parallelogram, a segment, or the empty set). The same theorem implies that for \(K<0\) there exist points \(q\in G\) such that the intersection \(J^{+}\cap J^{-}(q)\) contains points from the absolute \(\{y=0\}\) in its closure, thus this intersection is not compact. **Theorem 8**.: _Let \(K\geq 0\). Then for any points \(q_{0}\in G\), \(q_{1}\in J^{+}(q_{0})\) there exists a Lorentzian length maximizer from \(q_{0}\) to \(q_{1}\)._ Proof.: Follows from Theorems 6, 7. ### Existence of length maximizers in the case \(K<0\) In this subsection we consider the remaining case \(K<0\). Introduce the decomposition \[J^{+}=D\sqcup F\sqcup E, \tag{7.2}\] \[D=\{q\in G\mid\lambda_{1}(q)\leq 0\leq\lambda_{2}(q),\ \lambda_{3}(q)>0\},\qquad F=\{q\in G\mid\lambda_{3}(q)=0\},\qquad E=\{q\in G \mid\lambda_{3}(q)<0\},\] \[\lambda_{3}(q)=\lambda_{1}(q)-\lambda_{1}(B),\qquad B=\left( \frac{d+b}{c+a},0\right)\in\mathbb{R}^{2},\] so that the lines \(\{q\in\mathbb{R}^{2}\mid\lambda_{2}(q)=0\}\) and the absolute \(\{y=0\}\) intersect at the point \(B\in\mathbb{R}^{2}_{x,y}\setminus G\), see Fig. 6 for the problem \(P_{1}\). **Lemma 3**.: _The restriction of a negative curvature Lorentzian structure \((g,X_{0})\) on \(\mathrm{Aff}_{+}(\mathbb{R})\) to \(D\) is globally hyperbolic._ Proof.: We need to check only condition (7.1). Let \(q_{0},q_{1}\in D\). By virtue of Th. 5, the intersection \(J^{+}(q_{0})\cap J^{-}(q_{1})\) is either a parallelogram (if \(q_{1}\in\mathrm{int}\,J^{+}(q_{0})\)) or a segment (if \(q_{1}\in\partial J^{+}(q_{0})\)) or the empty set (if \(q_{1}\notin J^{+}(q_{0})\)), thus it is compact. **Theorem 9**.: _Let \(K<0\), and let \(q_{0}=\mathrm{Id},\,q_{1}\in J^{+}\)._ 1. _If_ \(q_{1}\in D\)_, then there exists a Lorentzian length maximizer from_ \(q_{0}\) _to_ \(q_{1}\) 2. _If_ \(q_{1}\in E\)_, then there exist arbitrarily long trajectories from_ \(q_{0}\) _to_ \(q_{1}\)_. Thus_ \(d(q_{1})=+\infty\) _and there are no Lorentzian length maximizers from_ \(q_{0}\) _to_ \(q_{1}\)_._ Proof.: Item (1) follows from Th. 6 and Lemma 3. Item (2). Take any point \(q_{1}=(x_{1},y_{1})\in E\). Denote by \(C\in\mathbb{R}^{2}_{x,y}\setminus G\), \(C\neq B\), the intersection point of the lines \(\{y=0\}\) and \(\{q\in\mathbb{R}^{2}\mid\lambda_{1}(q)=\lambda_{1}(q_{1})\}\), see Fig. 6 for the problem \(P_{1}\). Notice that \(x(C)>x(B)\). Take any \(\varepsilon\in(0,1)\). Denote by \(B_{\varepsilon}\in G\) the intersection point of the lines \(\{y=\varepsilon\}\) and \(\{q\in\mathbb{R}^{2}\mid\lambda_{2}(q)=0\}\), and by \(C_{\varepsilon}\in G\) the intersection point of the lines \(\{y=\varepsilon\}\) and \(\{q\in\mathbb{R}^{2}\mid\lambda_{1}(q)=\lambda_{1}(q_{1})\}\). The broken line \(q_{\varepsilon}=q_{0}B_{\varepsilon}C_{\varepsilon}q_{1}\) is an admissible trajectory of system (4.4), (4.5) with the cost given by the segment \(B_{\varepsilon}C_{\varepsilon}\) only: \(J(q_{\varepsilon})=\int_{t(B_{\varepsilon})}^{t(C_{\varepsilon})}\sqrt{|g(u_ {1},0)|}dt\). For \(u_{1}=1\) we get \(g(1,0)=c^{2}-a^{2}<0\), \(x(t)=x_{0}+\varepsilon t\), \[t(C_{\varepsilon})-t(B_{\varepsilon})=\frac{x(C_{\varepsilon})-x(B_{ \varepsilon})}{\varepsilon}=\frac{x(C)-x(B)+o(1)}{\varepsilon}\to+\infty\quad \text{ as }\quad\varepsilon\to+0,\] thus \(J(q_{\varepsilon})=\sqrt{c^{2}-a^{2}}\ \frac{x(C)-x(B)+o(1)}{\varepsilon} \to+\infty\) as \(\varepsilon\to+0.\) So \(d(q_{1})=+\infty.\) **Remark 7**.: _We prove below in Th. 
13 that for any point \(q_{1}\in F\) there is no Lorentzian length maximizer connecting_ Id _to \(q_{1}\)_._ ## 8 Geodesics ### Pontryagin maximum principle We apply Pontryagin maximum principle (PMP) [1, 2, 3] to optimal control problem (4.4)-(4.7). The Hamiltonian of PMP reads \[h_{v}^{\nu}(\lambda)=v_{1}h_{1}(\lambda)+v_{2}h_{2}(\lambda)- \nu\sqrt{v_{1}^{2}-v_{2}^{2}},\qquad\lambda\in T^{*}G,\qquad\nu\in\mathbb{R},\] \[h_{i}(\lambda)=\langle\lambda,Y_{i}\rangle,\quad i=1,2.\] Since \([Y_{1},Y_{2}]=-\delta Y_{1}+\gamma Y_{2}\), then the Hamiltonian system with the Hamiltonian \(h_{v}^{\nu}\) reads \[\dot{h}_{1}=-v_{2}(-\delta h_{1}+\gamma h_{2}), \tag{8.1}\] \[\dot{h}_{2}=v_{1}(-\delta h_{1}+\gamma h_{2}),\] (8.2) \[\dot{q}=v_{1}Y_{1}+v_{2}Y_{2}.\] #### 8.1.1 Abnormal case Obvious computations in the abnormal case \(\nu=0\) give the following. **Proposition 1**.: _Abnormal extremal trajectories are Lipschitzian reparametrizations of lightlike trajectories:_ \[v_{1}=\pm v_{2}=1,\qquad u_{1}=\alpha\pm\beta,\ u_{2}=\gamma\pm\delta,\] \[q(t)=\exp(t(Y_{1}\pm Y_{2}))=\exp(t(u_{1}X_{1}+u_{2}X_{2})),\] _these are one-parameter subgroups (3.2), (3.3)._ #### 8.1.2 Normal case Now consider the normal case \(\nu=-1\). The maximality condition of PMP \[h=v_{1}h_{1}+v_{2}h_{2}+\sqrt{v_{1}^{2}-v_{2}^{2}}\to\max_{v_{1}\geq|v_{2}|} \tag{8.3}\] yields \(h_{1}^{2}-h_{2}^{2}=v_{1}^{2}-v_{2}^{2}\equiv 1\). Introduce the hyperbolic coordinates \[v_{1}=\cosh\varphi,\quad v_{2}=\sinh\varphi,\qquad\varphi\in \mathbb{R},\] \[h_{1}=-\cosh\psi,\quad h_{2}=\sinh\psi,\qquad\psi\in\mathbb{R}.\] Then the maximality condition (8.3) reads \(h=-\cosh(\varphi-\psi)+1\to\max\), whence \(\varphi=\psi\). Thus the maximized Hamiltonian of PMP reads \(H=\frac{-h_{1}^{2}+h_{2}^{2}}{2}\). Then the vertical subsystem (8.1), (8.2) of the Hamiltonian system of PMP reduces to the ODE \(\dot{\psi}=\delta\sinh\psi+\gamma\cosh\psi\). Summing up, we have the following description of arclength-parametrized (\(g=-v_{1}^{2}+v_{2}^{2}\equiv 1\)) normal extremals. **Proposition 2**.: _Arclength-parametrized normal extremals satisfy the normal Hamiltonian system_ \[\dot{\lambda}=\vec{H}(\lambda),\qquad\lambda\in T^{*}G,\] \[H(\lambda)=\frac{-h_{1}^{2}(\lambda)+h_{2}^{2}(\lambda)}{2} \equiv\frac{1}{2},\qquad h_{1}(\lambda)<0,\] _in coordinates:_ \[\dot{\psi}=\delta\cosh\psi+\gamma\sinh\psi, \tag{8.4}\] \[\dot{q}=\cosh\psi\,Y_{1}+\sinh\psi\,Y_{2}. \tag{8.5}\] Normal extremals are parametrized by covectors \(\lambda_{0}\in C=T^{*}_{\rm Id}G\cap\{H(\lambda)=1/2,\ h_{1}(\lambda)<0\}.\) They are given by the Lorentzian exponential mapping \[\operatorname{Exp}\,:\,C\times\mathbb{R}_{+}\to G,\qquad(\lambda_{0},t)\mapsto q (t)=\pi\circ e^{t\vec{H}}(\lambda_{0}), \tag{8.6}\] where \(\vec{H}\) is the Hamiltonian vector field on \(T^{*}G\) with the Hamiltonian \(H\), \(e^{t\vec{H}}\,:\,G\to G\) is the flow of this vector field, and \(\pi\,:\,T^{*}G\to G\), \(T^{*}_{q}G\ni\lambda\mapsto q\in G\), is the canonical projection of the cotangent bundle. ### Parameterization of geodesics We integrate ODEs (8.4), (8.5) in the case \(\delta\geq 0\), see (4.14). First we integrate the vertical subsystem (8.4): \[\dot{\psi}=\delta\cosh\psi+\gamma\sinh\psi,\qquad\psi(0)=\psi_{0},\qquad\delta \geq 0. \tag{8.7}\] **Proposition 3**.: _Cauchy problem (8.7) has the following solutions._ 1. 
_If_ \(K<0\)_, then_ \[\psi(t)=\mu(t)-\theta,\] (8.8) \[\delta=\Delta\cosh\theta,\quad\gamma=\Delta\sinh\theta,\qquad \Delta=\sqrt{\delta^{2}-\gamma^{2}},\] (8.9) \[\mu(t)=\operatorname{arsinh}\tan\tau,\qquad\tau=\sigma+\rho,\] (8.10) \[\rho=\arctan\sinh(\psi_{0}+\theta)\in\left(-\frac{\pi}{2},\frac{ \pi}{2}\right),\] (8.11) \[\sigma=\Delta t\in\left(-\frac{\pi}{2}-\rho,\frac{\pi}{2}-\rho \right).\] (8.12) 2. _If_ \(K>0\)_, then_ \[\psi(t)=\mu(t)-\theta,\] \[\gamma=s_{1}\Delta\cosh\theta,\quad\delta=s_{1}\Delta\sinh\theta,\qquad\Delta=\sqrt{\gamma^{2}-\delta^{2}},\qquad s_{1}=\operatorname{sgn}\gamma.\] (8.13) (2.1) _If_ \(\psi_{0}+\theta=0\)_, then_ \[\mu(t)=\operatorname{arcosh}\coth\tau,\qquad\tau=\rho-\sigma>0,\] (8.14) \[\sigma=s_{1}\Delta t<\rho,\] (8.15) \[\rho=\operatorname{artanh}\cosh(\psi_{0}+\theta).\] (8.16) 3. _If_ \(K=0\)_, then_ \[\psi(t)=s_{1}\mu(t),\] \[\mu(t)=-\ln\tau,\qquad\tau=\rho-\gamma t>0,\] \[s_{1}=\operatorname{sgn}\gamma,\qquad\rho=e^{-s_{1}\psi_{0}}.\] Proof.: (1) Let \(K<0\), \(\delta>0\). Introduce variables \(\Delta\), \(\theta\) according to (8.9), \(\mu\) according to (8.8), and \(\sigma\) according to (8.12). Then Cauchy problem (8.7) transforms to \[\frac{d\,\mu}{d\,\sigma}=\cosh\mu,\qquad\mu(0)=\mu_{0}=\psi_{0}+\theta,\] which has solution (8.10) by separation of variables. Cases (2), (3) are considered similarly. Now we integrate the horizontal ODE (8.5) of the Hamiltonian system for normal extremals: \[\dot{x}=yk(\psi),\qquad k(\psi)=\alpha\cosh\psi+\beta\sinh\psi, \qquad x(0)=0, \tag{8.17}\] \[\dot{y}=yl(\psi),\qquad l(\psi)=\gamma\cosh\psi+\delta\sinh\psi, \qquad y(0)=1. \tag{8.18}\] **Proposition 4**.: _Cauchy problem (8.17), (8.18) has the following solution._ 1. _If_ \(K<0\)_, then_ \[x(t)=\cos\rho\left(\lambda(\tan\tau-\tan\rho)+\nu\left(\frac{1}{ \cos\tau}-\frac{1}{\cos\rho}\right)\right),\] (8.19) \[y(t)=\frac{\cos\rho}{\cos\tau},\] (8.20) \[\lambda=\frac{\alpha\delta-\beta\gamma}{\Delta^{2}},\qquad\nu= \frac{\beta\delta-\alpha\gamma}{\Delta^{2}},\] (8.21) _where_ \(\rho\)_,_ \(\tau\)_,_ \(\Delta\) _are defined by (_8.9_)-(_8.11_). The curve_ \((x(t),y(t))\) _is an arc of a hyperbola_ \(y^{2}-(w+\sin\rho)^{2}=\cos^{2}\rho\)_, where_ \(w=\frac{x-\nu(y-1)}{\lambda}\) 2. _Let_ \(K>0\)_._ 1. _If_ \(\psi_{0}+\theta=0\)_, then_ \[x(t)=-\nu(e^{\sigma}-1),\] \[y(t)=e^{\sigma},\] \[\sigma=s_{1}\Delta t,\qquad s_{1}=\operatorname{sgn}\gamma,\qquad \Delta=\sqrt{\gamma^{2}-\delta^{2}}.\] _The curve_ \((x(t),y(t))\) _is a line_ \(x+\nu(y-1)=0\)_._ 2. _If_ \(\psi_{0}+\theta\neq 0\)_, then_ \[x(t)=\sinh\rho\left(\nu\left(\frac{1}{\sinh\rho}-\frac{1}{\sinh \tau}\right)+s_{2}\lambda(\coth\tau-\coth\rho)\right),\] \[y(t)=\frac{\sinh\rho}{\sinh\tau},\] \[s_{2}=\operatorname{sgn}\mu_{0},\] _where_ \(\rho\)_,_ \(\tau\)_,_ \(\Delta\) _are defined by_ (8.13)_-(8.16)_. The curve_ \((x(t),y(t))\) _is an arc of a hyperbola_ \((s_{2}w+\cosh\rho)^{2}-y^{2}=\sinh^{2}\rho\)_, where_ \(w=\frac{x+\nu(y-1)}{\lambda}\)_._ 3. 
_If_ \(K=0\)_, then_ \[x(t)=\rho\left(f(\tau-\rho)+g\left(\frac{1}{\rho}-\frac{1}{\tau} \right)\right),\] \[y(t)=\frac{\rho}{\tau},\] \[f=-\frac{\alpha-s_{1}\beta}{2\gamma},\qquad g=-\frac{\alpha+s_{ 1}\beta}{2\gamma},\qquad s_{1}=\operatorname{sgn}\gamma,\] \[\tau=\rho-\gamma t,\qquad\rho=e^{-s_{1}\psi_{0}}.\] _The curve_ \((x(t),y(t))\) _is an arc of a hyperbola_ \(w=\rho^{2}\left(\frac{1}{y}-1\right)\)_, where_ \(w=\frac{x+g(y-1)}{f}\)_._ Proof.: Cauchy problem (8.17), (8.18) integrates as \[x(t)=K(t)=\int_{0}^{t}k(s)e^{L(s)}ds, \tag{8.22}\] \[y(t)=e^{L(t)},\qquad L(t)=\int_{0}^{t}l(s)ds. \tag{8.23}\] (1) Let \(K<0\), \(\delta>0\). By item (1) of Propos. 3, \[\cosh\psi=\frac{1}{\Delta}\left(\frac{\delta}{\cos\tau}-\gamma \tan\tau\right),\qquad\sinh\psi=\frac{1}{\Delta}\left(\delta\tan\tau-\frac{ \gamma}{\cos\tau}\right),\] \[k=\Delta\tan\tau,\qquad l=\Delta\left(\frac{\lambda}{\cos\tau}+ \nu\tan\tau\right), \tag{8.24}\] and formulas (8.19), (8.20) follow from (8.22)-(8.24). (2), (3) The cases \(K\geq 0\) are treated similarly. ### Geodesic completeness Denote the maximal domain of a solution \(\lambda_{t}\) to a Cauchy problem \(\dot{\lambda}=\vec{H}(\lambda)\), \(\lambda(0)=\lambda_{0}\in C\) as \((t_{\min}(\lambda_{0}),t_{\max}(\lambda_{0}))\ni 0\). We obtain the following explicit description of this domain from Propositions 3 and 4. **Corollary 2**.: 1. _If_ \(K<0\)_, then_ \(t_{\min}=-\dfrac{\pi/2+\rho}{\Delta}\)_,_ \(t_{\max}=\dfrac{\pi/2-\rho}{\Delta}\)_._ 2. _If_ \(K>0\)_, then:_ 1. _if_ \(\psi_{0}+\theta=0\)_, then_ \(t_{\min}=-\infty\)_,_ \(t_{\max}=+\infty\)_,_ 2. _if_ \(\psi_{0}+\theta\neq 0\)_, then_ \(\begin{cases}t_{\min}=-\infty,\qquad t_{\max}=\dfrac{\rho}{\Delta}&\text{ for }\gamma>0,\\ t_{\min}=-\dfrac{\rho}{\Delta},\qquad t_{\max}=+\infty&\text{ for }\gamma<0.\end{cases}\)__ 3. _If_ \(K=0\)_, then_ \(\begin{cases}t_{\min}=-\infty,\qquad t_{\max}=\dfrac{\rho}{\gamma}&\text{ for }\gamma>0,\\ t_{\min}=\dfrac{\rho}{\gamma},\qquad t_{\max}=+\infty&\text{ for }\gamma<0.\end{cases}\)__ We recall standard definitions of Lorentzian geometry related to geodesic completeness [4]. A timelike arclength-parametrized geodesic \(q(t)\) in a Lorentzian manifold is called complete if it can be extended to be defined for \(-\infty<t<+\infty\), otherwise it is called incomplete. Future and past complete (incomplete) geodesics are defined similarly. A Lorentzian manifold \(M\) is called timelike geodesically complete if all timelike arclength-parametrized geodesics are complete, otherwise \(M\) is called timelike geodesically incomplete. Future and past timelike geodesically complete (incomplete) Lorentzian manifolds are defined similarly. Now Corollary 2 implies the following. **Corollary 3**.: _If \(K<0\), then \(\operatorname{Aff}_{+}(\mathbb{R})\) is both future and past timelike geodesically incomplete._ _Let \(K\geq 0\). If \(\gamma>0\), then \(\operatorname{Aff}_{+}(\mathbb{R})\) is past timelike geodesically complete and future timelike geodesically incomplete. If \(\gamma<0\), then \(\operatorname{Aff}_{+}(\mathbb{R})\) is past timelike geodesically incomplete and future timelike geodesically complete._ _Thus in all cases \(\operatorname{Aff}_{+}(\mathbb{R})\) is timelike geodesically incomplete._ ## 9 Lorentzian length maximizers We prove that all extremal trajectories described in Sec. 8 are optimal, i.e., are Lorentzian length maximizers. The main tool is the following Hadamard's global diffeomorphism theorem. **Theorem 10** (Th. 
6.2.8 [10]).: _Let \(X\), \(Y\) be smooth manifolds and let \(F\,:\,X\to Y\) be a smooth mapping such that:_ 1. \(\dim X=\dim Y\)_,_ 2. \(X\) _and_ \(Y\) _are arcwise connected,_ 3. \(Y\) _is simply connected,_ 4. \(F\) _is nondegenerate,_ 5. \(F\) _is proper (i.e., preimage of a compact is a compact)._ _Then \(F\) is a diffeomorphism._ ### Diffeomorphic properties of the exponential mapping Denote the following open subset \(M\subset G\): \[K<0 \Rightarrow\quad M=\operatorname{int}D,\] \[K\geq 0 \Rightarrow\quad M=\operatorname{int}J^{+}.\] The set \(M\cong\mathbb{R}^{2}\) will serve as the domain of the exponential mapping \(\operatorname{Exp}\,:\,N\to G\), in view of the following theorem. **Theorem 11**.: 1. \(\operatorname{Exp}(N)\subset M\)_._ 2. Exp : \(N\to M\) _is a diffeomorphism._ 3. _For any_ \(\lambda_{0}\in N\) _and any_ \(t_{1}\in(0,t_{\max}(\lambda_{0}))\) _the extremal trajectory_ \(\mathrm{Exp}(\lambda_{0},t)\)_,_ \(t\in[0,t_{1}]\)_, is optimal._ Proof.: We consider only the case \(K<0\) since the case \(K\geq 0\) is more simple and are treated similarly. So let \(K<0\), then \[M=\mathrm{int}\,D=\{q\in G\mid\lambda_{1}(q)<0<\lambda_{2}(q), \ \lambda_{3}(q)>0\}, \tag{9.1}\] \[N=\left\{(\rho,\tau)\in\mathbb{R}^{2}\mid\rho\in\left(-\frac{ \pi}{2},\frac{\pi}{2}\right),\ \tau\in\left(\rho,\frac{\pi}{2}\right)\right\}. \tag{9.2}\] Since \(\delta>0\) by virtue of (4.14) and \(\delta^{2}-\gamma^{2}>0\) by virtue of \(K<0\), then \(\delta>|\gamma|\). Further, we have factorizations along arclength-parametrized timelike geodesics \((x(t),y(t))\) given by item (1) of Propos. 4: \[\lambda_{1}(x(t),y(t)) =\frac{2}{\delta-\gamma}\frac{\sin\left(\frac{\pi}{4}-\frac{\rho }{2}\right)\sin\left(\frac{\rho-\tau}{2}\right)}{\sin\left(\frac{\pi}{4}+ \frac{\tau}{2}\right)}, \tag{9.3}\] \[\lambda_{2}(x(t),y(t)) =-\frac{2}{\delta+\gamma}\frac{\sin\left(\frac{\pi}{4}+\frac{ \rho}{2}\right)\sin\left(\frac{\rho-\tau}{2}\right)}{\sin\left(\frac{\pi}{4}- \frac{\tau}{2}\right)},\] (9.4) \[\lambda_{3}(x(t),y(t)) =\frac{2}{\delta-\gamma}\frac{\sin\left(\frac{\pi}{4}+\frac{\rho }{2}\right)\cos\left(\frac{\rho-\tau}{2}\right)}{\sin\left(\frac{\pi}{4}+\frac {\tau}{2}\right)}. \tag{9.5}\] (1) Factorizations (9.3)-(9.5) and equalities (9.1), (9.2) imply immediately that \(\mathrm{Exp}(N)\subset M\). (2) We apply Th. 10 to the mapping Exp : \(N\to M\). Both \(N\) and \(M\) are diffeomorphic to \(\mathbb{R}^{2}\). The Jacobian of the exponential mapping is \(\frac{\partial\left(x,y\right)}{\partial\left(\tau,\rho\right)}=-\lambda\frac {\cos\rho\sin(\rho-\tau)}{\cos^{2}\tau}<0\) on \(N,\) thus \(\mathrm{Exp}\,:\,N\to M\) is nondegenerate. Finally, factorizations (9.3)-(9.5) imply that if \((\rho,\tau)\to\partial N\), then \((x,y)=\mathrm{Exp}(\rho,\tau)\to\partial M\), thus \(\mathrm{Exp}\,:\,N\to M\) is proper. Consequently, Exp : \(N\to M\) is a diffeomorphism. (3) Let \(\lambda_{0}\in N\), and let \(t_{1}\in(0,t_{\max}(\lambda_{0}))\). Let us prove that the trajectory \(q(t)=\mathrm{Exp}(\lambda_{0},t)\), \(t\in[0,t_{1}]\), is optimal. We have \(q_{1}=q(t_{1})=\mathrm{Exp}(\lambda,t_{1})\in M\). Moreover, by item (2) of this theorem \(q(t)\), \(t\in[0,t_{1}]\), is a unique arclength-parametrized geodesic connecting Id to \(q_{1}\). By item (1) of Th. 9 there exists an optimal trajectory connecting these points, so it coincides with \(q(t)\), \(t\in[0,t_{1}]\). 
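Before the exponential mapping is inverted in the next subsection, the closed-form parametrization of Propositions 3 and 4 can be sanity-checked numerically. The following Python sketch is not part of the paper: it takes arbitrary illustrative values of \(\alpha,\beta,\gamma,\delta,\psi_{0}\) with \(K<0\) (any \(\delta>|\gamma|\) will do), evaluates \(\psi(t)\) and \((x(t),y(t))\) from the explicit formulas, reproduces the same endpoint by quadrature of (8.22)-(8.23), and checks the hyperbola identity of Proposition 4(1).

```python
import numpy as np

# Illustrative parameters only (not taken from the paper): any alpha, beta and
# gamma, delta with delta > |gamma| > 0 give the case K < 0 of Propositions 3, 4.
alpha, beta, gamma, delta = 0.7, -0.3, 0.4, 1.2
psi0 = 0.5                                     # initial value psi(0) of the vertical coordinate

Delta = np.sqrt(delta**2 - gamma**2)           # (8.9)
theta = np.arctanh(gamma/delta)                # delta = Delta*cosh(theta), gamma = Delta*sinh(theta)
rho   = np.arctan(np.sinh(psi0 + theta))       # (8.11)
lam   = (alpha*delta - beta*gamma)/Delta**2    # (8.21)
nu    = (beta*delta - alpha*gamma)/Delta**2

def psi(t):
    # Proposition 3, case K < 0: psi(t) = arsinh tan(Delta*t + rho) - theta
    return np.arcsinh(np.tan(Delta*t + rho)) - theta

def exp_map(t):
    # Proposition 4, case K < 0: closed-form geodesic (x(t), y(t))
    tau = Delta*t + rho
    x = np.cos(rho)*(lam*(np.tan(tau) - np.tan(rho)) + nu*(1.0/np.cos(tau) - 1.0/np.cos(rho)))
    y = np.cos(rho)/np.cos(tau)
    return x, y

# Reproduce (x, y) by quadrature of (8.22)-(8.23) and compare with the closed form.
t1 = 0.5*(np.pi/2 - rho)/Delta                 # strictly inside the maximal interval of Corollary 2
ts = np.linspace(0.0, t1, 20001)
ps = psi(ts)
k  = alpha*np.cosh(ps) + beta*np.sinh(ps)
l  = gamma*np.cosh(ps) + delta*np.sinh(ps)
dt = np.diff(ts)
L  = np.concatenate(([0.0], np.cumsum(0.5*(l[1:] + l[:-1])*dt)))   # L(t) = int_0^t l(s) ds
integrand = k*np.exp(L)
x_num = np.sum(0.5*(integrand[1:] + integrand[:-1])*dt)            # x(t1) = int_0^t1 k e^L ds
y_num = np.exp(L[-1])

x_cf, y_cf = exp_map(t1)
print("quadrature :", x_num, y_num)
print("closed form:", x_cf, y_cf)

# The geodesic stays on the hyperbola y^2 - (w + sin rho)^2 = cos^2 rho, w = (x - nu*(y-1))/lam.
w = (x_cf - nu*(y_cf - 1.0))/lam
print("hyperbola residual:", y_cf**2 - (w + np.sin(rho))**2 - np.cos(rho)**2)
```

The two printed pairs should agree to several decimal places, and the hyperbola residual should be at the level of machine precision.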
### Inverse of the exponential mapping and optimal synthesis **Theorem 12**.: _The inverse of the exponential mapping \(\mathrm{Exp}^{-1}\,:\,M\to N\), \((x_{1},y_{1})\mapsto(\psi_{0},t_{1})\) is given as follows._ 1. _If_ \(K<0\)_, then_ \[t_{1}=\frac{\tau-\rho}{\Delta},\qquad\psi_{0}=\mathrm{arsinh} \tan\rho-\theta,\] (9.6) \[\tau=\arcsin\left(\frac{y_{1}^{2}+w^{2}-1}{2y_{1}w}\right),\qquad \rho=\arcsin\left(\frac{y_{1}^{2}-w^{2}-1}{2w}\right),\qquad w=\frac{x_{1}-\nu (y_{1}-1)}{\lambda}.\] (9.7) 2. _Let_ \(K>0\)_, and let_ \[w=\frac{x_{1}+\nu(y_{1}-1)}{\lambda},\qquad s_{1}=\mathrm{sgn}\,\gamma.\] (9.8) (2.1) _If_ \(w=0\)_, then_ \(t_{1}=s_{1}\frac{\ln y_{1}}{\Delta}\)_,_ \(\psi_{0}=-\theta\)_._ 2. _If_ \(w\neq 0\)_, then_ \[t_{1}=s_{1}\frac{\rho-\tau}{\Delta},\qquad\psi_{0}=\mathrm{ arcosh}\coth\rho-\theta,\] \[\tau=\mathrm{arcosh}\left(s_{2}\frac{1-y_{1}^{2}-w^{2}}{2y_{1}w} \right),\qquad\rho=\mathrm{arcosh}\left(s_{2}\frac{1-y_{1}^{2}+w^{2}}{2w} \right),\qquad s_{2}=\mathrm{sgn}(\lambda w).\] (9.9) 3. _If_ \(K=0\)_, then_ \[\begin{split}& t_{1}=\frac{\rho-\tau}{\gamma},\qquad\psi_{0}=-s_{1} \ln\rho,\\ &\tau=\sqrt{\frac{w}{y_{1}-y_{1}^{2}}},\qquad\rho=\sqrt{\frac{wy_{ 1}}{1-y_{1}}},\\ & w=\frac{x_{1}-g(1-y_{1})}{f},\qquad f=-\frac{\alpha-s_{1}\beta }{2\gamma},\qquad g=-\frac{\alpha+s_{1}\beta}{2\gamma},\qquad s_{1}=\operatorname {sgn}\gamma.\end{split}\] (9.10) _For any \((x_{1},y_{1})\in M\), there is a unique arclength-parametrized optimal trajectory connecting \(\operatorname{Id}\) to \((x_{1},y_{1})\), and it is \(q(t)=\operatorname{Exp}(\psi_{0},t)\), \(t\in[0,t_{1}]\)._ Proof.: We consider only the case \(K<0\). Then the parametrization of Lorentzian geodesics given by item (1) of Propos. 4 yields \[\begin{split}&\sin\rho=y_{1}\sin\tau-w,\qquad\cos\rho=y_{1}\cos \tau,\\ & 1=\sin^{2}\rho+\cos^{2}\rho=y_{1}^{2}-2y_{1}w\sin\tau,\\ &\sin\tau=\frac{y_{1}^{2}+w^{2}-1}{2y_{1}w},\qquad\sin\rho=\frac{ y_{1}^{2}-w^{2}-1}{2w},\end{split}\] and formulas of item (1) of this theorem follow since \(\tau,\rho\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\). **Theorem 13**.: _Let \(K<0\). If \(q_{1}\in F\), then there is no Lorentzian length maximizer connecting \(q_{0}\) to \(q_{1}\)._ Proof.: Lightlike extremal trajectories starting at \(q_{0}\) fill the set \(\partial J^{+}=\{q\in G\mid\lambda_{1}(q)\lambda_{2}(q)=0\}\). By item (1) of Th. 11, timelike extremal trajectories starting at \(q_{0}\) fill the domain \(\operatorname{int}D=\{q\in G\mid\lambda_{1}(q)<0<\lambda_{2}(q),\ \lambda_{3}(q)>0\}\). Thus extremal trajectories starting at \(q_{0}\) do not intersect the set \(F=\{q\in G\mid\lambda_{3}(q)=0\}\). By PMP, there is no optimal trajectory connecting \(q_{0}\) to a point \(q_{1}\in F\). **Remark 8**.: _The reasoning of the preceding theorem applied to the set \(E=\{q\in G\mid\lambda_{3}(q)>0\}\) proves once more that there are no Lorentzian length maximizers connecting \(q_{0}\) to points in \(E\), in addition to item (2) of Th. 9._ ## 10 Lorentzian distance and spheres We describe explicitly the Lorentzian distance \(d(q)=d(\operatorname{Id},q)\) and spheres \(S(R)=\{q\in G\mid d(q)=R\}\), \(R\in[0,+\infty]\). ### The case \(K<0\) **Theorem 14**.: _Let \(K<0\) and let \(q_{1}=(x_{1},y_{1})\in G\)._ 1. _If_ \(q_{1}\notin J^{+}\)_, then_ \(d(q_{1})=0\)_._ 2. _If_ \(q_{1}\in\partial J^{+}\)_, then_ \(d(q_{1})=0\)_._ 3. _If_ \(q_{1}\in\operatorname{int}D\)_, then_ \(d(q_{1})=\frac{\tau-\rho}{\Delta}\)_, where_ \(\tau\)_,_ \(\rho\) _are given by (_9.7_). 
In particular,_ \[d(\operatorname{int}D)=\left(0,\frac{\pi}{\Delta}\right).\] (10.1) 4. _If_ \(q_{1}\in F\)_, then_ \(d(q_{1})=\frac{\pi}{\Delta}\)_._ 5. _If_ \(q_{1}\in E\)_, then_ \(d(q_{1})=+\infty\) Proof.: (1) follows from the definition of Lorentzian distance \(d\). (2) follows since the only trajectories connecting \(\mathrm{Id}\) to \(q_{1}\in\partial J^{+}\) are lightlike by item (1) of Th. 11. (3) follows from item (1) of Th. 11. (4) Let \(q_{1}\in F\). Take any sequence \((\tau^{n},\rho^{n})\in N\) such that \(\tau^{n}\to\frac{\pi}{2}-0\), \(\rho^{n}\to-\frac{\pi}{2}+0\), \(\frac{\tau^{n}+\pi/2}{\rho^{n}+\pi/2}\to+\infty\). Then the parametrization of the exponential mapping (8.19), (8.20) implies that the point \(q^{n}=\mathrm{Exp}(\tau^{n},\rho^{n})\in\mathrm{int}\,D\) and \(q^{n}\to B=\{y=\lambda_{3}(q)=0\}\). By item (3) of this theorem, \(d(q^{n})=\frac{\tau^{n}-\rho^{n}}{\Delta}\to\frac{\pi}{\Delta}\). Considering a trajectory of the field \(X_{1}=y\frac{\partial}{\partial\,x}\) starting at \(q^{n}\) and terminating at the ray \(F\), we get the bound \(\left.d\right|_{F}\geq\frac{\pi}{\Delta}\). Now we show that in fact \(\left.d\right|_{F}=\frac{\pi}{\Delta}\). To this end we cite the following basic property of Lorentzian distance. **Lemma 4** (Lemma 4.4 [4]).: _For Lorentzian distance \(d\) on a Lorentzian manifold, if \(d(p,q)<\infty\), \(p_{n}\to p\), and \(q_{n}\to q\), then \(d(p,q)\leq\liminf_{n\to\infty}d(p_{n},q_{n})\)._ _Also, if \(d(p,q)=\infty\), \(p_{n}\to p\), and \(q_{n}\to q\), then \(\lim_{n\to\infty}d(p_{n},q_{n})=\infty\)._ Take any point \(\bar{q}\in F\). Choose any sequence \(\mathrm{int}\,D\ni q^{n}\to\bar{q}\). If \(d(\bar{q})=\infty\), then Lemma 4 implies \(\lim_{n\to\infty}d(q_{0},q^{n})=\infty\), which contradicts the bound (10.1). Thus \(d(\bar{q})<\infty\). Then by Lemma 4\(d(q_{0},\bar{q})\leq\liminf_{n\to\infty}d(q_{0},q^{n})\leq\frac{\pi}{\Delta}\). So \(d(q_{0},\bar{q})=\frac{\pi}{\Delta}\). (5) follows from item (2) of Th. 9. The explicit description of Lorentzian length maximizers given by Th. 12 implies, via transformations of elementary functions, the following characterization of Lorentzian spheres centred at \(\mathrm{Id}\). **Corollary 4**.: _Let \(K<0\)._ 1. \(S(0)=\{q\in G\mid\lambda_{1}(q)\geq 0\text{ or }\lambda_{2}(q)\leq 0\}\)_._ 2. _If_ \(R\in(0,\frac{\pi}{\Delta})\)_, then_ \[S(R)=\{(x,y)\in G\mid w^{2}-(y-\cos\sigma)^{2}=\sin^{2}\sigma\},\qquad w=\frac {x-\nu(y-1)}{\lambda},\qquad\sigma=\Delta R,\] _it is an arc of a hyperbola noncompact in both directions._ 3. \(S(\frac{\pi}{\Delta})=F\)_._ 4. _If_ \(R\in(\frac{\pi}{\Delta},+\infty)\)_, then_ \(S(R)=\emptyset\)_._ 5. \(S(+\infty)=E\)_._ ### The case \(K>0\) **Theorem 15**.: _Let \(K>0\) and let \(q_{1}=(x_{1},y_{1})\in G\)._ 1. _If_ \(q_{1}\notin J^{+}\)_, then_ \(d(q_{1})=0\)_._ 2. _If_ \(q_{1}\in\partial J^{+}\)_, then_ \(d(q_{1})=0\)_._ 3. _If_ \(q_{1}\in\mathrm{int}\,J^{+}\cap\{w\neq 0\}\)_, then_ \(d(q_{1})=s_{1}\frac{\rho-\tau}{\Delta}\)_, where_ \(s_{1}\)_,_ \(w\)_,_ \(\tau\)_,_ \(\rho\) _are given by_ \((\ref{eq:1})\)_,_ \((\ref{eq:2})\)_. In particular,_ \(d(\mathrm{int}\,J^{+}\cap\{w\neq 0\})=(0,+\infty)\)_._ 4. _If_ \(q_{1}\in\mathrm{int}\,J^{+}\cap\{w=0\}\)_, then_ \(d(q_{1})=s_{1}\frac{\ln y_{1}}{\Delta}\)_, where_ \(s_{1}\) _is given by_ \((\ref{eq:1})\)_. In particular,_ \(d(\mathrm{int}\,J^{+}\cap\{w=0\})=(0,+\infty)\)_._ Proof.: Similarly to the proof of Th. 14. **Corollary 5**.: _Let \(K>0\)._ 1. 
\(S(0)=\{q\in G\mid\lambda_{1}(q)\geq 0\text{ or }\lambda_{2}(q)\leq 0\}\)_._ 2. _If_ \(R\in(0,+\infty)\)_, then_ \[S(R)=\{(x,y)\in G\mid(y-\cosh\sigma)^{2}-w^{2}=\sinh^{2}\sigma\},\qquad w=\frac{x+ \nu(y-1)}{\lambda},\qquad\sigma=s_{1}\Delta R,\qquad s_{1}=\operatorname{sgn}\delta,\] _it is an arc of a hyperbola noncompact in both directions._ 3. \(S(+\infty)=\emptyset\)_._ Proof.: Similarly to the proof of Cor. 4. ### The case \(K=0\) **Theorem 16**.: _Let \(K=0\) and let \(q_{1}\in G\)._ 1. _If_ \(q_{1}\notin J^{+}\)_, then_ \(d(q_{1})=0\)_._ 2. _If_ \(q_{1}\in\partial J^{+}\)_, then_ \(d(q_{1})=0\)_._ 3. _If_ \(q_{1}\in\operatorname{int}J^{+}\)_, then_ \(d(q_{1})=\frac{\rho-\tau}{\gamma}\)_, where_ \(\tau\)_,_ \(\rho\) _are given by_ (9.10)_. In particular,_ \(d(\operatorname{int}J^{+})=(0,+\infty)\) _._ Proof.: Similarly to the proof of Th. 14. **Corollary 6**.: _Let \(K=0\)._ 1. \(S(0)=\{q\in G\mid\lambda_{1}(q)\geq 0\text{ or }\lambda_{2}(q)\leq 0\}\)_._ 2. _If_ \(R\in(0,+\infty)\)_, then_ \[S(R)=\{(x,y)\in G\mid(w+\sigma^{2})y=w\},\qquad w=\frac{x+g(y-1)}{f},\qquad \sigma=\gamma R,\] _it is an arc of a hyperbola noncompact in both directions._ 3. \(S(+\infty)=\emptyset\)_._ Proof.: Similarly to the proof of Cor. 4. ### Regularity of Lorentzian distance **Corollary 7**.: _We have \(d\in C^{\omega}(M)\cap C(\operatorname{cl}D)\)._ Proof.: We consider only the case \(K<0\). If \(q_{1}\in M=\operatorname{int}D\), then item (3) of Th. 14 gives \(d(q_{1})=\frac{\tau_{1}-\rho_{1}}{\Delta}\), and the functions \(\tau_{1}\), \(\rho_{1}\) are real-analytic since \(\operatorname{Exp}^{-1}\,:\,\operatorname{int}D\to N\) is real-analytic by virtue of the inverse function theorem for real-analytic mappings. In order to show the inclusion \(d\in C(\operatorname{cl}D)\), it remains to prove continuity of \(d\) on the boundary \(\partial D=\partial J^{+}\cup F\). If \(\operatorname{int}D\ni q^{n}\to q_{1}\in\partial J^{+}\), then by virtue of items (2), (3) of Th. 14 we have \(d(q^{n})\to 0=d(q_{1})\). And if \(\operatorname{int}D\ni q^{n}\to q_{1}\in F\), then similarly \(d(q^{n})\to\frac{\pi}{\Delta}=d(q_{1})\). Now we study asymptotics of the Lorentzian distance \(d(q)\) near the boundary of the domain \(M\). For a point \(q\in M\), denote by \(d_{M}(q)\) the Euclidean distance from \(q\) to \(\partial M\). The explicit expression for the Lorentzian distance in the domain \(M\) given by Theorems 14-16 implies that near smoothness points of \(\partial M\) the distance \(d(q)\) is Holder with exponent \(\frac{1}{2}\) of the distance \(d_{M}(q)\), similarly to the Minkowski plane. **Corollary 8**.: _Let \(\bar{q}\in\partial M\) be a point of smoothness of the curve \(\partial M\). Then_ \[d(q)=d(\bar{q})+f(\bar{q})\sqrt{d_{M}(q)}+O(d_{M}(q))^{3/2},\] \[M\ni q\to\bar{q},\qquad f(\bar{q})\neq 0.\] **Remark 9**.: _Alternative proofs of Corollaries 7, 8 follow by local isometry of \(\operatorname{Aff}_{+}(\mathbb{R})\) with standard constant curvature Lorentzian manifolds \(\mathbb{R}^{2}_{1}\), \(\mathbb{S}^{2}_{1}\), \(\widetilde{\mathbb{H}}^{2}_{1}\)._ ## 11 Isometries ### Infinitesimal isometries of Lorentzian manifolds We recall some necessary facts of Lorentzian (in fact, pseudo-Riemannian geometry) [13]. A vector field \(X\) on a Lorentzian manifold \((M,g)\) is called a Killing vector field (or an infinitesimal isometry) if \(L_{X}g=0\). **Proposition 5** ([13], Propos. 
23).: _A vector field \(X\) is Killing iff the mappings \(\psi_{t}\) of its local flow satisfy \(\psi_{t}^{*}g=g\), where \(\psi_{t}\,:\,M\to M\) is the shift of \(M\) along \(X\) by time \(t\)._ **Corollary 9**.: _A vector field \(X\) is Killing iff \(d(q_{1},q_{2})=d(\psi_{t}(q_{1}),\psi_{t}(q_{2}))\) for all \(q_{1},q_{2}\in M\) and all \(t\) for which the right-hand side is defined._ **Proposition 6** ([13], Propos. 25).: _A vector field \(X\) is Killing iff_ \[Xg(V,W)=g([X,V],W)+g(V,[X,W]),\qquad V,W\in\mathrm{Vec}(M). \tag{11.1}\] Denote by \(i(M)\) the set of Killing vector fields on a Lorentzian manifold \(M\). The set \(i(M)\) is a Lie algebra over \(\mathbb{R}\) w.r.t. Lie bracket of vector fields. **Lemma 5** ([13], Lemma 28).: _The Lie algebra \(i(M)\) on a connected Lorentzian manifold \(M\), \(\dim M=n\), has dimension at most \(\frac{n(n+1)}{2}\)._ **Remark 10**.: _Let \(M\) be a connected Lorentzian manifold of dimension \(n\). Then \(\dim i(M)=\frac{n(n+1)}{2}\) iff \(M\) has constant curvature (Exercises \(14\), \(15\)[13])._ Denote by \(I(M)\) the set of all isometries of a Lorentzian manifold \(M\). **Theorem 17** ([13], Theorem 32).: \(I(M)\) _is a Lie group._ Denote by \(ci(M)\) the set of all complete Killing vector fields on \(M\). **Proposition 7** ([13], Propos. 33).: 1. \(ci(M)\) _is a Lie subalgebra of_ \(i(M)\)_._ 2. _There is a Lie anti-isomorphism between the Lie algebra of the Lie group_ \(I(M)\) _and the Lie algebra_ \(ci(M)\)_._ Denote by \(I_{0}(M)\) the connected component of the identity in the Lie group \(I(M)\). ### Killing vector fields and isometries of \(\mathrm{Aff}_{+}(\mathbb{R})\) We compute the Lie algebra of Killing vector fields for left-invariant Lorentzian structures on \(G=\mathrm{Aff}_{+}(\mathbb{R})\). By Th. 4, such Lorentzian structures have constant curvature. By Remark 10, \[\dim i(G)=3. \tag{11.2}\] Left translations on the Lie group \(G\) are obvious isometries. They are generated by right-invariant vector fields on \(G\): \[\widetilde{X}_{1}(q)=R_{q*}X_{1}(\mathrm{Id})=\frac{\partial}{\partial\,x}, \qquad\widetilde{X}_{2}(q)=R_{q*}X_{2}(\mathrm{Id})=x\frac{\partial}{\partial \,x}+y\frac{\partial}{\partial\,y},\] where \(R_{q}\,:\,\bar{q}\mapsto\bar{q}q\) is the right translation on \(G\). Since \([\widetilde{X}_{i},X_{j}]=0\), Propos. 11.1 implies that \(\widetilde{X}_{1}\), \(\widetilde{X}_{2}\) are Killing vector fields. By virtue of (11.2), in order to describe the 3D Lie algebra \(i(G)\) it remains to find just one Killing vector field linearly independent on \(\widetilde{X}_{1}\), \(\widetilde{X}_{2}\). **Lemma 6**.: _If \(X\in\mathrm{Vec}(G)\) is a Killing vector field such that \(X(\mathrm{Id})=0\), then \(X\) is tangent to Lorentzian spheres \(S(R)\), \(R\in[0,+\infty]\)._ Proof.: Local flow of \(X\) preserves the Lorentzian distance \(d(\mathrm{Id},q)\), thus the Lorentzian spheres as well. **Lemma 7**.: _The following vector field is tangent to Lorentzian spheres \(S(R)\), \(R\in(0,+\infty)\):_ 1. \(K<0\) \(\Rightarrow\) \(X_{-}=\left(y^{2}+w^{2}\right)\frac{\partial}{\partial\,w}+2wy\frac{\partial}{ \partial\,y}=\left(\lambda(y^{2}+w^{2}-1)+2\nu wy\right)\frac{\partial}{\partial \,x}+2wy\frac{\partial}{\partial\,y}\)_,_ \(w=\frac{x-\nu(y-1)}{\lambda}\)_,_ 2. 
\(K>0\) \(\Rightarrow\) \(X_{+}=\left(y^{2}+w^{2}\right)\frac{\partial}{\partial\,w}+2wy\frac{\partial }{\partial\,y}=\left(\lambda(y^{2}+w^{2}-1)-2\nu wy\right)\frac{\partial}{ \partial\,x}+2wy\frac{\partial}{\partial\,y}\)_,_ \(w=\frac{x+\nu(y-1)}{\lambda}\)_,_ 3. \(K=0\) \(\Rightarrow\) \(X_{0}=w\frac{\partial}{\partial\,w}+y(1-y)\frac{\partial}{\partial\,y}=\left( x+g(y^{2}-1)\right)\frac{\partial}{\partial\,x}+y(1-y)\frac{\partial}{ \partial\,y}\)_,_ \(w=\frac{x+g(y-1)}{f}\)_._ Proof.: Follows from the explicit parametrization of the spheres \(S(R)\), \(R\in(0,+\infty)\), see Corollaries 4, 5, 6 respectively. **Theorem 18**.: _Let \(K\neq 0\). Then \(i(G)=\operatorname{span}(\widetilde{X}_{1},\widetilde{X}_{2},X_{\pm})\), where \(\pm=\operatorname{sgn}K\), and \(X_{\pm}\) is given by items \((1)\), \((2)\) of Lemma 7. The table of Lie brackets in this Lie algebra is \([\widetilde{X}_{1},\widetilde{X}_{2}]=\widetilde{X}_{1}\), \([\widetilde{X}_{1},X_{\pm}]=\mp\frac{2\nu}{\lambda}\widetilde{X}_{1}+\frac{2 }{\lambda}\widetilde{X}_{2}\), \([\widetilde{X}_{2},X_{\pm}]=\frac{2(\lambda^{2}-\nu^{2})}{\lambda}\widetilde {X}_{1}\pm\frac{2\nu}{\lambda}\widetilde{X}_{2}+X_{\pm}\). The Lie algebra \(i(G)\) is isomorphic to the Lie algebra \(\mathfrak{sl}(2)\) of the Lie group \(\operatorname{SL}(2)\) of unimodular \(2\times 2\) matrices._ Proof.: The vector field \(X_{\pm}\) satisfies identity (11.1), thus it is Killing. Since \(X_{\pm}\) is linearly independent of \(\widetilde{X}_{1}\), \(\widetilde{X}_{2}\) and \(\dim i(G)=3\), it follows that \(i(G)=\operatorname{span}(\widetilde{X}_{1},\widetilde{X}_{2},X_{\pm})\). The table of Lie brackets in this Lie algebra is verified immediately. Moreover, these Lie brackets imply that the Lie algebra \(i(G)\) is simple, thus it is isomorphic to \(\mathfrak{sl}(2)\) or \(\mathfrak{so}(3)\), see the classification of 3D Lie algebras in [7]. But \(i(G)\) contains a 2D Lie subalgebra spanned by \(\widetilde{X}_{1}\), \(\widetilde{X}_{2}\), which is impossible in \(\mathfrak{so}(3)\). Thus \(i(G)\cong\mathfrak{sl}(2)\). **Theorem 19**.: _Let \(K=0\). Then \(i(G)=\operatorname{span}(\widetilde{X}_{1},\widetilde{X}_{2},X_{0})\), where \(X_{0}\) is given by item \((3)\) of Lemma 7. The table of Lie brackets in this Lie algebra is \([\widetilde{X}_{1},\widetilde{X}_{2}]=\widetilde{X}_{1}\), \([\widetilde{X}_{1},X_{0}]=\widetilde{X}_{1}\), \([\widetilde{X}_{2},X_{0}]=2g\widetilde{X}_{1}-\widetilde{X}_{2}+X_{0}\). The Lie algebra \(i(G)\) is isomorphic to the Lie algebra \(\mathfrak{sh}(2)\) of the Lie group \(\operatorname{SH}(2)\) of hyperbolic motions of the plane._ Proof.: Similarly to the proof of Th. 18. **Proposition 8**.: 1. \(ci(G)=\operatorname{span}(\widetilde{X}_{1},\widetilde{X}_{2})\)_._ 2. \(I_{0}(\operatorname{Aff}_{+}(\mathbb{R}))=\{L_{q}\mid q\in\operatorname{Aff}_{ +}(\mathbb{R})\}\cong\operatorname{Aff}_{+}(\mathbb{R})\)_._ Proof.: Item \((1)\). The vector fields \(\widetilde{X}_{1}\), \(\widetilde{X}_{2}\) are complete. Although, each vector field \(X_{0}\), \(X_{\pm}\) is not complete. Item \((2)\). By virtue of Propos. 7 and item \((1)\) of this proposition, the Lie algebra of the Lie group \(I_{0}(\operatorname{Aff}_{+}(\mathbb{R}))\) is anti-isomorphic to \(ci(G)=\operatorname{span}(\widetilde{X}_{1},\widetilde{X}_{2})\). Isometric embedding of \(\operatorname{Aff}_{+}(\mathbb{R})\) into \(\mathbb{R}_{1}^{2}\) in the case \(K=0\) **Theorem 20**.: _Let \(K=0\). 
The mapping \(i\,:\,\operatorname{Aff}_{+}(\mathbb{R})\to\Pi\subset\mathbb{R}_{1}^{2}\), \(\Pi=\left\{(\widetilde{x},\widetilde{y})\in\mathbb{R}_{1}^{2}\mid s_{1}\widetilde{y}+\widetilde{x}<1/\gamma\right\}\),_
\[i(x,y)=(\widetilde{x},\widetilde{y})=\left(\frac{1}{2}\left(\frac{y-1}{y}-\frac{w}{\gamma}\right),\frac{s_{1}}{2}\left(\frac{y-1}{y}+\frac{w}{\gamma}\right)\right),\tag{11.3}\]
_is an isometry._

Proof.: We give a proof for the problem \(P_{3}\); in the general case \(K=0\) the proof is similar. For the problem \(P_{3}\) we have \(\Pi=\left\{(\widetilde{x},\widetilde{y})\in\mathbb{R}_{1}^{2}\mid\widetilde{y}+\widetilde{x}<1\right\}\),
\[i(x,y)=(\widetilde{x},\widetilde{y})=\left(\frac{1}{2}\left(1-\frac{1}{y}+x\right),\frac{s_{1}}{2}\left(1-\frac{1}{y}-x\right)\right).\tag{11.4}\]

Let \(q_{j}=(x_{j},y_{j})\in\operatorname{Aff}_{+}(\mathbb{R})\), \(\widetilde{q}_{j}=i(q_{j})=(\widetilde{x}_{j},\widetilde{y}_{j})\in\mathbb{R}_{1}^{2}\), \(j=1,2\). Immediate computation on the basis of (11.4) shows that \(\widetilde{q}_{j}\in\Pi\), \(j=1,2\). We prove that
\[\widetilde{d}(\widetilde{q}_{1},\widetilde{q}_{2})=d(q_{1},q_{2}),\tag{11.5}\]
where \(d\) and \(\widetilde{d}\) are the Lorentzian distances in \(\operatorname{Aff}_{+}(\mathbb{R})\) and \(\mathbb{R}_{1}^{2}\) respectively.

First we show that
\[d(q_{1},q_{2})\neq 0\iff\widetilde{d}(\widetilde{q}_{1},\widetilde{q}_{2})\neq 0.\tag{11.6}\]
Denote \(\bar{q}=q_{1}^{-1}q_{2}=(\bar{x},\bar{y})=((x_{2}-x_{1})/y_{1},y_{2}/y_{1})\). Then
\[d(q_{1},q_{2})\neq 0\iff d(\mathrm{Id},\bar{q})\neq 0\iff\bar{x}>0,\ \bar{y}>1\iff x_{2}>x_{1},\ y_{2}>y_{1}.\]
On the other hand,
\[\widetilde{d}(\widetilde{q}_{1},\widetilde{q}_{2})\neq 0\iff\widetilde{x}_{2}-\widetilde{x}_{1}>|\widetilde{y}_{2}-\widetilde{y}_{1}|\iff\begin{cases}\widetilde{x}_{2}-\widetilde{x}_{1}>\widetilde{y}_{2}-\widetilde{y}_{1},\\ \widetilde{x}_{2}-\widetilde{x}_{1}>\widetilde{y}_{1}-\widetilde{y}_{2}\end{cases}\]
\[\iff\begin{cases}x_{2}-\frac{1}{y_{2}}-x_{1}+\frac{1}{y_{1}}>-x_{1}+x_{2}-\frac{1}{y_{1}}+\frac{1}{y_{2}},\\ x_{2}-\frac{1}{y_{2}}-x_{1}+\frac{1}{y_{1}}>x_{1}-x_{2}+\frac{1}{y_{1}}-\frac{1}{y_{2}}\end{cases}\quad\iff\begin{cases}\frac{1}{y_{1}}>\frac{1}{y_{2}},\\ x_{1}-x_{2}<0,\end{cases}\]
and (11.6) follows.

Now let \(d(q_{1},q_{2})\neq 0\), \(\widetilde{d}(\widetilde{q}_{1},\widetilde{q}_{2})\neq 0\), and we prove equality (11.5). We have
\[d^{2}(q_{1},q_{2})=d^{2}(\mathrm{Id},\bar{q})=\left(\sqrt{\frac{\bar{x}\bar{y}}{\bar{y}-1}}-\sqrt{\frac{\bar{x}}{\bar{y}(\bar{y}-1)}}\right)^{2}=\frac{\bar{x}(\bar{y}-1)}{\bar{y}}=\frac{(x_{2}-x_{1})(y_{2}-y_{1})}{y_{1}y_{2}},\]
\[\widetilde{d}^{2}(\widetilde{q}_{1},\widetilde{q}_{2})=(\widetilde{x}_{2}-\widetilde{x}_{1})^{2}-(\widetilde{y}_{2}-\widetilde{y}_{1})^{2}=\frac{1}{4}\left(-\frac{1}{y_{2}}+x_{2}+\frac{1}{y_{1}}-x_{1}\right)^{2}-\frac{1}{4}\left(-\frac{1}{y_{2}}-x_{2}+\frac{1}{y_{1}}+x_{1}\right)^{2}=\frac{(x_{2}-x_{1})(y_{2}-y_{1})}{y_{1}y_{2}},\]
and equality (11.5) follows.

**Remark 11**.: _The explicit formulas (11.3) for the isometry \(i\,:\,\mathrm{Aff}_{+}(\mathbb{R})\to\Pi\) were discovered as follows. 
The exponential mappings for \(\mathrm{Aff}_{+}(\mathbb{R})\) in the case \(K=0\) and for the Minkowski plane \(\mathbb{R}_{1}^{2}\) have respectively the form:_ \[\mathrm{Exp}\ :\ \left(\begin{array}{c}\psi\\ t\end{array}\right)\mapsto\left(\begin{array}{c}x\\ y\end{array}\right)=\left(\begin{array}{c}\rho(f(\tau-\rho)+g\left(\frac{1}{ \rho}-\frac{1}{\tau}\right)\\ \frac{\rho}{\tau}\end{array}\right)\right),\qquad\widetilde{\mathrm{Exp}}\ :\ \left(\begin{array}{c} \widetilde{\psi}\\ \widetilde{t}\end{array}\right)\mapsto\left(\begin{array}{c}\widetilde{x}\\ \widetilde{y}\end{array}\right)=\left(\begin{array}{c}\widetilde{t}\cosh \widetilde{\psi}\\ \widetilde{t}\sinh\widetilde{\psi}\end{array}\right). \tag{11.7}\] _We set in these formulas \(t=\widetilde{t}\), \(\psi=\widetilde{\psi}\), and obtain (11.4)._ **Remark 12**.: _In the case \(K=0\) the group \(\mathrm{Aff}_{+}(\mathbb{R})\) cannot be isometric to the whole Minkowski space \(\mathbb{R}_{1}^{2}\) since the first is not geodesically complete (see Cor. 3), while the second is._ _It would be interesting to construct isometric embeddings of \(\mathrm{Aff}_{+}(\mathbb{R})\) to \(\mathbb{S}_{1}^{2}\) (\(\widetilde{\mathbb{H}}_{1}^{2}\)) in the case \(K>0\) (resp. \(K<0\)). This is more complicated since in this case the formulas analogous to (11.7) are more involved._ ## 12 Examples In this section we present detailed results for the problems \(P_{1}\)-\(P_{3}\) defined in Example 5. ### Problem \(P_{1}\) In this case \(K<0\). The causal future of the point \(\mathrm{Id}\) is \(J^{+}=\exp(\mathcal{U})=\{(x,y)\in G\mid x\geq|y-1|\}\), see Fig. 8. The group \(G\) is not globally hyperbolic since for \(q_{1}=(x_{1},y_{1})\in G\) with \(x_{1}>y_{1}+1\) the intersection \(J^{+}(\mathrm{Id})\cap J^{-}(q_{1})\) is not compact, see Fig. 9. Although, the domain \(\mathrm{int}\,D=\{(x,y)\in G\mid x>|y-1|,\ x<y+1\}\) is globally hyperbolic, see Fig. 10. **Theorem 21**.: _Let \(q_{1}=(x_{1},y_{1})\in M\setminus\{\mathrm{Id}\}\) for the problem \(P_{1}\)._ 1. _If_ \(x_{1}=|y_{1}-1|\)_, then_ \(x(t)=\pm(e^{\pm t}-1)\)_,_ \(y(t)=e^{\pm t}\)_,_ \(\pm=\mathrm{sgn}(y_{1}-1)\)_,_ \(t_{1}=\pm\ln y_{1}\)_,_ \(d(q_{1})=0\)_._ 2. _If_ \(x_{1}>|y_{1}-1|\)_, then_ \[x(t)=\cos\rho(\tan\tau-\tan\rho),\qquad y(t)=\frac{\cos\rho}{\cos\tau},\qquad \tau=\rho+t,\qquad t_{1}=\tau-\rho=d(q_{1}),\] \[\tau=\arcsin\frac{x_{1}^{2}+y_{1}^{2}-1}{2x_{1}y_{1}},\qquad\rho=\arcsin\frac{ y_{1}^{2}-x_{1}^{2}-1}{2x_{1}},\] _the curve_ \((x(t),y(t))\) _is an arc of the hyperbola_ \(y^{2}-(x-\sin\rho)^{2}=\cos^{2}\rho\) Figure 11: Lorentzian length maximizers in \(P_{1}\) Figure 12: Lorentzian spheres in \(P_{1}\) ### Problem \(P_{2}\) In this case \(K>0\). **Theorem 22**.: _Let \(q_{1}=(x_{1},y_{1})\in J^{+}\setminus\{q_{0}\}\) for the problem \(P_{2}\)._ 1. _If_ \(y_{1}-1=|x_{1}|\)_, then_ \(x(t)=\pm(e^{t}-1)\)_,_ \(y(t)=e^{t}\)_,_ \(\pm=\operatorname{sgn}x_{1}\)_,_ \(t_{1}=\ln y_{1}\)_,_ \(d(q_{1})=0\)_._ 2. _If_ \(x_{1}=0\)_, then_ \(x(t)\equiv 0\)_,_ \(y(t)=e^{t}\)_,_ \(t_{1}=\ln y_{1}=d(q_{1})\)_._ 3. 
_If_ \(0<|x_{1}|<y_{1}-1\)_, then_
\[x(t)=\pm(\sinh\rho\coth\tau-\cosh\rho),\quad y(t)=\frac{\sinh\rho}{\sinh\tau},\qquad\pm=\operatorname{sgn}x_{1},\qquad\tau=\rho-t,\]
\[\rho=\operatorname{arcosh}\frac{y_{1}^{2}-x_{1}^{2}-1}{2|x_{1}|},\qquad\tau=\operatorname{arcosh}\frac{x_{1}^{2}+y_{1}^{2}-1}{2|x_{1}|y_{1}},\qquad t_{1}=\rho-\tau=d(q_{1}),\]
_and the curve_ \((x(t),y(t))\) _is the arc of the hyperbola_ \((\pm x+\cosh\rho)^{2}-y^{2}=\sinh^{2}\rho\)_._

Figure 13: Plot of Lorentzian distance in \(P_{1}\)

Figure 16: Lorentzian length maximizers in \(P_{2}\)

Figure 17: Lorentzian spheres in \(P_{2}\)

Figure 18: Plot of Lorentzian distance in \(P_{2}\)

### Problem \(P_{3}\)

In this case \(K=0\).

**Theorem 23**.: _Let \(q_{1}=(x_{1},y_{1})\in J^{+}\setminus\{q_{0}\}\) for the problem \(P_{3}\)._

1. _If_ \(x_{1}=0\)_, then_ \(x(t)\equiv 0\)_,_ \(y(t)=e^{t}\)_,_ \(t_{1}=\ln y_{1}\)_,_ \(d(q_{1})=0\)_._
2. _If_ \(y_{1}=1\)_, then_ \(x(t)=t\)_,_ \(y(t)\equiv 1\)_,_ \(t_{1}=x_{1}\)_,_ \(d(q_{1})=0\)_._
3. _If_ \(x_{1}>0\) _and_ \(y_{1}>1\)_, then_ \(x(t)=\rho(\rho-\tau)\)_,_ \(y(t)=\frac{\rho}{\tau}\)_,_
\[\tau=\rho-t,\qquad\rho=\sqrt{\frac{x_{1}y_{1}}{y_{1}-1}},\qquad\tau=\sqrt{\frac{x_{1}}{y_{1}(y_{1}-1)}},\qquad t_{1}=\rho-\tau=d(q_{1}),\]
_and the curve_ \((x(t),y(t))\) _is the arc of the hyperbola_ \(x=\rho^{2}\left(1-\frac{1}{y}\right)\)_._

List of Figures:

* Figure 8: \(J^{+}\) for the problem \(P_{1}\)
* Figure 9: Problem \(P_{1}\): \(G\) is not globally hyperbolic
* Figure 10: Problem \(P_{1}\): \(D\) is globally hyperbolic
* Figure 11: Lorentzian length maximizers in \(P_{1}\)
* Figure 12: Lorentzian spheres in \(P_{1}\)
* Figure 13: Plot of Lorentzian distance in \(P_{1}\)
* Figure 14: \(J^{+}\) for the problem \(P_{2}\)
* Figure 15: Problem \(P_{2}\): \(G\) is globally hyperbolic
* Figure 16: Lorentzian length maximizers in \(P_{2}\)
* Figure 17: Lorentzian spheres in \(P_{2}\)
* Figure 18: Plot of Lorentzian distance in \(P_{2}\)
* Figure 19: \(J^{+}\) for the problem \(P_{3}\)
* Figure 20: Problem \(P_{3}\): \(G\) is globally hyperbolic
* Figure 21: Lorentzian length maximizers in \(P_{3}\)
* Figure 22: Lorentzian spheres in \(P_{3}\)
* Figure 23: Plot of Lorentzian distance in \(P_{3}\)
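As a companion to the examples above, the following Python sketch (not from the paper) evaluates the Lorentzian distance \(d(\operatorname{Id},q_{1})\) of Theorems 21-23 at interior points of the causal future for \(P_{1}\), \(P_{2}\), \(P_{3}\), and cross-checks the \(P_{3}\) formulas against the geodesic parametrization and the isometric embedding (11.4), with \(s_{1}=1\) assumed there; the sample points are arbitrary illustrations.

```python
import numpy as np

def d_P1(x1, y1):
    # Theorem 21(2), K < 0: interior points satisfy x1 > |y1 - 1|
    tau = np.arcsin((x1**2 + y1**2 - 1)/(2*x1*y1))
    rho = np.arcsin((y1**2 - x1**2 - 1)/(2*x1))
    return tau - rho

def d_P2(x1, y1):
    # Theorem 22(2)-(3), K > 0: interior points satisfy |x1| < y1 - 1
    if x1 == 0:
        return np.log(y1)
    rho = np.arccosh((y1**2 - x1**2 - 1)/(2*abs(x1)))      # argument >= 1 on |x1| < y1 - 1
    tau = np.arccosh((x1**2 + y1**2 - 1)/(2*abs(x1)*y1))
    return rho - tau

def d_P3(x1, y1):
    # Theorem 23(3), K = 0: interior points satisfy x1 > 0, y1 > 1
    rho = np.sqrt(x1*y1/(y1 - 1))
    tau = np.sqrt(x1/(y1*(y1 - 1)))
    return rho - tau

print("d_P1, d_P2 at sample points:", d_P1(0.81, 1.41), d_P2(1.0, 2.5))

# Consistency check for P3 at an arbitrary interior point: the geodesic of
# Theorem 23(3) evaluated at t1 = d(q1) must end at q1 itself.
x1, y1 = 1.3, 2.0
rho = np.sqrt(x1*y1/(y1 - 1))
tau = rho - d_P3(x1, y1)
print(np.isclose(rho*(rho - tau), x1), np.isclose(rho/tau, y1))

# The P3 distance also agrees with the flat Minkowski distance computed through
# the embedding (11.4), since d^2 = (x2 - x1)*(y2 - y1)/(y1*y2).
embed = lambda x, y: (0.5*(1 - 1/y + x), 0.5*(1 - 1/y - x))
a = embed(0.0, 1.0)            # image of Id
b = embed(x1, y1)
print(np.isclose(d_P3(x1, y1)**2, (b[0] - a[0])**2 - (b[1] - a[1])**2))
```

All three printed consistency checks should evaluate to True.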